CN111124149A - Input method and electronic equipment


Info

Publication number
CN111124149A
Authority
CN
China
Prior art keywords
input
electronic device
candidate
contents
content
Prior art date
Legal status
Pending
Application number
CN201911195337.1A
Other languages
Chinese (zh)
Inventor
康新龙
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911195337.1A
Publication of CN111124149A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 Character input methods
    • G06F 3/0236 Character input methods using selection techniques to select from displayed items
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The input method and electronic device provided by the embodiments of the present invention belong to the field of communication technology, and are intended to solve the problems that, in a conventional information input process, the input steps are cumbersome and the input information available for the user to select is limited. The method includes the following steps: a first electronic device receives a first input; in response to the first input, the first electronic device displays N candidate input contents; in response to a second input for a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to a second electronic device and displays the target candidate input content in a first session window between the first electronic device and the second electronic device. The N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device.

Description

Input method and electronic equipment
Technical Field
Embodiments of the present invention relate to the field of communication technology, and in particular to an input method and an electronic device.
Background
With the development of communication technology, electronic devices (such as smartphones) have become primary communication tools. Besides basic voice communication, text communication also plays an important role, and the input method is an essential tool for text communication, so the accuracy and speed of text input are receiving more and more attention from users.
In a conventional information input process, when typing is inconvenient, the user usually enters a few characters in the input box so that the electronic device can filter, from the historical input records of that input box, the historical input information related to those characters for the user to select. The whole input process requires manual participation by the user, the input steps are cumbersome, and the input information available for the user to select is limited.
Disclosure of Invention
The input method and electronic device provided by the embodiments of the present invention solve the problems that, in a conventional information input process, the input steps are cumbersome and the input information available for the user to select is limited.
To solve the above technical problem, the present application is implemented as follows:
In a first aspect, an input method provided in an embodiment of the present invention includes: a first electronic device receives a first input; in response to the first input, the first electronic device displays N candidate input contents; in response to a second input for a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to a second electronic device, and displays the target candidate input content in a first session window between the first electronic device and the second electronic device. The N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device.
In a second aspect, an embodiment of the present invention further provides a first electronic device that includes a receiving module, a display module, and a sending module. The receiving module is configured to receive a first input; the display module is configured to display N candidate input contents in response to the first input received by the receiving module; the sending module is configured to send, in response to a second input for a target candidate input content among the N candidate input contents, the target candidate input content to a second electronic device; the display module is further configured to display the target candidate input content in a first session window between the first electronic device and the second electronic device. The N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device.
In a third aspect, an embodiment of the present invention provides an electronic device that includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the input method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the input method according to the first aspect.
In the embodiments of the present invention, after receiving a first input for triggering the input function of a first session window of a first application program, the first electronic device directly displays N candidate input contents for the user to select. After the user performs a second input on a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to a second electronic device and displays it in the first session window between the first electronic device and the second electronic device. Because the N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device, richer candidate input content is provided for the user, and the overall input efficiency is improved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an input method according to an embodiment of the present invention;
fig. 3 is a first schematic diagram of an interface to which an input method according to an embodiment of the present invention is applied;
fig. 4 is a second schematic diagram of an interface to which an input method according to an embodiment of the present invention is applied;
fig. 5 is a third schematic diagram of an interface to which an input method according to an embodiment of the present invention is applied;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
It should be noted that "a plurality" herein means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present the related concepts in a concrete fashion.
It should be noted that, for the convenience of clearly describing the technical solutions of the embodiments of the present invention, in the embodiments of the present invention, words such as "first" and "second" are used to distinguish the same items or similar items with substantially the same functions or actions, and those skilled in the art can understand that the words such as "first" and "second" do not limit the quantity and execution order. For example, the first input and the second input are for distinguishing different inputs, rather than for describing a particular order of inputs; the first and second conversation windows are for distinguishing different conversation windows, not for describing a specific order of conversation windows.
The execution subject of the input method provided in the embodiment of the present invention may be the first electronic device, or a functional module and/or functional entity in the first electronic device capable of implementing the input method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect.
For example, taking an electronic device as a terminal device as an example, the terminal device in the embodiment of the present invention may be a mobile terminal device, and may also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present invention are not limited in this respect.
The following describes a software environment to which the input method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the input method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the input method may run based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the input method provided by the embodiment of the present invention by running the software program in the android operating system.
The input method according to the embodiment of the present invention is described below with reference to the flowchart shown in fig. 2. The method includes steps 201 to 203:
step 201: the first electronic device receives a first input.
Illustratively, the first input is used to trigger the input function of the first session window.
In an embodiment of the present invention, the first input may be a touch input by the user on the first session window, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. For example, the touch input may be a click input by the user on the first session window.
The specific gesture in the embodiment of the invention may be any one of a single-click gesture, a slide gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture; the click input in the embodiment of the invention may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
In an embodiment of the present invention, in a case that the first electronic device displays a first session window and the first session window includes an input box, the first input may be an input by the user on the input box. For example, clicking on the input box causes a cursor to be displayed in the input box, triggering the input function of the session window so that the user can input information in the input box.
In this embodiment of the present invention, the first application may be any application installed in the first electronic device and having a session window. For example, taking the first application program as a chat application program, the first session window may be a chat interface between the user and a friend, and the user may input information by touching an input box of the chat interface (i.e., triggering an input function of the chat interface).
Step 202: In response to the first input, the first electronic device displays N candidate input contents.
In an embodiment of the present invention, the candidate input content may include characters, emoticons, pictures, voice, and other information.
It should be noted that the N candidate input contents may be candidate input contents of a second session window of one third electronic device, or may be candidate input contents corresponding to second session windows of multiple third electronic devices, which is not limited in this embodiment of the present invention.
For example, the third electronic device may be an electronic device within a predetermined range of the first electronic device, or may be an electronic device having a friend relationship with the first electronic device, for example, a friend relationship exists between a user account in the first electronic device and a user account in the third electronic device.
For example, the first electronic device may classify all input contents stored therein, and after receiving the first input, the first electronic device may display N candidate input contents. In one example, the first electronic device may classify the candidate input contents according to the association relationship between the first electronic device and the third electronic device corresponding to each candidate input content. For example, the input contents of third electronic devices located near the first electronic device are classified as one category of candidate input content (which may be called "nearby" shared input content), and the input contents of third electronic devices having a friend relationship with the first electronic device are classified as another category (which may be called "friend" shared input content). The category of candidate input content to display may be preset by the user, or may be selected by the user; the embodiment of the present invention is not limited in this respect. For example, the first electronic device may prioritize the display of "friend" shared input content.
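A minimal sketch of this classification step, assuming a hypothetical relationship flag on each shared item and the two category labels mentioned above; the type and function names below are illustrative and do not come from the patent:

```kotlin
// Hypothetical relationship between the first electronic device and the third
// electronic device that shared a given input content.
enum class Relation { NEARBY, FRIEND }

data class SharedInput(val text: String, val relation: Relation)

/**
 * Groups shared input contents into the "friend" and "nearby" categories and
 * returns them with "friend" shared content first, mirroring the prioritised
 * display described above.
 */
fun classifyCandidates(shared: List<SharedInput>): List<SharedInput> {
    val byCategory = shared.groupBy { it.relation }
    return byCategory[Relation.FRIEND].orEmpty() + byCategory[Relation.NEARBY].orEmpty()
}
```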
Optionally, in this embodiment of the present invention, the second session window may be a session window of a second application program, and the first session window is a session window of a first application program, where a program type of the first application program is the same as a program type of the second application program. For example, both are game-type applications or both are interactive-type applications.
Optionally, in this embodiment of the present invention, the second session window may be a session window of any application program in the third electronic device.
In an example, after acquiring a plurality of input contents sent by the third electronic device, the first electronic device may classify the plurality of candidate input contents according to program types of application programs corresponding to the candidate input contents, and store the input contents of session windows of the same type of application programs as one candidate input content set. In this way, when an input function of a first session window of a first application in a first electronic device is triggered, the first electronic device may search for a set of input content corresponding to an application type of the first application and display one or more candidate input contents in the set of input content for selection by a user.
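As a minimal sketch of this grouping-and-lookup step (the record type, the string key used for the program type, and the store API are illustrative assumptions):

```kotlin
// Hypothetical record of an input content received from a third electronic
// device, tagged with the program type of the application it came from.
data class ReceivedInput(val text: String, val appType: String)

class CandidateStore {
    // One candidate-input-content set per application program type.
    private val setsByType = mutableMapOf<String, MutableList<String>>()

    fun store(inputs: List<ReceivedInput>) {
        for (input in inputs) {
            setsByType.getOrPut(input.appType) { mutableListOf() }.add(input.text)
        }
    }

    /** Returns candidates whose source applications share the first application's program type. */
    fun candidatesFor(firstAppType: String, limit: Int): List<String> =
        setsByType[firstAppType].orEmpty().take(limit)
}
```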
Optionally, in this embodiment of the present invention, the first electronic device may further establish a one-to-one connection with a third electronic device, and after the establishment is successful, the third electronic device may directly share the N candidate input contents with the first electronic device.
Step 203: In response to a second input for a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to the second electronic device, and displays the target candidate input content in the first session window between the first electronic device and the second electronic device.
In an embodiment of the present invention, the N candidate input contents include preset input content and/or historical input content of a second session window in the third electronic device.
In an embodiment of the present invention, the target candidate input content is at least one of the N candidate input contents.
Exemplarily, the preset input content refers to input content that the user has stored in advance in the third electronic device for the second session window.
Illustratively, the historical input content includes historical input content of the second session window within a first predetermined period of time (e.g., one week). In one example, the historical input content is historical input content that has been input in the second session window with a frequency greater than a predetermined threshold.
Illustratively, the historical input content is target historical input content entered during a target historical running process of the application program corresponding to the second session window, where the number of runs of the target historical running process meets a predetermined condition.
Illustratively, the predetermined condition is that the ratio between the historical running times of the application program corresponding to the second session window and the running times of the target historical running process is greater than or equal to a preset threshold.
In one example, the third electronic device can also determine the historical input content according to the ratio between the historical running times of the application program corresponding to the second session window and the running times of the target historical running process. For example, for any historical input content in a given application, the electronic device calculates the ratio between the number of times the application has been started and the number of runs in which that historical input content was entered; if the ratio is greater than or equal to the preset threshold, that historical input content is taken as the target historical input content. For example, if an application has been started ten times and the same input content has been entered in seven of those runs, that input content is taken as the target historical input content.
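A minimal sketch of this selection logic. The patent states the condition as a ratio of run counts compared against a preset threshold; the sketch below interprets it, consistently with the ten-launches/seven-entries example, as "the content was entered in at least a threshold fraction of the application's runs". The type name and the 0.7 threshold are assumptions:

```kotlin
// Hypothetical record of how often a piece of input content was entered across
// the historical runs of the application behind the second session window.
data class HistoryEntry(val content: String, val runsWithContent: Int)

/**
 * Selects target historical input content: an entry qualifies when the fraction
 * of application runs in which it was entered meets the threshold, e.g. entered
 * in 7 of 10 launches with a threshold of 0.7.
 */
fun selectTargetHistory(
    entries: List<HistoryEntry>,
    totalRuns: Int,
    threshold: Double = 0.7 // assumed value; the patent only calls it a preset threshold
): List<String> {
    if (totalRuns == 0) return emptyList()
    return entries
        .filter { it.runsWithContent.toDouble() / totalRuns >= threshold }
        .map { it.content }
}
```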
Optionally, in this embodiment of the present invention, the third electronic device may send 1 or more input contents to the "information sharing" server. In this way, other electronic devices that establish a connection with the "information sharing" server can each obtain input information by accessing the "information sharing" server. It is to be understood that, the first electronic device and the third electronic device are both connected to the "information sharing" server, that is, the third electronic device may send the N candidate input contents to the "information sharing" server, and the first electronic device may access the "information sharing" server to obtain the N candidate input contents.
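A minimal sketch of this exchange through the "information sharing" server; the interface, method names, and in-memory stand-in are illustrative assumptions, not an API defined by the patent:

```kotlin
// Hypothetical interface of the "information sharing" server described above.
interface InfoSharingServer {
    fun upload(deviceId: String, contents: List<String>)  // e.g. the third electronic device pushes its input contents
    fun fetch(requestingDeviceId: String): List<String>   // e.g. the first electronic device pulls shared contents
}

// A trivial in-memory stand-in showing the flow; a real deployment would add
// transport, authentication, consent checks, and persistence.
class InMemoryInfoSharingServer : InfoSharingServer {
    private val shared = mutableMapOf<String, MutableList<String>>()

    override fun upload(deviceId: String, contents: List<String>) {
        shared.getOrPut(deviceId) { mutableListOf() }.addAll(contents)
    }

    override fun fetch(requestingDeviceId: String): List<String> =
        shared.filterKeys { it != requestingDeviceId }.values.flatten()
}
```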
In an embodiment of the present invention, the second input may include a click input by the user on the target candidate input content, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. For example, for the specific gesture on the target candidate input content, reference may be made to the description of the first input above, and details are not repeated here.
In the embodiment of the present invention, after receiving the second input, the first electronic device copies the target candidate input content, as the reply information, into the input area of the first session window. For example, taking the second input as a press input by the user on the target candidate input content, the first electronic device may determine, according to whether the press duration of the second input is greater than a predetermined threshold, whether to copy the target candidate input content into the input area of the first session window: if the press duration is greater than or equal to the predetermined threshold, the target candidate input content is copied into the input area of the first session window; otherwise, if the press duration is less than the predetermined threshold, the target candidate input content is only selected.
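A minimal sketch of this press-duration decision, assuming a threshold expressed in milliseconds and hypothetical callbacks for the two outcomes:

```kotlin
// Assumed value; the patent only refers to a predetermined threshold.
const val PRESS_THRESHOLD_MS = 500L

/**
 * Decides how to handle the second input on a target candidate based on how
 * long the user pressed it: copy it into the input area of the first session
 * window, or merely select it.
 */
fun handleSecondInput(
    targetCandidate: String,
    pressDurationMs: Long,
    copyToInputArea: (String) -> Unit, // hypothetical callback
    selectOnly: (String) -> Unit       // hypothetical callback
) {
    if (pressDurationMs >= PRESS_THRESHOLD_MS) {
        copyToInputArea(targetCandidate)
    } else {
        selectOnly(targetCandidate)
    }
}
```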
Optionally, the scheme provided by the embodiment of the present invention further includes the following steps A1 to A3:
Step A1: The first electronic device displays a second interface.
The second interface includes a first control, and the first control is used to trigger the first electronic device to acquire input content from the third electronic device.
Step A2: The first electronic device receives a fifth input for the first control.
Step A3: In response to the fifth input, the first electronic device sends an information sharing request to the third electronic device, and, after receiving the input content sent by the third electronic device, stores the input content.
Illustratively, the fifth input may include a click input by the user on the first control, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. For example, for the specific gesture on the first control, reference may be made to the description of the first input above, and details are not repeated here.
For example, the information sharing request is used to request the third electronic device to share its input content: if the third electronic device agrees to share, it sends its input content to the first electronic device; if the third electronic device does not agree to share, it does not send its input content to the first electronic device. The user may also choose to store the received input content manually, storing some or all of it according to the user's needs.
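A minimal sketch of the consent check on the third electronic device's side, with a hypothetical callback standing in for however that device asks its user to approve the request:

```kotlin
// Hypothetical representation of an information sharing request sent by the
// first electronic device.
data class SharingRequest(val fromDeviceId: String)

/**
 * Handles an incoming information sharing request on the third electronic
 * device: its input contents are returned only if the user agrees to share.
 */
fun handleSharingRequest(
    request: SharingRequest,
    ownInputContents: List<String>,
    userAgrees: (SharingRequest) -> Boolean // e.g. the result of a confirmation dialog
): List<String> =
    if (userAgrees(request)) ownInputContents else emptyList()
```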
For example, the second interface may further display a second control, where the second control is used to trigger the first electronic device to send the input content of the first electronic device to the third electronic device. It should be noted that the third electronic device may be an electronic device within a predetermined range of the first electronic device, or an electronic device having a friend relationship with the first electronic device.
For example, as shown in fig. 3, the interface currently displayed by the first electronic device is an "information sharing" interface (i.e., the second interface, 31 in fig. 3). The "information sharing" interface 31 includes a "receiving" control (i.e., the first control, 32 in fig. 3). When the user wants to obtain the input content shared by the third electronic device, the user can click the "receiving" control; the first electronic device then receives the input content shared by the third electronic device and stores it. In addition, the "information sharing" interface 31 further includes a "send" control; when the user wants to share input content with the third electronic device, the user can click the "send" control to send the input content in the input box to the third electronic device.
In the input method provided in the embodiment of the present invention, after receiving a first input for triggering the input function of a first session window of a first application program, the first electronic device directly displays N candidate input contents for the user to select. After the user performs a second input on a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to a second electronic device and displays it in the first session window between the first electronic device and the second electronic device. Because the N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device, richer candidate input content is provided for the user, and the overall input efficiency is improved.
Optionally, in an embodiment of the present invention, in a case that the N candidate input contents include input contents corresponding to the second session window, step 202 includes steps 202a1 to 202a3:
Step 202a1: In response to the first input, the first electronic device displays a first interface.
At least one identifier is displayed on the first interface, and each identifier indicates an application program type.
For example, the first interface may be displayed in a superimposed manner on the first session window of the first electronic device.
Step 202a2: The first electronic device receives a third input for a target identifier of the at least one identifier.
Step 202a3: In response to the third input, the first electronic device displays the N candidate input contents.
The second session window is a session window of a second application program, and the second application program belongs to the application program type corresponding to the target identifier.
In an embodiment of the present invention, the third input may include a click input by the user on the target identifier, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. For example, for the specific gesture on the target identifier, reference may be made to the description of the first input above, and details are not repeated here.
Illustratively, the N candidate input contents displayed by the first electronic device are preset input contents and/or historical input contents of the second application program corresponding to the target identifier.
For example, the target identifier may be named after the application program type that it indicates.
For example, as shown in fig. 3, the "information sharing" interface 31 further includes an "application 1" identifier (i.e., the above-mentioned target identifier), and the "application 1" identifier corresponds to an application program type. When the user clicks the "application 1" identifier, the input content of the application program type corresponding to the "application 1" identifier is displayed in the "information sharing" interface 31.
Therefore, when the user inputs information in different application programs, the first electronic device can display input content from application programs of the same type as the current application program for the user to select, thereby improving the user's input efficiency.
Optionally, in an embodiment of the present invention, step 202 includes step 202b:
Step 202b: The first electronic device displays the N candidate input contents on the first session window.
For example, the first electronic device may display a third interface on the first session window in an overlapping manner, where the N candidate input contents are displayed on the third interface.
For example, the third interface is superimposed on the first session window as a pop-up window; for instance, when the user clicks on the input box, the third interface pops up on the first session window.
For example, the user may resize the third interface, or the user may move the display position of the third interface, or the user may control the display or the hiding of the third interface.
For example, the size of the third interface may be a default size, or may be flexibly adjusted according to the operation of the user. In one example, the number of input contents displayed in the third interface may vary with the size of the third interface, e.g., the larger the size of the third interface, the larger the number of input contents displayed, and the smaller the size of the third interface, the smaller the number of input contents displayed.
For example, when the user drags the third interface, the third interface may move on the first session window along with the user's dragging.
For example, as shown in fig. 4, take the case where the first session window is a chat interface of a "chat" APP (41 in fig. 4). When the user wants to reply to information, the user may click the input box in the chat interface 41 (i.e., the first input); an interface of "friend shared information" (i.e., the third interface, 42 in fig. 4) is then displayed on the chat interface 41, and two input contents, "friend shared information 1" and "friend shared information 2", are displayed in the sharing interface 42, where the two input contents are input contents of the third electronic device. When the user wants to use "friend shared information 2" as the reply content, the user may click "friend shared information 2", and "friend shared information 2" is displayed in the input box as the reply information.
For example, when the first electronic device detects that the input of the target candidate input content is finished, the first electronic device may hide the N candidate input contents. For example, when the first electronic device detects that the user has retracted the input keyboard, the first electronic device closes the third interface displaying the N candidate input contents.
For example, the first electronic device may further display a control on the first session window for triggering the first electronic device to display the N candidate input contents, for example, the user manually controls to display or hide the N candidate input contents by clicking the control.
Optionally, in an embodiment of the present invention, step 202 includes step 202c:
Step 202c: The first electronic device displays N target input boxes on the first session window.
One candidate input content is displayed in each target input box.
Illustratively, each target input box includes a send control for sending candidate input content in the target input box. That is, when the user touches the sending control, the first electronic device may output the target candidate input content within the first session window.
Illustratively, the first electronic device may display a blank input box while displaying N target input boxes on the first session window. In this way, if the user is unsatisfied with the N candidate input contents, the user can manually input the candidate input contents in the blank input box.
For example, as shown in fig. 5, take the case where the first session window is a chat interface of a "chat" APP (51 in fig. 5). When the user wants to reply to a message, the user clicks the input box in the chat interface 51 (i.e., the first input), so that the first electronic device displays 4 input boxes on the chat interface 51, where the 4 input boxes include 3 target input boxes (52 in fig. 5) and a blank input box (53 in fig. 5). Each target input box has an input content displayed in it, and each input box has a "send" control. In this way, the user can select the required input content from the input contents of the 3 target input boxes and send it directly by clicking the "send" control, or manually enter input content in the blank input box.
Therefore, the first electronic device displays the N candidate input contents in the first session window in the form of input boxes, and the user can directly send the input content in a target input box as the reply content, so the input content does not need to be manually copied, which improves the input efficiency.
Optionally, in an embodiment of the present invention, in a case that the N candidate input contents are N input contents of M input contents sent by the third electronic device to the first electronic device, step 202 includes step 202d:
Step 202d: If the first electronic device does not receive, within a predetermined time, a fourth input for a first candidate input content among the N candidate input contents, the first electronic device updates the first candidate input content to a second candidate input content.
The second candidate input content is an input content of the M input contents other than the N input contents.
For example, the first candidate input content is at least one of the N candidate input contents, and the second candidate input content is one of the M input contents other than the N candidate input contents.
In an embodiment of the present invention, the fourth input may include a click input, a voice instruction, or a specific gesture, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. For example, for the specific gesture on the first candidate input content, reference may be made to the description of the first input above, and details are not repeated here.
For example, the first electronic device may sort the M input contents or the N input contents according to a preset sorting rule. In one example, the sorting rule includes at least one of: the order of the receiving time of the input contents, the frequency of use of the input contents, and the distance between the first electronic device and the third electronic device corresponding to each input content. For example, the input contents may be ranked from the highest frequency of use to the lowest, or ranked so that the input contents whose corresponding third electronic devices are closest to the first electronic device come first.
For example, after the first electronic device sorts the M input contents, the first N of them may be displayed as candidate input contents. When the first electronic device detects that no operation has been performed on one of the N candidate input contents within the predetermined time, it updates the sorting by moving that input content to the end, and then displays the first N input contents of the updated order.
In this way, by updating the displayed input contents in real time, the first electronic device enables the user to select more suitable input content from the N candidate input contents as the reply, avoiding the cumbersome operation of manual input.
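A minimal sketch of this sorting-and-rotation behavior, assuming frequency-of-use ordering and a hypothetical shared-content record; the displayed candidates are simply the first N items of the current order:

```kotlin
// Hypothetical shared-content record carrying the attributes that the sorting
// rules above mention.
data class SharedContent(
    val text: String,
    val useFrequency: Int,
    val distanceMeters: Double
)

class CandidateRotation(allContents: List<SharedContent>, private val n: Int) {
    // Initial order: most frequently used first (one of the sorting rules above).
    private val ordered = allContents.sortedByDescending { it.useFrequency }.toMutableList()

    /** The N candidate input contents currently shown in the first session window. */
    fun displayed(): List<SharedContent> = ordered.take(n)

    /**
     * Called when a displayed candidate received no fourth input within the
     * predetermined time: move it to the end so another of the M input contents
     * rotates into the displayed window.
     */
    fun demote(stale: SharedContent) {
        if (ordered.remove(stale)) ordered.add(stale)
    }
}
```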
In the embodiment of the present invention, the input methods shown in the above method drawings are all exemplarily described with reference to one drawing in the embodiment of the present invention. In specific implementation, the input method shown in each method drawing can also be implemented by combining any other drawing which can be combined and is illustrated in the above embodiments, and details are not described here.
Fig. 6 is a schematic structural diagram of a first electronic device according to an embodiment of the present invention, and as shown in fig. 6, the first electronic device 600 includes: a receiving module 601, a display module 602, and a sending module 603, wherein: the receiving module 601 is configured to receive a first input; the display module 602 is configured to display N candidate input contents in response to the first input received by the receiving module 601; the sending module 603 is configured to send, to a second electronic device, a target candidate input content in the N candidate input contents in response to a second input, which is received by the receiving module 601, for the target candidate input content; the display module 602 is further configured to display the target candidate input content in a first session window between a first electronic device and the second electronic device; wherein the N candidate input contents include: and presetting input content and/or historical input content of a second conversation window in the third electronic equipment.
Optionally, the second session window is a session window of a second application program; the first session window is a session window of a first application program; the program type of the first application program is the same as the program type of the second application program.
Optionally, in a case that the N candidate input contents include candidate input contents corresponding to the second session window, the display module 602 is specifically configured to display a first interface, where at least one identifier is displayed on the first interface, and one identifier indicates an application type; the receiving module 601 is further configured to receive a third input for a target identifier in the at least one identifier; the display module 602 is specifically configured to display N candidate input contents in response to the third input received by the receiving module 601; and the second session window is a session window of a second application program, and the second application program belongs to the application program type corresponding to the target identifier.
Optionally, the display module 602 is specifically configured to display N target input boxes on the first session window, where one candidate input content is displayed in one target input box.
Optionally, the N candidate input contents are N input contents of M input contents sent by the third electronic device to the first electronic device; the display module 602 is specifically configured to update a first candidate input content to a second candidate input content if the first electronic device does not receive a fourth input for the first candidate input content within a predetermined time, where the second candidate input content is an input content of the M input contents other than the N input contents.
Optionally, as shown in fig. 6, the first electronic device further includes a storage module 604, where: the display module 602 is further configured to display a second interface, and the second interface includes a first control; the receiving module 601 is further configured to receive a fifth input for the first control; the sending module 603 is configured to send an information sharing request to the third electronic device in response to the fifth input received by the receiving module 601; and the storage module 604 is configured to store the input content after the receiving module 601 receives the input content sent by the third electronic device.
In the first electronic device provided in the embodiment of the present invention, after receiving a first input for triggering the input function of a first session window of a first application program, the first electronic device directly displays N candidate input contents for the user to select. After the user performs a second input on a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to a second electronic device and displays it in the first session window between the first electronic device and the second electronic device. Because the N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device, richer candidate input content is provided for the user, and the overall input efficiency is improved.
It should be noted that, as shown in fig. 6, modules that are necessarily included in the first electronic device 600 are illustrated by solid line boxes, such as a receiving module 601; modules that may or may not be included in the first electronic device 600 are illustrated with dashed boxes, such as the memory module 604.
The first electronic device provided in the embodiment of the present invention is capable of implementing each process implemented by the first electronic device in the foregoing method embodiments, and is not described here again to avoid repetition.
Take a terminal device as an example of the electronic device. Fig. 7 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the configuration of the terminal device 100 shown in fig. 7 does not constitute a limitation of the terminal device, and that the terminal device 100 may include more or fewer components than those shown, combine some components, or arrange the components differently. In the embodiment of the present invention, the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input; the processor 110 is configured to display N candidate input contents in response to the first input; the processor 110 is further configured to send, in response to a second input for a target candidate input content among the N candidate input contents, the target candidate input content, and to display the target candidate input content in a first session window. The N candidate input contents are candidate input contents of a second session window in a third electronic device, and the candidate input contents include preset input content and/or historical input content.
In the first electronic device provided in the embodiment of the present invention, after receiving a first input for triggering the input function of a first session window of a first application program, the first electronic device directly displays N candidate input contents for the user to select. After the user performs a second input on a target candidate input content among the N candidate input contents, the first electronic device sends the target candidate input content to a second electronic device and displays it in the first session window between the first electronic device and the second electronic device. Because the N candidate input contents include preset input content and/or historical input content of a second session window in a third electronic device, richer candidate input content is provided for the user, and the overall input efficiency is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device 100 provides the user with wireless broadband internet access via the network module 102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101, and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 7, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device 100, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the terminal device (such as audio data, a phonebook, etc.), and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device 100; it connects the various parts of the entire terminal device 100 through various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal device 100 as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements each process of the above input method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above input method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
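As a purely illustrative sketch of how such a software product could realize the input flow of this application (a first input brings up N candidate input contents drawn from preset content and/or the history of a session window on a third electronic device; a second input selects a target candidate, which is sent to the second electronic device and displayed in the first session window), consider the following. Every class, method, and the send() callback are assumptions for illustration, not the actual implementation.

```python
# Minimal sketch (illustrative only) of the claimed input flow on the first
# electronic device: a first input displays N candidate input contents; a
# second input picks a target candidate, which is sent to the second
# electronic device and shown in the local session window. All names and the
# send() transport are assumptions, not the actual implementation.

from typing import Callable, List

class CandidateInputHelper:
    def __init__(self, preset: List[str], third_device_history: List[str],
                 send: Callable[[str], None], n: int = 3):
        self.preset = preset
        self.third_device_history = third_device_history
        self.send = send                      # delivers content to the second electronic device
        self.n = n
        self.session_window: List[str] = []   # contents shown in the first session window
        self.candidates: List[str] = []

    def on_first_input(self) -> List[str]:
        # First input: display N candidate input contents drawn from preset
        # content and/or the history shared by the third electronic device.
        self.candidates = (self.preset + self.third_device_history)[: self.n]
        return self.candidates

    def on_second_input(self, index: int) -> None:
        # Second input targets one candidate: send it to the second electronic
        # device and display it in the first session window.
        target = self.candidates[index]
        self.send(target)
        self.session_window.append(target)


helper = CandidateInputHelper(
    preset=["On my way.", "Thanks!"],
    third_device_history=["See you at 7."],
    send=lambda text: print(f"sent to second device: {text}"),
)
print(helper.on_first_input())   # ['On my way.', 'Thanks!', 'See you at 7.']
helper.on_second_input(2)        # sends and displays "See you at 7."
print(helper.session_window)     # ['See you at 7.']
```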
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

1. An input method applied to a first electronic device, the method comprising:
receiving a first input;
displaying N candidate input contents in response to the first input;
in response to a second input for a target candidate input content of the N candidate input contents, sending the target candidate input content to a second electronic device, and displaying the target candidate input content in a first session window between the first electronic device and the second electronic device;
wherein the N candidate input contents comprise: preset input content and/or historical input content of a second session window in a third electronic device.
2. The method of claim 1, wherein the second session window is a session window of a second application program; the first session window is a session window of a first application program; and the program type of the first application program is the same as the program type of the second application program.
3. The method of claim 1, wherein, in a case that the N candidate input contents comprise input contents of the second session window, the displaying N candidate input contents in response to the first input comprises:
displaying a first interface in response to the first input, wherein at least one identifier is displayed on the first interface, and one identifier indicates one application program type;
receiving a third input for a target identifier of the at least one identifier;
displaying the N candidate input contents in response to the third input;
wherein the second session window is a session window of a second application program, and the second application program belongs to the application program type corresponding to the target identifier.
4. The method of claim 1, wherein displaying the N candidate input contents comprises:
displaying N target input boxes on the first session window, wherein one candidate input content is displayed in one target input box.
5. The method of claim 1, wherein the N candidate input contents are N input contents, of M input contents, sent by the third electronic device to the first electronic device;
the displaying N candidate input contents comprises:
if the first electronic device does not receive, within a predetermined time, a fourth input for a first input content of the N candidate input contents, updating the first input content to a second input content, wherein the second input content is an input content, of the M input contents, other than the N input contents.
6. The method of claim 1, further comprising:
displaying a second interface, the second interface comprising a first control;
receiving a fifth input for the first control;
and in response to the fifth input, sending an information sharing request to the third electronic device, and storing the input content after receiving the input content sent by the third electronic device.
7. A first electronic device, wherein the first electronic device comprises:
a receiving module, configured to receive a first input;
a display module, configured to display N candidate input contents in response to the first input received by the receiving module;
a sending module, configured to send a target candidate input content of the N candidate input contents to a second electronic device in response to a second input for the target candidate input content;
the display module is further configured to display the target candidate input content in a first session window between the first electronic device and the second electronic device;
wherein the N candidate input contents comprise: preset input content and/or historical input content of a second session window in a third electronic device.
8. The first electronic device of claim 7, wherein the second session window is a session window of a second application program; the first session window is a session window of a first application program; and the program type of the first application program is the same as the program type of the second application program.
9. The first electronic device of claim 7,
the display module is further configured to display a first interface in response to the first input received by the receiving module under the condition that the N candidate input contents include candidate input contents corresponding to the second session window, where at least one identifier is displayed on the first interface, and one identifier indicates an application program type;
the receiving module is further configured to receive a third input for a target identifier of the at least one identifier displayed by the display module;
the display module is specifically configured to display the N candidate input contents in response to the third input received by the receiving module;
and the second session window is a session window of a second application program, and the second application program belongs to the application program type corresponding to the target identifier.
10. The first electronic device of claim 7,
the display module is specifically configured to display N target input boxes on the first session window, where one candidate input content is displayed in one target input box.
11. The first electronic device of claim 7, wherein the N candidate input contents are N input contents, of M input contents, sent by the third electronic device to the first electronic device;
the display module is specifically configured to: if the first electronic device does not receive, within a predetermined time, a fourth input for a first input content of the N candidate input contents, update the first input content to a second input content, where the second input content is an input content, of the M input contents, other than the N input contents.
12. The first electronic device of claim 7, wherein the first electronic device further comprises a memory module;
the display module is further configured to display a second interface, where the second interface includes a first control;
the receiving module is further configured to receive a fifth input for the first control;
the sending module is further configured to send an information sharing request to the third electronic device in response to the fifth input received by the receiving module;
the storage module is configured to store the input content after the receiving module receives the input content sent by the third electronic device.
13. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the input method according to any one of claims 1 to 6.
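Illustration only, not part of the claims: a minimal sketch, under assumed names and data structures, of the behavior recited in claims 5 and 6 — requesting and storing M shared input contents from the third electronic device, displaying N of them, and replacing a displayed candidate with one of the remaining contents when no input for it arrives within a predetermined time.

```python
# Illustrative sketch of the behavior of claims 5 and 6, under assumed names.
# The third electronic device shares M input contents; N of them are displayed,
# and a displayed candidate that receives no input within a predetermined time
# is replaced by one of the remaining contents.

import time
from typing import Callable, List, Optional

class SharedCandidatePool:
    def __init__(self, n: int, timeout_s: float):
        self.n = n                      # number of candidates displayed at once
        self.timeout_s = timeout_s      # the predetermined time
        self.stored: List[str] = []     # the M contents received from the third device
        self.displayed: List[str] = []
        self.shown_at = 0.0

    def request_and_store(self, fetch_from_third_device: Callable[[], List[str]]) -> None:
        # Claim 6 behavior: after the sharing request, store what the third
        # electronic device sends back.
        self.stored = list(fetch_from_third_device())

    def show_candidates(self) -> List[str]:
        self.displayed = self.stored[: self.n]
        self.shown_at = time.monotonic()
        return self.displayed

    def tick(self, selected_index: Optional[int]) -> List[str]:
        # Claim 5 behavior: if no input for the first displayed candidate has
        # arrived within the predetermined time, replace it with a stored
        # content that is not currently displayed.
        timed_out = selected_index is None and (
            time.monotonic() - self.shown_at >= self.timeout_s)
        spare = [c for c in self.stored if c not in self.displayed]
        if timed_out and spare:
            self.displayed[0] = spare[0]
            self.shown_at = time.monotonic()
        return self.displayed


pool = SharedCandidatePool(n=2, timeout_s=0.1)
pool.request_and_store(lambda: ["Hi!", "OK", "Call me later", "On it"])
print(pool.show_candidates())           # ['Hi!', 'OK']
time.sleep(0.15)
print(pool.tick(selected_index=None))   # ['Call me later', 'OK']
```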
CN201911195337.1A 2019-11-28 2019-11-28 Input method and electronic equipment Pending CN111124149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195337.1A CN111124149A (en) 2019-11-28 2019-11-28 Input method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111124149A true CN111124149A (en) 2020-05-08

Family

ID=70497034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195337.1A Pending CN111124149A (en) 2019-11-28 2019-11-28 Input method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111124149A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110060649A1 (en) * 2008-04-11 2011-03-10 Dunk Craig A Systems, methods and apparatus for providing media content
CN106886296A (en) * 2017-02-15 2017-06-23 中国联合网络通信集团有限公司 The treating method and apparatus of the dictionary of input method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Q我一线通 QQ与MSN实用趣味手册", 31 January 2004, 海洋出版社, pages 2-5 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076158A (en) * 2021-03-26 2021-07-06 维沃移动通信有限公司 Display control method and display control device

Similar Documents

Publication Publication Date Title
CN110069306B (en) Message display method and terminal equipment
CN111061574B (en) Object sharing method and electronic device
CN111010332A (en) Group chat method and electronic equipment
CN111142747B (en) Group management method and electronic equipment
CN110062105B (en) Interface display method and terminal equipment
CN110502163B (en) Terminal device control method and terminal device
CN110489029B (en) Icon display method and terminal equipment
CN111142723B (en) Icon moving method and electronic equipment
CN110752981B (en) Information control method and electronic equipment
CN108874906B (en) Information recommendation method and terminal
CN109976611B (en) Terminal device control method and terminal device
CN111026299A (en) Information sharing method and electronic equipment
CN111610904B (en) Icon arrangement method, electronic device and storage medium
CN110703972B (en) File control method and electronic equipment
CN111273993B (en) Icon arrangement method and electronic equipment
CN110225180B (en) Content input method and terminal equipment
WO2021093766A1 (en) Message display method, and electronic apparatus
CN110049486B (en) SIM card selection method and terminal equipment
CN109408072B (en) Application program deleting method and terminal equipment
CN110647277A (en) Control method and terminal equipment
CN111124709A (en) Text processing method and electronic equipment
CN110221741B (en) Icon management method and terminal equipment
CN111090529A (en) Method for sharing information and electronic equipment
CN111090489A (en) Information control method and electronic equipment
CN111338525A (en) Control method of electronic equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination