CN110012151B - Information display method and terminal equipment - Google Patents


Info

Publication number
CN110012151B
CN110012151B (application CN201910132306.5A)
Authority
CN
China
Prior art keywords
input
information
sub
user
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910132306.5A
Other languages
Chinese (zh)
Other versions
CN110012151A (en)
Inventor
张经睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910132306.5A priority Critical patent/CN110012151B/en
Publication of CN110012151A publication Critical patent/CN110012151A/en
Application granted granted Critical
Publication of CN110012151B publication Critical patent/CN110012151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04M: TELEPHONIC COMMUNICATION
          • H04M1/00: Substation equipment, e.g. for use by subscribers
            • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
              • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
                • H04M1/72448: with means for adapting the functionality of the device according to specific conditions
                • H04M1/72466: with selection means, e.g. keys, having functions defined by the mode or the status of the device
                • H04M1/72469: for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • G: PHYSICS
      • G10: MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00: Speech recognition
            • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L2015/223: Execution procedure of a spoken command
    • H04M2250/00: Details of telephonic subscriber devices
      • H04M2250/22: including a touch pad, a touch sensor or a touch detector
      • H04M2250/74: with voice recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the invention provides an information display method and a terminal device, relating to the field of communication technology, and aims to solve the problem that a terminal device fails to respond, or responds inaccurately, to an operation instruction input by a user's voice. The method comprises the following steps: receiving a first input from a user, where the first input is at least used for triggering the terminal device to enable a voice assistant function; and, in response to the first input, displaying at least one piece of information, where each piece of information indicates an instruction, and the instruction indicated by each piece of information is a history instruction of the user's voice input, acquired by the terminal device through the voice assistant function. The method may be applied in scenarios where the user uses a voice assistant function.

Description

Information display method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an information display method and terminal equipment.
Background
With the rapid development of communication technology, a user can input an operation instruction by voice (for example, the user says "open an application") to trigger the terminal device to execute that operation instruction.
At present, when a user inputs an operation instruction by voice, the limitations of voice input may prevent the terminal device from accurately acquiring that instruction. Specifically, if the user speaks an operation instruction in a noisy environment, or cannot pronounce the operation instruction accurately and clearly, the terminal device may fail to acquire it correctly, and consequently may not respond, or may respond inaccurately, to the operation instruction input by the user.
Disclosure of Invention
The embodiment of the invention provides an information display method, which aims to solve the problem that a terminal device cannot respond, or cannot accurately respond, to an operation instruction input by a user's voice.
In order to solve the technical problem, the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides an information display method, where the method includes: receiving a first input from a user, and displaying at least one piece of information in response to the first input. The first input is at least used for triggering the terminal device to enable the voice assistant function; each piece of the at least one piece of information indicates an instruction, and the instruction indicated by each piece of information is a history instruction of the user's voice input, acquired by the terminal device through the voice assistant function.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a receiving module and a display module. The receiving module is used for receiving a first input of a user, wherein the first input is at least used for triggering the terminal equipment to enable the voice assistant function; and the display module is used for responding to the first input received by the receiving module and displaying at least one piece of information, wherein each piece of information in the at least one piece of information is used for indicating an instruction, and the instruction indicated by each piece of information is a history instruction of the voice input of the user acquired by the terminal equipment through the voice assistant function.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the information display method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the information display method provided in the first aspect.
In an embodiment of the present invention, a first input of a user (at least for triggering the terminal device to enable the voice assistant function) may be received, and, in response to the first input, at least one piece of information is displayed, where each piece of information indicates an instruction and each indicated instruction is a history instruction of the user's voice input, acquired by the terminal device through the voice assistant function. With this scheme, since the terminal device can store information indicating the history instructions that the user has voice-input through the voice assistant function, the terminal device can, when triggered by the user's first input, present that information to the user. Therefore, when the user's environment is noisy, or the user cannot speak an operation instruction clearly, the user can act directly on the information displayed by the terminal device and thereby trigger the terminal device to execute the corresponding operation. In other words, even in a noisy environment, or when the user cannot speak the operation instruction clearly, the terminal device can still accurately respond to a history instruction previously voice-input by the user, and thus accurately execute the corresponding operation.
Drawings
Fig. 1 is a schematic diagram of an architecture of an android operating system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an information display method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interface for displaying at least one message according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of an information display method according to an embodiment of the present invention;
fig. 5 is a third schematic diagram of an information display method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a voice assistant interface provided by an embodiment of the present invention;
FIG. 7 is a second schematic diagram of an interface for displaying at least one message according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 9 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 10 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or more advantageous than other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
An embodiment of the present invention provides an information display method and a terminal device. A first input of a user (at least for triggering the terminal device to enable the voice assistant function) can be received, and, in response to the first input, at least one piece of information is displayed, where each piece of information indicates an instruction and each indicated instruction is a history instruction of the user's voice input, acquired by the terminal device through the voice assistant function. With this scheme, since the terminal device can store information indicating the history instructions that the user has voice-input through the voice assistant function, the terminal device can, when triggered by the user's first input, present that information to the user. Therefore, when the user's environment is noisy, or the user cannot speak an operation instruction clearly, the user can act directly on the information displayed by the terminal device and thereby trigger the terminal device to execute the corresponding operation. In other words, even in a noisy environment, or when the user cannot speak the operation instruction clearly, the terminal device can still accurately respond to a history instruction previously voice-input by the user, and thus accurately execute the corresponding operation.
The terminal in the embodiment of the present invention may be a terminal having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited in this respect.
The following describes a software environment to which the information display method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of applications; developers can develop applications based on the application framework layer, provided they comply with its development principles.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the information display method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the information display method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal can realize the information display method provided by the embodiment of the invention by running the software program in the android operating system.
The terminal device in the embodiment of the invention may be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile terminal may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiment of the present invention is not particularly limited in this respect.
The execution subject of the information display method provided in the embodiment of the present invention may be the terminal device, or a functional module and/or functional entity in the terminal device capable of implementing the method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. The following exemplarily describes the information display method provided by the embodiment of the present invention, taking a terminal device as the example.
In the embodiment of the present invention, when a user needs to control the terminal device through the voice assistant function but is currently in a situation unsuitable for voice input, the user may, through one input (i.e., the first input in the embodiment of the present invention), trigger the terminal device to present information indicating history instructions that the user has previously input by voice through the voice assistant function; the user may then select one piece of information to trigger the terminal device to execute the instruction it indicates. Scenarios unsuitable for voice input may include any possible scenario, such as a noisy environment, a public place, cases where the information to be input by voice is the user's private information, or cases where the user simply does not want to use voice input; this may be determined according to use requirements, and the embodiment of the present invention is not limited in this respect.
An information display method provided by an embodiment of the present invention is exemplarily described below with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides an information display method, which may include S201 and S202 described below.
S201, receiving a first input of a user by the terminal equipment.
The first input is at least used for triggering the terminal equipment to enable the voice assistant function.
Optionally, in this embodiment of the present invention, the first input may be an input by the user on a physical key of the terminal device (for example, a physical key used to trigger enabling of the voice assistant function), or an input by the user on the icon of an application program in the terminal device that can enable the voice assistant function.
Specifically, when the first input is an input on a physical key of the terminal device, the first input may be any possible form of input, such as a click input (specifically, a single-click or double-click input) or a long-press input. When the first input is an input on the icon of the application program that can enable the voice assistant function, the first input may be any possible form of input, such as a click input (specifically, a single-click or double-click input), a slide input, a hard-press input, or a long-press input. This may be determined according to actual use requirements, and the embodiment of the invention is not limited in this respect.
The hard-press input mentioned above, also referred to as a pressure touch input, refers to an input in which the user presses on the icon of the application program that enables the voice assistant function with a pressure value greater than a preset pressure threshold.
In the embodiment of the present invention, the physical key used for triggering the voice assistant function in the terminal device may be a Jovi key. That is, the first input may be an input of Jovi key by the user.
It is to be understood that, in the embodiment of the present invention, the first input may be a non-voice input. Thus, when the user's environment is noisy, or the user cannot speak the operation instruction clearly, the user may, through a non-voice input to the terminal device (i.e., the first input), trigger the terminal device to display at least one piece of information (i.e., information indicating history instructions that the user has previously input by voice through the voice assistant function); the user may then act directly on the displayed information (e.g., select one piece of information, also via a non-voice input) to trigger the terminal device to perform the corresponding operation.
S202, the terminal equipment responds to the first input and displays at least one piece of information.
Each piece of the at least one piece of information may indicate an instruction, and the instruction indicated by each piece of information is a history instruction of the user's voice input, acquired by the terminal device through the voice assistant function. It can be understood that each piece of information is the text corresponding to a voice instruction (also called a voice command) previously input by the user through the voice assistant function.
Optionally, in the embodiment of the present invention, as shown in fig. 3, the at least one piece of information may be any possible information indicating history instructions voice-input by the user through the voice assistant function, such as "view short message", "view news today", or "view gallery". The item marked with a check box is an instruction that the user has triggered and selected; as shown at 30 in fig. 3, the checked item (i.e., the instruction the user has selected) is "view short message".
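As a concrete illustration of how such a history of voice instructions might be kept and returned for display, the following Python sketch is an assumption for illustration only (the class name, method names, and capacity are hypothetical and not from the patent). It records the text of each recognized voice instruction and yields the list to display, newest first, with duplicates promoted rather than repeated:

```python
from collections import deque

class VoiceCommandHistory:
    """Illustrative sketch: stores the text of voice instructions recognized
    by the voice assistant, so they can be re-displayed when the assistant
    is enabled by a non-voice input."""

    def __init__(self, max_items=10):
        # Keep only the most recent instructions; deque drops the oldest.
        self._items = deque(maxlen=max_items)

    def record(self, instruction_text):
        # Called after the assistant successfully recognizes a voice command.
        if instruction_text in self._items:
            self._items.remove(instruction_text)  # de-duplicate, promote to front
        self._items.appendleft(instruction_text)

    def to_display(self):
        # The pieces of information to show on screen, newest first.
        return list(self._items)

history = VoiceCommandHistory(max_items=3)
for cmd in ["view short message", "view news today", "view gallery",
            "view short message"]:
    history.record(cmd)
print(history.to_display())
# -> ['view short message', 'view gallery', 'view news today']
```

A bounded, most-recent-first store matches the interface in fig. 3, where only a handful of history instructions are listed; the exact retention policy is a design choice the patent leaves open.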
Optionally, in this embodiment of the present invention, after the user triggers the terminal device to enable the voice assistant function through the first input, the terminal device may display a voice input control on the interface displaying the at least one piece of information, where the voice input control may be used for the user to trigger voice input. Namely, in the embodiment of the present invention, the user may trigger the terminal device to execute an instruction indicated by a certain piece of information by inputting the at least one piece of information; the voice input can be performed through the input of the voice input control, so that the terminal equipment can be controlled through the voice assistant function.
Illustratively, the voice input control may read "press and hold the Jovi key to talk to me", as shown at 31 in fig. 3.
In the embodiment of the invention, after the user triggers the terminal device to enable the voice assistant function through the first input, the terminal device can display information of the historical instruction which is input by the user through the voice assistant function on a screen of the terminal device. Therefore, when the environment where the user is located is noisy or the user cannot clearly speak the operation instruction, the user can directly input the information displayed by the terminal device, and the terminal device can be triggered to execute the corresponding operation. Therefore, even if the environment where the user is located is noisy or the user cannot clearly speak the operation instruction, the terminal device can still accurately respond to the historical instruction input by the user in the previous voice, so that the corresponding operation is accurately executed.
Optionally, in this embodiment of the present invention, after the terminal device displays the at least one piece of information, the user may, through an input (i.e., the second input below) on one piece of that information (i.e., the target information below), trigger the terminal device to execute the instruction indicated by that piece of information. In this way, the terminal device can be triggered to respond to a history instruction previously input by the user's voice, and to accurately execute the corresponding operation, without requiring any voice input from the user.
For example, referring to fig. 2, as shown in fig. 4, after S202, the information display method provided in the embodiment of the present invention may further include S203 and S204 described below.
S203, the terminal equipment receives a second input of the target information in the at least one information from the user.
In the embodiment of the present invention, after the terminal device displays the at least one piece of information, the user may trigger the terminal device to execute the instruction indicated by the target information through a second input to the target information in the at least one piece of information.
Optionally, in the embodiment of the present invention, one possible implementation manner is: the second input may be a touch input by the user in the area where the target information is located, for example a click input (specifically, a single-click or double-click input), a long-press input, or another possible form of input.
Another possible implementation is: the second input may comprise two sub-inputs, for example referred to as sub-input 1 and sub-input 2, respectively, below. The sub-input 1 may be an input of a user to a physical key on the terminal device, and the sub-input 2 may be a touch input of the user or an input of the user to a physical key on the terminal device. Specifically, the physical keys of the terminal device may be any possible keys such as a "volume up" key, a "volume down" key, a voice assistant key, a screen lock key, and the like, and the embodiment of the present invention is not limited.
For the description of the touch input in another possible implementation, reference may be specifically made to the description of the touch input in the above one possible implementation, and details are not described here again.
Specifically, the following manner one and manner two are taken as examples, and the second input (including sub-input 1 and sub-input 2) in the another possible implementation manner is respectively exemplified.
The first method is as follows: the user can select the target information through inputting physical keys on the terminal equipment, and then the target information is input through touch control in the area where the target information is located, so that the terminal equipment is triggered to execute the instruction indicated by the target information.
For example, assuming that the target information is "view gallery", and the above physical keys are "volume up" key and/or "volume down" key, the user may select "view gallery" by inputting the "volume up" key and/or the "volume down" key, and trigger the terminal device to open the gallery application by touch input in the area where the "view gallery" is located, that is, trigger the terminal device to display the interface of the gallery application.
The second method comprises the following steps: the user can select the target information through inputting one or some physical keys on the terminal equipment, and then trigger the terminal equipment to execute the instruction indicated by the target information through inputting another or other physical keys on the terminal equipment.
For example, assume that the target information is "view gallery", the former physical key(s) are the "volume up" key and/or "volume down" key, and the latter physical key is the voice assistant key or the screen lock key. The user may select the "view gallery" information by inputting the "volume up" key and/or "volume down" key, and then, by inputting the voice assistant key or the screen lock key, trigger the terminal device to open the gallery application, that is, trigger the terminal device to display the interface of the gallery application.
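Manner two above might be sketched as follows. This is an illustrative assumption, not the patent's implementation; the class, key names, and wrap-around behavior are hypothetical. Volume keys move the highlight through the displayed pieces of information, and the voice assistant key (or screen lock key) confirms the highlighted one:

```python
class HistorySelector:
    """Illustrative sketch: physical-key navigation over the displayed
    history instructions (manner two of the second input)."""

    def __init__(self, items):
        self.items = items
        self.index = 0  # currently highlighted piece of information

    def on_key(self, key):
        if key == "volume_up":
            # Move the highlight up, wrapping around the list.
            self.index = (self.index - 1) % len(self.items)
        elif key == "volume_down":
            # Move the highlight down, wrapping around the list.
            self.index = (self.index + 1) % len(self.items)
        elif key == "voice_assistant":  # or the screen lock key
            # Confirm: return the target information to execute.
            return self.items[self.index]
        return None

sel = HistorySelector(["view short message", "view news today", "view gallery"])
sel.on_key("volume_down")
sel.on_key("volume_down")
print(sel.on_key("voice_assistant"))  # -> view gallery
```

Whether the highlight wraps at the ends of the list, and which key confirms, are use-requirement details the patent explicitly leaves open.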
It is to be understood that, in the embodiment of the present invention, the second input may be a non-voice input. In other words, in the embodiment of the present invention, the input of the user to the terminal device may be a non-voice input, so that even if the environment where the user is located is noisy or the user cannot clearly speak the operation instruction, the terminal device can still accurately respond to the history instruction input by the user by the voice before, thereby accurately executing the corresponding operation.
And S204, the terminal equipment responds to the second input and executes the instruction indicated by the target information.
In this embodiment of the present invention, after the user selects the target information (i.e. one of the at least one information) through the second input, the terminal device may respond to the second input to trigger the terminal device to execute the instruction indicated by the target information.
For example, assuming that the target information selected by the user through the second input is "view gallery", the terminal device may, in response to the second input, execute the instruction indicated by "view gallery"; that is, the terminal device displays the interface of the gallery application.
In the embodiment of the invention, after the user calls out at least one piece of information used for indicating the history instruction input by the voice of the user through the first input, the user can select target information used for indicating the instruction which the user needs to execute by the terminal equipment through the second input so as to trigger the terminal equipment to execute the instruction indicated by the target information. Therefore, when the environment where the user is located is noisy or the user cannot clearly speak the operation instruction, the user can select one of the at least one information of the historical instruction input by the user through non-voice input to trigger the terminal device to execute the instruction indicated by the information, and the user does not need to input the operation instruction by voice to trigger the terminal device to execute the operation instruction like the prior art. Therefore, the terminal device can be enabled to accurately respond to the history instruction input by the user in the previous voice, thereby accurately executing the corresponding operation.
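Executing the instruction indicated by the target information (S204) can be illustrated with a minimal dispatch sketch. This is hypothetical: the handler functions, their return strings, and the mapping are invented for illustration and are not part of the patent. The text of the selected piece of information is looked up and mapped to the operation it indicates:

```python
# Hypothetical handlers standing in for real operations on the device.
def open_gallery():
    return "gallery interface displayed"

def open_messages():
    return "short message interface displayed"

# Map each displayed piece of information to the operation it indicates.
INSTRUCTION_HANDLERS = {
    "view gallery": open_gallery,
    "view short message": open_messages,
}

def execute_instruction(target_information):
    """Run the operation indicated by the selected history instruction."""
    handler = INSTRUCTION_HANDLERS.get(target_information)
    if handler is None:
        raise KeyError(f"no operation registered for {target_information!r}")
    return handler()

print(execute_instruction("view gallery"))  # -> gallery interface displayed
```

In practice the mapping from instruction text to operation would be done by the voice assistant's own intent resolution rather than a static table; the table here only makes the S203-to-S204 flow concrete.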
Optionally, in the embodiment of the present invention, after the user triggers the terminal device to enable the voice assistant function through one input, the terminal device may display a voice assistant interface or target prompt information, where both may be used to indicate that the terminal device has enabled the voice assistant function. The user may then trigger the terminal device to display the at least one piece of information through another input. That is, in this case, the first input provided in the embodiment of the present invention may include two sub-inputs, namely a first sub-input and a second sub-input, where the first sub-input may be used to trigger the terminal device to enable the voice assistant function, and the second sub-input may be used to trigger the terminal device to display the at least one piece of information.
It is to be understood that, in the embodiment of the present invention, in one possible implementation manner, the user may directly trigger the terminal device to display the at least one piece of information through one input (for example, the first input in S201 and S202); for example, the user presses the icon of the application program that can enable the voice assistant function in the terminal device to trigger the terminal device to display the at least one piece of information. In another possible implementation manner, the user may first trigger the terminal device to enable the voice assistant function (i.e., display the voice assistant interface or the target prompt information) through one input (e.g., the first sub-input described below), and then trigger the terminal device to display the at least one piece of information through another input (e.g., the second sub-input described below); for example, the user presses the icon of the application program that can enable the voice assistant function to trigger the terminal device to enable the voice assistant function, and then the user inputs the target combination key of the terminal device to trigger the terminal device to display the at least one piece of information. The specific manner may be determined according to actual use requirements, which is not limited in the embodiment of the present invention.
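The two-sub-input variant of the first input can be sketched as a small state machine: the first sub-input enables the voice assistant (S202a), and only afterwards does the second sub-input display the history information (S202b). This is a hypothetical sketch; the state names and the decision to ignore a premature second sub-input are assumptions, not requirements of the patent.

```python
# Hypothetical sketch of S202a/S202b: two sub-inputs of the first input.
class AssistantState:
    def __init__(self):
        self.enabled = False
        self.shown = None  # what the screen currently shows

    def on_first_sub_input(self):
        """S202a: enable the voice assistant and show its interface (or prompt)."""
        self.enabled = True
        self.shown = "voice assistant interface"
        return self.shown

    def on_second_sub_input(self, history):
        """S202b: display the history information, only once the assistant is enabled."""
        if not self.enabled:
            return self.shown  # assistant not yet enabled: leave the screen unchanged
        self.shown = history
        return self.shown
```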
For example, in conjunction with fig. 2, as shown in fig. 5, the above S202 may be specifically implemented by the following S202a and S202b.
S202a, the terminal device responds to the first sub-input and displays the target content.
The target content may be a voice assistant interface or target prompt information, where the target prompt information may be used to indicate that the voice assistant function is enabled. It can be understood that the voice assistant interface is an interface of the voice assistant function, and displaying the voice assistant interface indicates that the terminal device has enabled the voice assistant function.
In the embodiment of the present invention, after the terminal device receives the first sub-input of the user (i.e., the input for triggering the terminal device to enable the voice assistant function), the terminal device may display a voice assistant interface (as shown in fig. 6) on a screen of the terminal device in response to the first sub-input, so as to prompt the user that the terminal device has enabled the voice assistant function; alternatively, the terminal device may display target prompt information on the screen in response to the first sub-input for the same purpose. Generally, after the terminal device enables the voice assistant function, the user can trigger the terminal device to perform corresponding operations through voice input, so as to control the terminal device by voice. In the embodiment of the present invention, after the terminal device enables the voice assistant function, the user can control the terminal device through voice input, and can also control the terminal device by selecting a history instruction previously input by the user by voice. Therefore, the user can use the voice assistant function of the terminal device in any scenario, which improves the flexibility of the voice assistant function of the terminal device.
S202b, the terminal device responds to the second sub-input and displays the at least one piece of information.
Optionally, in the embodiment of the present invention, the second sub-input may be an input by the user to a target combination key of the terminal device, a gesture input by the user on the voice assistant interface (the gesture in the gesture input may be a default gesture of the system of the terminal device, or a user-defined gesture), or an input by the user to a "history input" control on the voice assistant interface.
Optionally, in the embodiment of the present invention, the target combination key may be a key for triggering activation of the voice assistant function together with a volume key (a "volume up" key or a "volume down" key), or the target combination key may be a screen lock key together with a volume key (a "volume up" key or a "volume down" key), which may be determined according to actual use requirements and is not limited in the embodiment of the present invention.
For example, the second sub-input may be a user pressing a key for triggering the voice assistant function and a "volume up" key (or a "volume down" key) at the same time.
It is to be understood that, in the embodiment of the present invention, the second sub-input may be a non-voice input.
It should be noted that, in the embodiment of the present invention, for a method for displaying at least one piece of information by the terminal device in S202b, reference may be specifically made to the related description of S202 in the above embodiment, and details are not described here again.
Optionally, in the embodiment of the present invention, when the target content is the voice assistant interface, the above S202b may be specifically implemented by the following S202b1 or S202b2.
S202b1, the terminal device responds to the second sub-input and displays the at least one piece of information on the voice assistant interface.
Optionally, in the embodiment of the present invention, the terminal device may display the at least one piece of information on the voice assistant interface in the following two manners (i.e., manner three and manner four below).
Manner three: in the case where the voice assistant interface includes only one page, the at least one piece of information may be displayed full screen on the voice assistant interface, or may be displayed in an area on the voice assistant interface.
Manner four: in the case where the voice assistant interface includes multiple pages, the at least one piece of information may be displayed on one or more pages of the voice assistant interface.
S202b2, the terminal device responds to the second sub-input, and displays a sub-interface on the voice assistant interface, wherein the sub-interface comprises at least one piece of information.
Optionally, in an embodiment of the present invention, the sub-interface may be displayed on the voice assistant interface in the form of a floating frame (shown as 70 in fig. 7).
Optionally, in the embodiment of the present invention, the sub-interface may be a slidable interface. That is, when the user slides up, down, left, or right within the sub-interface, the content in the sub-interface may change accordingly (e.g., update from some information to other information). It can be understood that the content in the sub-interface consists entirely of history instructions input by the user by voice through the voice assistant.
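The slidable sub-interface (the floating frame of fig. 7) can be sketched as a fixed-size window over the full history list, where sliding moves the window and updates the visible information. This is a hypothetical sketch; the page size, clamping behavior, and names are illustrative assumptions.

```python
# Hypothetical sketch of the slidable sub-interface: a window onto the
# history list that shifts as the user slides within the floating frame.
class HistorySubInterface:
    def __init__(self, history, page_size=3):
        self.history = history      # all history instructions (voice-input)
        self.page_size = page_size  # how many pieces of information fit in the frame
        self.offset = 0             # index of the first visible piece of information

    def visible(self):
        """The pieces of information currently shown in the floating frame."""
        return self.history[self.offset:self.offset + self.page_size]

    def slide(self, steps):
        """Slide by `steps` entries; the window is clamped to the list bounds."""
        max_offset = max(0, len(self.history) - self.page_size)
        self.offset = min(max(0, self.offset + steps), max_offset)
        return self.visible()
```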
In the embodiment of the present invention, the terminal device may display the at least one piece of information directly on the voice assistant interface, or display it on a sub-interface shown floating over the voice assistant interface. Therefore, the embodiment of the present invention provides multiple different manners for the terminal device to display the at least one piece of information, making the display manner more flexible.
Optionally, in the embodiment of the present invention, when the second sub-input is an input to a target combination key, the above S202b may be specifically implemented by the following S202b3.
S202b3, in the case that the target combination key is a preset combination key, the terminal device responds to the second sub-input and displays the at least one piece of information.
It should be noted that, in the embodiment of the present invention, the method for displaying at least one piece of information by the terminal device in S202b3 may specifically refer to the relevant descriptions of S202, S202b1, and S202b2 in the foregoing embodiment, and no further description is given here.
Optionally, in an embodiment of the present invention, the preset combination key may include a voice assistant key and a volume key, where the voice assistant key is a key for triggering and enabling a voice assistant function.
Optionally, in the embodiment of the present invention, the volume key may be a "volume up" key or a "volume down" key, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
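The preset-combination-key check of S202b3 can be sketched as follows: the history information is displayed only when the keys pressed together match a preset combination (e.g., the voice assistant key plus a volume key). This is a hypothetical sketch; the key names and the set-based matching are illustrative assumptions.

```python
# Hypothetical sketch of S202b3: display the history information only when the
# second sub-input is an input to a preset combination key.
PRESET_COMBINATIONS = [
    {"voice_assistant", "volume_up"},
    {"voice_assistant", "volume_down"},
]

def handle_key_combination(pressed_keys, history):
    """Return the history list if the pressed keys form a preset combination,
    otherwise None (no action is taken)."""
    if set(pressed_keys) in PRESET_COMBINATIONS:
        return history  # S202b3: display at least one piece of information
    return None         # not the preset combination: do nothing
```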
In the embodiment of the present invention, when the second sub-input is an input to a target combination key and the target combination key is a preset combination key, the terminal device may, in response to the second sub-input, display at least one piece of information on its screen (each piece of information indicating an instruction, where each indicated instruction is a history instruction of the user's voice input acquired by the terminal device through the voice assistant function). In this way, when the environment where the user is located is noisy or the user cannot clearly speak an operation instruction, the user can directly trigger the terminal device to perform the corresponding operation through a non-voice input to the displayed information. Therefore, even in such a case, the terminal device can still accurately respond to a history instruction previously input by the user by voice, thereby accurately executing the corresponding operation.
In the embodiment of the present invention, the information display methods shown in the above drawings are each described by way of example with reference to one drawing of the embodiment of the present invention. In specific implementation, the information display method shown in each drawing may also be implemented in combination with any other combinable drawings illustrated in the above embodiments, and details are not described here again.
As shown in fig. 8, an embodiment of the present invention provides a terminal device 300, which may include a receiving module 301 and a display module 302. The receiving module 301 is configured to receive a first input of a user, where the first input is at least used to trigger the terminal device to enable the voice assistant function; a display module 302, configured to display at least one piece of information in response to the first input received by the receiving module 301, where each piece of information is used to indicate an instruction, and the instruction indicated by each piece of information is a history instruction of user voice input acquired by the terminal device through the voice assistant function.
Optionally, in the terminal device provided in the embodiment of the present invention, with reference to fig. 8, as shown in fig. 9, the terminal device 300 provided in the embodiment of the present invention may further include an execution module 303. The receiving module 301 is further configured to receive a second input of the target information in the at least one information from the user after the display module 302 displays the at least one information; an executing module 303, configured to execute the instruction indicated by the target information in response to the second input received by the receiving module 301.
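The module decomposition of figs. 8 and 9 can be sketched as follows: a receiving module accepts the inputs, a display module shows the history information, and an execution module runs the instruction the selected information indicates. This is a hypothetical sketch of the wiring; the patent only names the modules and their roles, and all method names here are illustrative.

```python
# Hypothetical sketch of terminal device 300: receiving module 301,
# display module 302, and execution module 303.
class ReceivingModule:
    def receive(self, user_input):
        return user_input

class DisplayModule:
    def __init__(self):
        self.screen = None

    def display(self, information):
        self.screen = information
        return self.screen

class ExecutionModule:
    def execute(self, target_information):
        return f"executed: {target_information}"

class TerminalDevice:
    def __init__(self, history):
        self.history = history  # history instructions of the user's voice input
        self.receiving = ReceivingModule()
        self.display = DisplayModule()
        self.execution = ExecutionModule()

    def on_first_input(self, first_input):
        """Receive the first input and display the at least one piece of information."""
        self.receiving.receive(first_input)
        return self.display.display(self.history)

    def on_second_input(self, index):
        """Receive the second input and execute the instruction of the target info."""
        self.receiving.receive(index)
        return self.execution.execute(self.history[index])
```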
Optionally, in the terminal device provided in the embodiment of the present invention, the first input includes a first sub input and a second sub input. The display module 302 is specifically configured to display the target content in response to the first sub-input, and display at least one piece of information in response to the second sub-input. The target content is a voice assistant interface or target prompt information, and the target prompt information is used for indicating that the voice assistant function is enabled.
Optionally, in the terminal device provided in the embodiment of the present invention, when the target content is a voice assistant interface, the display module 302 is specifically configured to display at least one piece of information on the voice assistant interface, or the display module 302 is specifically configured to display a sub-interface on the voice assistant interface, where the sub-interface includes at least one piece of information.
Optionally, in the terminal device provided in the embodiment of the present invention, the second sub-input is an input to a target combination key; the display module 302 is specifically configured to, in a case that the target combination key is a preset combination key, respond to the second sub-input received by the receiving module 301, and display at least one piece of information.
Optionally, in the terminal device provided in the embodiment of the present invention, the preset combination key includes a voice assistant key and a volume key, and the voice assistant key is a key for triggering and enabling a voice assistant function.
The terminal device provided by the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and is not described herein again to avoid repetition.
The terminal device can receive a first input of a user (at least used for triggering the terminal device to enable the voice assistant function), and display at least one piece of information in response to the first input, where each piece of information is used to indicate an instruction, and each indicated instruction is a history instruction of the user's voice input acquired by the terminal device through the voice assistant function. With this scheme, since the terminal device can store information indicating the history instructions that the user has voice-input through the voice assistant function, the terminal device can present this information to the user when triggered by the first input. Therefore, when the environment where the user is located is noisy or the user cannot clearly speak an operation instruction, the user can directly trigger the terminal device to execute the corresponding operation by an input to the displayed information. In other words, in the embodiment of the present invention, even if the environment where the user is located is noisy or the user cannot clearly speak the operation instruction, the terminal device can still accurately respond to a history instruction previously input by the user by voice, thereby accurately executing the corresponding operation.
Fig. 10 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 10 is not intended to be limiting, and that terminal devices may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 107 is used for receiving a first input of a user, where the first input is used for triggering the terminal device to enable the voice assistant function; the display unit 106 is configured to display at least one piece of information in response to the first input received by the user input unit 107, where each piece of information is used to indicate an instruction, and each indicated instruction is a history instruction of the user's voice input acquired by the terminal device through the voice assistant function.
The terminal device can receive a first input of a user (at least used for triggering the terminal device to enable the voice assistant function), and display at least one piece of information in response to the first input, where each piece of information is used to indicate an instruction, and each indicated instruction is a history instruction of the user's voice input acquired by the terminal device through the voice assistant function. With this scheme, since the terminal device can store information indicating the history instructions that the user has voice-input through the voice assistant function, the terminal device can present this information to the user when triggered by the first input. Therefore, when the environment where the user is located is noisy or the user cannot clearly speak an operation instruction, the user can directly trigger the terminal device to execute the corresponding operation by an input to the displayed information. In other words, in the embodiment of the present invention, even if the environment where the user is located is noisy or the user cannot clearly speak the operation instruction, the terminal device can still accurately respond to a history instruction previously input by the user by voice, thereby accurately executing the corresponding operation.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 10, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, including the processor 110 and the memory 109 shown in fig. 10, and a computer program stored in the memory 109 and runnable on the processor 110, where the computer program, when executed by the processor 110, implements the processes of the above method embodiments and can achieve the same technical effects; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An information display method, applied to a terminal device, the method comprising:
receiving a first input of a user, wherein the first input is a non-voice input and is at least used for triggering the terminal equipment to enable a voice assistant function;
responding to the first input, displaying at least one piece of information, wherein each piece of information in the at least one piece of information is used for indicating an instruction, and the instruction indicated by each piece of information is a history instruction which is input by the voice of the user and is acquired by the terminal equipment through the voice assistant function;
receiving a second input of a target information in the at least one information by a user, wherein the second input is a non-voice input and comprises two sub-inputs, one sub-input is an input to one or some physical keys on the terminal equipment, and the other sub-input is a touch input to an area where the target information is located or an input to another or other physical keys on the terminal equipment;
in response to the second input, executing the instruction indicated by the target information.
2. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input;
the displaying at least one piece of information in response to the first input comprises:
in response to the first sub-input, displaying target content, wherein the target content is a voice assistant interface or target prompt information, and the target prompt information is used to indicate that the voice assistant function is enabled;
in response to the second sub-input, displaying the at least one piece of information.
3. The method of claim 2, wherein the target content is a voice assistant interface;
the displaying the at least one piece of information comprises:
displaying the at least one piece of information on the voice assistant interface;
or,
displaying a sub-interface on the voice assistant interface, wherein the sub-interface comprises the at least one piece of information.
4. The method according to claim 2 or 3, wherein the second sub-input is an input on a target combination key;
the displaying the at least one piece of information in response to the second sub-input comprises:
displaying the at least one piece of information in response to the second sub-input in a case where the target combination key is a preset combination key.
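The combination-key gate in claim 4 reduces to a simple equality check: the information is displayed only when the pressed combination matches the preset one. The key names below (`power`, `volume_up`) and the preset itself are assumed for illustration only.

```python
# Assumed preset combination key; the patent does not specify which keys.
PRESET_COMBINATION = frozenset({"power", "volume_up"})

def should_display(pressed_keys) -> bool:
    """Claim 4's condition: display the at least one piece of information
    only when the target combination key equals the preset combination key."""
    return frozenset(pressed_keys) == PRESET_COMBINATION

should_display({"power", "volume_up"})  # matches the preset
should_display({"power"})               # a single key does not match
```

Using set equality rather than subset membership means pressing extra keys alongside the preset combination also fails the check, which matches the claim's "is a preset combination key" wording.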
5. A terminal device, comprising a receiving module, a display module and an execution module;
the receiving module is configured to receive a first input of a user, wherein the first input is a non-voice input and is at least used for triggering the terminal device to enable a voice assistant function;
the display module is configured to display at least one piece of information in response to the first input received by the receiving module, wherein each piece of information is used to indicate an instruction, and the instruction indicated by each piece of information is a history instruction that was input by the user's voice and acquired by the terminal device through the voice assistant function;
the receiving module is further configured to receive, after the display module displays the at least one piece of information, a second input of a user on target information in the at least one piece of information, wherein the second input is a non-voice input and comprises two sub-inputs: one sub-input is an input on one or more physical keys of the terminal device, and the other sub-input is a touch input on an area where the target information is located or an input on one or more other physical keys of the terminal device;
the execution module is configured to execute, in response to the second input received by the receiving module, the instruction indicated by the target information.
6. The terminal device of claim 5, wherein the first input comprises a first sub-input and a second sub-input;
the display module is specifically configured to display target content in response to the first sub-input, and to display the at least one piece of information in response to the second sub-input, wherein the target content is a voice assistant interface or target prompt information, and the target prompt information is used to indicate that the voice assistant function is enabled.
7. The terminal device of claim 6, wherein the target content is a voice assistant interface;
the display module is specifically configured to display the at least one piece of information on the voice assistant interface;
or,
the display module is specifically configured to display a sub-interface on the voice assistant interface, wherein the sub-interface comprises the at least one piece of information.
8. The terminal device according to claim 6 or 7, wherein the second sub-input is an input to a target combination key;
the display module is specifically configured to display the at least one piece of information in response to the second sub-input in a case where the target combination key is a preset combination key.
9. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the information display method according to any one of claims 1 to 4.
CN201910132306.5A 2019-02-22 2019-02-22 Information display method and terminal equipment Active CN110012151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910132306.5A CN110012151B (en) 2019-02-22 2019-02-22 Information display method and terminal equipment

Publications (2)

Publication Number Publication Date
CN110012151A CN110012151A (en) 2019-07-12
CN110012151B true CN110012151B (en) 2021-08-24

Family

ID=67165957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910132306.5A Active CN110012151B (en) 2019-02-22 2019-02-22 Information display method and terminal equipment

Country Status (1)

Country Link
CN (1) CN110012151B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910872B (en) * 2019-09-30 2023-06-02 华为终端有限公司 Voice interaction method and device
CN112245928A (en) * 2020-10-23 2021-01-22 网易(杭州)网络有限公司 Guiding method and device in game, electronic equipment and storage medium
CN114115620B (en) * 2021-10-27 2023-10-24 青岛海尔科技有限公司 Prompt box response method and device, storage medium and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319485A (en) * 2018-01-29 2018-07-24 出门问问信息科技有限公司 Information interacting method, device, equipment and storage medium
CN108681567A (en) * 2018-05-03 2018-10-19 青岛海信移动通信技术股份有限公司 A kind of information recommendation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460085B2 (en) * 2013-12-09 2016-10-04 International Business Machines Corporation Testing and training a question-answering system

Also Published As

Publication number Publication date
CN110012151A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN108255378B (en) Display control method and mobile terminal
CN110062105B (en) Interface display method and terminal equipment
CN110058836B (en) Audio signal output method and terminal equipment
US11658932B2 (en) Message sending method and terminal device
CN111142991A (en) Application function page display method and electronic equipment
CN110837327B (en) Message viewing method and terminal
CN108897473B (en) Interface display method and terminal
CN111142723B (en) Icon moving method and electronic equipment
CN109710349B (en) Screen capturing method and mobile terminal
CN109085968B (en) Screen capturing method and terminal equipment
CN111026484A (en) Application sharing method, first electronic device and computer-readable storage medium
CN110233933B (en) Call method, terminal equipment and computer readable storage medium
CN110752981B (en) Information control method and electronic equipment
CN110703972B (en) File control method and electronic equipment
CN111163224B (en) Voice message playing method and electronic equipment
CN108804151B (en) Method and terminal for restarting application program
CN110012151B (en) Information display method and terminal equipment
CN111190517B (en) Split screen display method and electronic equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN109992192B (en) Interface display method and terminal equipment
CN109189514B (en) Terminal device control method and terminal device
CN109829707B (en) Interface display method and terminal equipment
CN111443968A (en) Screenshot method and electronic equipment
CN109068276B (en) Message conversion method and terminal
CN111026454A (en) Function starting method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant