CN109347721B - Information sending method and terminal equipment - Google Patents


Info

Publication number
CN109347721B
CN109347721B (application CN201811140213.9A)
Authority
CN
China
Prior art keywords
information
voice
expression
input
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811140213.9A
Other languages
Chinese (zh)
Other versions
CN109347721A (en)
Inventor
赵鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811140213.9A
Publication of CN109347721A
Application granted
Publication of CN109347721B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046: Interoperability with other network applications or services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52: User-to-user messaging in packet-switching networks for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides an information sending method and a terminal device. The information sending method includes: receiving a first input by a user on a recording button of a chat interface; receiving, in response to the first input, first voice information entered by the user; determining, according to fingerprint information of the first input, a target information type corresponding to that fingerprint information; and determining first associated information corresponding to the first voice information and sending the first associated information to an information receiver, where the type of the first associated information is the target information type. In the embodiments of the invention, different fingerprint information is acquired and, after the voice information entered by the user is received, the sending form of the information is selected according to that fingerprint information, which optimizes the sending process, improves the user's sending experience, and makes sending more engaging.

Description

Information sending method and terminal equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information sending method and a terminal device.
Background
Currently, emoticons have become an indispensable communication medium on social networks, and people often prefer emoticons to plain text for conveying their thoughts. As the number of emoticons grows, however, finding a particular emoticon becomes increasingly difficult. For example, current chat tools bundle many emoticons, and locating the one to send typically requires swiping through the collection many times. Irritated by the excessive swiping, a user may settle for a second-choice emoticon instead, failing to convey the intended idea accurately, which results in a poor user experience.
In view of the above problem, the prior art proposes the following improvements: 1. changing horizontal swiping to vertical swiping, which reduces the number of swipes to some extent and speeds up the search; 2. when text typed by the user matches text in an emoticon, recommending the corresponding emoticon for the user to select and send.
Although swiping through the screen lets the user see the stored emoticons at a glance, in many cases the user does not need to know which emoticons exist and only needs to find the one to send quickly, so the conventional scheme largely wastes the user's time.
Although emoticons can be recommended to the user quickly, the recommended ones are not necessarily the user's favorites; moreover, the same keyword may correspond to several different emoticons, so the user still has to choose among them. In addition, many emoticons contain no text, so the user does not know what to type to find them.
Therefore, existing emoticon search schemes are either slow or unable to locate the desired emoticon accurately. Furthermore, in the prior art an emoticon is usually sent directly once found, and the user cannot attach other information without first canceling the emoticon, which limits the sending process.
Disclosure of Invention
Embodiments of the present invention provide an information sending method and a terminal device, to solve the problems that existing ways of searching for expression information are slow or inaccurate, and that the existing information sending process is limited.
In order to solve the above problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an information sending method, including:
receiving a first input of a recording button of a chat interface from a user;
receiving first voice information input by a user in response to a first input;
determining a target information type corresponding to the fingerprint information according to the first input fingerprint information;
and determining first associated information corresponding to the first voice information, and sending the first associated information to an information receiving party, wherein the type of the first associated information is the type of the target information.
In a second aspect, an embodiment of the present invention provides a terminal device, including:
the first receiving module is used for receiving first input of a recording button of the chat interface from a user;
the second receiving module is used for responding to the first input and receiving the first voice information input by the user;
the determining module is used for determining a target information type corresponding to the fingerprint information according to the first input fingerprint information;
and the processing module is used for determining first associated information corresponding to the first voice information and sending the first associated information to the information receiving party, wherein the type of the first associated information is the type of the target information.
In a third aspect, an embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the information sending method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the information sending method are implemented.
In the embodiments of the invention, a first input by the user on a recording button of a chat interface is received; in response to the first input, the user's first voice information is acquired; the corresponding target information type is determined according to the fingerprint of the first input; first associated information that corresponds to the first voice information and is of the same type as the target information type is determined; and the first associated information is sent to the receiver. Different sending forms can thus be selected through in-screen fingerprint recognition, which optimizes the sending process, improves the user experience, and makes sending more engaging.
Drawings
Fig. 1 is a schematic diagram illustrating an information sending method according to an embodiment of the present invention;
fig. 2 is a schematic overall flow chart of an information sending method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an information sending method, as shown in fig. 1, including:
step 101, receiving a first input of a recording button of a chat interface from a user.
In a state where a social application is open, the terminal device receives a first input by the user on a recording button displayed on a chat interface. The first input here may be a long-press input, but may of course be another type of input. After the first input on the recording button is received, step 102 may be performed in response to it.
Step 102, responding to the first input, and receiving first voice information input by a user.
After the first input is received, the first voice information entered by the user once the recording button has been activated is acquired. That is, after the user performs the first input on the recording button, the button is in an on state and the terminal device can acquire the first voice information through it.
And 103, determining a target information type corresponding to the fingerprint information according to the first input fingerprint information.
After the terminal device receives the first input of the recording button, fingerprint information corresponding to the first input can be acquired, and then according to the acquired fingerprint information, a target information type corresponding to the first input fingerprint information is determined in a preset corresponding relation between the fingerprint information and the information type.
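Purely as an illustrative sketch (not part of the claimed method), the preset correspondence between fingerprint information and information types described in step 103 could be modeled as a simple lookup table; all identifiers, fingerprint IDs, and the fallback behavior below are hypothetical.

```python
# Hypothetical mapping of pre-registered fingerprint IDs to target
# information types; the IDs and type names are illustrative only.
EXPRESSION = "expression"
VOICE = "voice"

FINGERPRINT_TO_TYPES = {
    "fp_index": {EXPRESSION},          # first fingerprint -> expression only
    "fp_middle": {EXPRESSION, VOICE},  # second fingerprint -> both types
    "fp_thumb": {VOICE},               # third fingerprint -> voice only
}

def target_info_types(fingerprint_id):
    """Return the target information type(s) for a recognized fingerprint."""
    try:
        return FINGERPRINT_TO_TYPES[fingerprint_id]
    except KeyError:
        # Assumed fallback: an unregistered finger sends plain voice.
        return {VOICE}
```

A real implementation would obtain `fingerprint_id` from the device's in-screen fingerprint sensor rather than as a string parameter.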
And step 104, determining first associated information corresponding to the first voice information, and sending the first associated information to an information receiving party, wherein the type of the first associated information is the type of the target information.
After the user's first voice information is acquired and the target information type is determined from the fingerprint information of the first input, first associated information that corresponds to the first voice information and is of the same type as the target information type is determined, and the determined first associated information is sent to the information receiver. The first associated information may be voice-type information, expression-type information, or both.
When the first associated information is expression-type information, the associated expression information can be determined from the voice information entered by the user, which improves the efficiency of finding expression information. By recognizing the user's fingerprint information, different sending forms can be selected, which optimizes the sending process, improves the user experience, and makes sending more engaging.
In the embodiment of the invention, the target information type is an expression type; the step of determining first associated information corresponding to the first voice information and sending the first associated information to the information receiver comprises the following steps:
determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information; and sending the first expression associated information to an information receiver.
If the fingerprint information entered by the user is the first fingerprint information, the target information type is determined to be the expression type. When determining the first associated information, first expression associated information corresponding to the first voice information is looked up according to the pre-established correspondence between voice information and expression information; this first expression associated information is the first associated information. Once determined, it is sent to the information receiver.
By establishing the correspondence between voice information and expression information in advance, the corresponding first expression associated information is looked up from the user's first voice input whenever the target information type is the expression type. The user's voice information and expression information are thus bound, and the user can find the corresponding first expression associated information quickly and conveniently, which greatly improves the user experience and makes sending more engaging. At the same time, recognizing the user's fingerprint information to decide that expression-type information is sent to the receiver optimizes the sending process and improves the user's sending experience.
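As a minimal sketch of the lookup just described (not the patent's implementation), the pre-established correspondence could pair recognized voice text with expression identifiers; the keywords and sticker names below are illustrative examples from the description.

```python
# Hypothetical table pairing recognized voice keywords with expression
# (sticker) identifiers; all entries are illustrative.
voice_to_expression = {
    "good night": "sticker_moon",
    "hungry": "sticker_food",
    "shy": "sticker_blush",
}

def find_expression(recognized_text):
    """Return the expression bound to the recognized voice text, if any."""
    # Normalize whitespace and case so minor recognition variance matches.
    return voice_to_expression.get(recognized_text.strip().lower())
```

In practice the key would come from a speech-recognition step applied to the first voice information, which this sketch assumes has already happened.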
In the embodiment of the invention, the target information types are expression types and voice types; the step of determining first associated information corresponding to the first voice information and sending the first associated information to the information receiver comprises the following steps:
determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information; and sending the first expression associated information and the first voice information to an information receiving party.
If the fingerprint information entered by the user is the second fingerprint information, the target information types are determined to be the expression type and the voice type. When determining the first associated information, first expression associated information corresponding to the first voice information is looked up according to the pre-established correspondence between voice information and expression information, and the first expression associated information together with the first voice information is determined as the first associated information. Once determined, the first expression associated information and the first voice information are sent to the information receiver.
Establishing the correspondence between voice information and expression information in advance makes it quicker and more convenient to find the corresponding first expression associated information, which greatly improves the user experience and makes sending more engaging. At the same time, recognizing the user's fingerprint information to decide that expression-type and voice-type information are sent to the receiver simultaneously optimizes the sending process and improves the user's sending experience.
In the embodiment of the invention, the target information type is a voice type; the step of determining first associated information corresponding to the first voice information and sending the first associated information to the information receiver comprises the following steps: and determining that the first associated information is first voice information, and sending the first voice information to an information receiver.
If the fingerprint information entered by the user is the third fingerprint information, the target information type can be determined to be the voice type. In this case, the acquired first voice information of the user is directly determined to be the first associated information and is sent to the information receiver. Sending voice-type information to the receiver can thus be triggered through fingerprint information, which optimizes the sending process and improves the user's sending experience.
In the embodiments of the invention, by using in-screen fingerprint technology the user can choose to send only the voice information, only the expression information, or both at the same time. This avoids an expression-information prompt popping up when the user only wants to send voice, allows the expression information and voice information to be sent synchronously so the receiver can better understand the expression information, or sends only the expression information as the user requires. The sending process is thereby optimized, the user can choose what to send as needed, and the user experience is improved.
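The three sending cases above can be summarized in one dispatch function. This is an illustrative sketch only; the payload format and type names are assumptions, not part of the patent.

```python
def build_payload(target_types, voice_msg, expression_msg=None):
    """Assemble the first associated information according to the target
    information types determined from the fingerprint.

    target_types: set containing "expression" and/or "voice".
    Returns a list of (type, message) tuples to send to the receiver.
    """
    payload = []
    # Expression first, so the receiver sees the sticker with its voice note.
    if "expression" in target_types and expression_msg is not None:
        payload.append(("expression", expression_msg))
    if "voice" in target_types:
        payload.append(("voice", voice_msg))
    return payload
```

For the "both" case the two items travel together, matching the synchronous sending of expression and voice information described above.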
In the embodiments of the present invention, for the scheme of determining the corresponding first expression associated information from the first voice information, a correspondence between voice information entered by the user and expression information needs to be established in advance. When establishing the correspondence, the terminal device first acquires the voice information entered by the user after a piece of expression information has been selected, then establishes the correspondence between that voice information and that expression information. The content of the voice information may relate to the content of the expression information, and the user can choose suitable voice information according to their needs.
The following describes in detail the process of establishing the correspondence between the speech information and the expression information input by the user.
Before receiving a first input of a user to a recording button of a chat interface, receiving a first touch operation of the user to expression information; receiving a second input of the recording button of the chat interface from the user when the expression information is in a selected state according to the first touch operation; and responding to the second input, acquiring the voice information input by the user, and establishing the corresponding relation between the voice information and the currently selected expression information.
When the user comes across a favorite piece of expression information, the user can perform a first touch operation on it; upon receiving the first touch operation, the expression information enters the selected state. With the expression information selected, the user can perform a second input on the recording button of the chat interface and enter the corresponding voice information; the terminal device receives the voice information entered by the user and establishes a correspondence between it and the selected expression information.
For example, when the user comes across a favorite piece of expression information in the chat interface, the user can long-press it to put it into the selected state, then tap the recording button of the chat interface to start recording and enter a keyword related to the expression information. For instance, for expression information related to going to bed, the user may record the keyword "good night"; for expression information about food, the keyword "hungry"; for a shy facial expression, the keyword "shy". Of course, the keyword may also be chosen by the user without regard to the content of the expression information. When recording ends, the recorded information is stored, thereby establishing the correspondence between it and the selected expression information. A correspondence exclusive to the user is thus formed.
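The long-press-then-record binding flow could be sketched as a small state machine. This is a hypothetical illustration; the class, method names, and error handling are not from the patent.

```python
class ExpressionBinder:
    """Sketch of binding a recorded keyword to a selected expression."""

    def __init__(self):
        self.selected = None   # expression currently in the selected state
        self.bindings = {}     # recognized keyword -> expression id

    def long_press(self, expression_id):
        """First touch operation: put the expression in the selected state."""
        self.selected = expression_id

    def record_keyword(self, keyword):
        """Second input on the record button: bind the recorded keyword
        to the selected expression and clear the selection."""
        if self.selected is None:
            raise RuntimeError("no expression selected")
        self.bindings[keyword] = self.selected
        self.selected = None
        return keyword
```

The `bindings` table then plays the role of the user-exclusive correspondence consulted later when voice input arrives.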
The step of acquiring, in response to the second input, the voice information entered by the user and establishing the correspondence between the voice information and the currently selected expression information includes: acquiring, in response to the second input, the voice character information entered by the user; and establishing a correspondence between the voice character information and the currently selected expression information.
When establishing the correspondence, the voice character information entered by the user is acquired, and the correspondence between it and the expression information is then established. When the user later searches for the expression information, the corresponding voice character information can be entered, and the desired expression information is obtained from the correspondence between voice character information and expression information. This addresses the problem that searching for a required piece of expression information among many is cumbersome and slow.
Responding to the second input, acquiring voice information input by a user, and establishing a corresponding relation between the voice information and the currently selected expression information, wherein the method further comprises the following steps:
responding to the second input to acquire the voice character information and voice tone information entered by the user; and establishing a correspondence between the voice character information plus voice tone information and the currently selected expression information; wherein one piece of voice character information corresponds to at least one piece of expression information, and one piece of voice character information together with one piece of voice tone information corresponds to one piece of expression information.
When establishing the correspondence, the intonation of the user's voice may be introduced as another feature value, serving as auxiliary information. The voice character information and voice tone information can be acquired simultaneously when the user's voice input is acquired. By introducing voice tone information, one piece of voice character information can correspond to more than one piece of expression information, reducing the user's burden of memorizing voice character information.
With voice tone information introduced as a feature value, a correspondence between the voice character information plus voice tone information and the currently selected expression information can be established. When the voice information entered by the user is analyzed, both the voice character information and the voice tone information need to be analyzed, so that the same voice character information can map to different expression information, with the unique expression information then determined by the voice tone information.
Some pieces of expression information express the same meaning but with different tones. For example, a laughing expression may sometimes convey sarcasm, sometimes disdain, sometimes a perfunctory response, and sometimes genuine heartfelt laughter. All of these may be represented by the same voice character information "laugh", yet acoustically they differ in intonation because the emotions conveyed differ. The user can therefore speak the voice character information with the corresponding intonation to pick out the corresponding expression information.
Using the voice tone information of the user's recording as an auxiliary feature variable for finding expression information adds the user's emotion as another path for recommending expression information. One piece of voice character information can correspond to several pieces of expression information, and by speaking it in different tones the user can accurately and quickly find the expression information actually wanted. This reduces the amount of voice character information the user must memorize, allows different expression information to be bound to the same voice character information, and improves the practicality of expression search.
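The keyword-plus-tone scheme amounts to a two-level lookup: the voice character information selects a group of candidate expressions, and the voice tone information picks the unique one. A minimal sketch, with all keywords, tone labels, and sticker names assumed for illustration:

```python
# Hypothetical two-level correspondence: keyword -> tone -> expression id.
# One keyword ("laugh") maps to several expressions; the tone disambiguates.
bindings = {
    "laugh": {
        "sarcastic": "sticker_smirk",
        "hearty": "sticker_lol",
    },
}

def lookup(keyword, tone):
    """Resolve (voice character info, voice tone info) to one expression."""
    by_tone = bindings.get(keyword, {})
    return by_tone.get(tone)
```

Real tone labels would come from some acoustic classification of the recording; this sketch assumes that classification is available as a string.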
After the first associated information corresponding to the first voice information is determined: if the target information type is the voice type, i.e. the first associated information is voice information, it may be sent immediately or after a certain time interval; if the target information type is the expression type, i.e. the first associated information is expression information, it may be sent directly, or placed in the information input box and sent on the user's sending instruction; if the target information type is both the voice type and the expression type, i.e. the first associated information is voice information and expression information, it may likewise be sent directly, or placed in the information input box so that the expression information and voice information are sent together on the user's sending instruction.
In the case of sending on the user's sending instruction, the sending process may be executed when an operation of a send button by the user is received; at this point fingerprint recognition is not required, and only the sending instruction is received.
It should be noted that, when the first associated information is expression information, or expression information and voice information, after the expression information is found: if no second touch operation by the user on the expression information is received within a preset time period, the display of the expression information is cancelled; or, if a third touch operation by the user on a non-expression-information area of the chat interface is received within the preset time period, the display of the expression information is likewise cancelled.
After the expression information is found, if the user wants to send it, the user can tap it to place it in the input box and execute the sending process. If the user does not want to send it immediately, the user can simply refrain from performing the second touch operation within the preset time period; when the terminal device receives no second touch operation on the expression information within that period, the display of the expression information is cancelled. Alternatively, the user can perform a third touch operation on a non-expression-information area of the chat interface within the preset time period; upon receiving it, the terminal device also cancels the display of the expression information, thereby cancelling its sending.
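The confirm-or-cancel behavior above can be expressed as a small decision function over touch events. This is an illustrative sketch only; the event representation, timestamps, and the 5-second preset are assumptions (real code would use UI timers rather than a replayed event list).

```python
PRESET_SECONDS = 5  # hypothetical preset duration

def resolve_display(events, preset=PRESET_SECONDS):
    """Decide the fate of a displayed expression.

    events: list of (seconds_since_display, kind) tuples, in time order;
    kind is "tap_expression" (second touch) or "tap_elsewhere" (third touch).
    Returns "send" if confirmed in time, otherwise "cancel".
    """
    for t, kind in events:
        if t > preset:
            break  # the event arrived after the preset period expired
        if kind == "tap_expression":
            return "send"    # second touch within the period: send it
        if kind == "tap_elsewhere":
            return "cancel"  # third touch on a non-expression area: cancel
    return "cancel"  # timed out with no confirming touch
```

Both cancellation paths in the description (timeout and tap outside the expression area) collapse to the same "cancel" outcome here.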
In the embodiment of the present invention, whether the first associated information is voice information, expression information, or includes both, after the first associated information corresponding to the first voice information is determined, it may be sent immediately or after a time interval. In the case of sending after a time interval, if a fourth touch operation of the user on a non-input area of the chat interface is received before sending, the sending may be cancelled, so that the user is never left unable to cancel the sending process.
To sum up, in the embodiment of the present invention, a first input of the user on a recording button of a chat interface is received; in response to the first input, first voice information of the user is obtained; a corresponding target information type is determined according to the fingerprint of the first input; first associated information that corresponds to the first voice information and is of the target information type is determined; and the first associated information is sent to the information receiver.
Since the corresponding relation between the voice information input by the user and the expression information is established in advance, the corresponding expression information can be searched for according to the user's voice information. By binding the user's voice information to expression information, the user can find the corresponding expression information quickly and conveniently, which greatly improves the user experience while adding interest.
Furthermore, the voice tone information is used as an auxiliary feature for finding expression information, which reduces the amount of voice character information the user must memorize, allows the same voice character information to be bound to different expression information, and improves the practicability of expression-information search.
A specific implementation illustrating the overall flow of the embodiment of the present invention is shown in fig. 2:
step 201, receiving a first touch operation of a user on expression information.
Step 202, when the expression information is selected, acquiring voice information input by a user, and establishing a corresponding relation between the voice information and the currently selected expression information.
Step 203, receiving a first input of the user on the recording button of the chat interface, and identifying whether the fingerprint information input by the user is first fingerprint information, second fingerprint information or third fingerprint information. If the fingerprint information is the third fingerprint information, step 204 is executed; if it is the first fingerprint information or the second fingerprint information, step 205 is executed.
The first fingerprint information corresponds to sending expression information, the second fingerprint information corresponds to sending expression information together with voice information, and the third fingerprint information corresponds to sending voice information. In other words, inputting a matched fingerprint starts the expression-information search function; if the function is not started, the normal voice-information sending process is executed. For example, the fingerprint of the left thumb (first fingerprint information) may trigger expression-information search and expression-information sending, while the fingerprint of the right thumb (second fingerprint information) may trigger expression-information search and the sending of both expression information and voice information. Then, when the recording button is pressed with the left thumb or the right thumb, expression information can be searched for; if an unmatched fingerprint is input, only the normal voice sending process is triggered.
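The fingerprint-to-behaviour binding can be sketched as a lookup table. The fingerprint identifiers and type names here are hypothetical illustrations, not part of the original disclosure:

```python
# Hypothetical mapping from enrolled fingerprint IDs to target information types.
FINGERPRINT_TO_TARGET = {
    "left_thumb":  {"expression"},           # first fingerprint information
    "right_thumb": {"expression", "voice"},  # second fingerprint information
    "right_index": {"voice"},                # third fingerprint information
}

def target_types_for(fingerprint_id):
    # Unmatched fingerprints fall back to the normal voice sending process.
    return FINGERPRINT_TO_TARGET.get(fingerprint_id, {"voice"})
```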
Step 204, acquiring the first voice information input by the user and sending it directly to the information receiver; the flow then ends. This step enters the normal voice sending process: after the user speaks, the voice can be sent directly to the other party.
And step 205, acquiring first voice information input by the user, and searching corresponding first expression associated information according to the corresponding relation.
If expression information needs to be searched for, the words and intonation spoken by the user, such as "good night", "hungry" or "shy", can be recognized from the first voice information input by the user by means of speech recognition, and the first expression associated information bound to those words and that intonation can then be found in the database. The search for the first expression associated information can use both voice character information and voice tone information, where one piece of voice character information corresponds to at least one piece of expression information, and one piece of voice character information combined with one piece of voice tone information corresponds to one piece of expression information.
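A minimal sketch of such a lookup, assuming the correspondence is stored as a table keyed by recognized text and an optional intonation label (all names and entries here are hypothetical):

```python
# (text, tone) pairs map to exactly one expression; text alone may match several.
CORRESPONDENCE = {
    ("good night", "gentle"):  "moon_emoji",
    ("good night", "playful"): "wink_emoji",
    ("hungry", None):          "drool_emoji",
}

def find_expression(text, tone=None):
    """Look up expression info by voice character info, refined by voice tone info."""
    if (text, tone) in CORRESPONDENCE:
        return CORRESPONDENCE[(text, tone)]
    # Fall back to any expression bound to the text alone.
    matches = [v for (t, _), v in CORRESPONDENCE.items() if t == text]
    return matches[0] if matches else None
```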
After the first expression associated information is found, the corresponding execution process is determined according to the fingerprint information: if the user input the first fingerprint information, step 206 is executed; if the user input the second fingerprint information, step 207 is executed.
And step 206, only sending the first expression associated information to the information receiver.
And step 207, sending the first voice information and the first expression associated information to an information receiver at the same time.
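Steps 203 to 207 above can be strung together in a rough sketch. All function names are hypothetical, and the speech-recognition and expression-lookup stages are passed in as stubs:

```python
def handle_record_press(fingerprint_id, voice_info, recognize, lookup, send):
    """Sketch of steps 203-207: dispatch on fingerprint, then search or send."""
    first = {"expression"}            # first fingerprint: expression only
    second = {"expression", "voice"}  # second fingerprint: expression + voice
    targets = {"left_thumb": first, "right_thumb": second}.get(fingerprint_id, {"voice"})
    if targets == {"voice"}:
        send(voice_info)                       # step 204: normal voice sending
        return
    text, tone = recognize(voice_info)         # step 205: speech recognition
    expression = lookup(text, tone)
    if targets == first:
        send(expression)                       # step 206: expression only
    else:
        send((voice_info, expression))         # step 207: voice and expression together
```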
According to the embodiment of the invention, binding the user's recording to expression information lets the user find the expression information to be sent more quickly and conveniently, greatly improving the user experience and adding interest; introducing voice tone information reduces the amount of voice character information to be memorized, so that the same voice character information can be bound to different expression information, improving the practicability of expression-information search; and using in-screen fingerprint identification optimizes the sending process and further improves the user experience.
An embodiment of the present invention provides a terminal device, as shown in fig. 3, including:
the first receiving module 10 is configured to receive a first input of a recording button of a chat interface from a user;
a second receiving module 20, configured to receive, in response to the first input, first voice information input by a user;
the determining module 30 is configured to determine, according to the first input fingerprint information, a target information type corresponding to the fingerprint information;
and the processing module 40 is configured to determine first associated information corresponding to the first voice information, and send the first associated information to the information receiving side, where the type of the first associated information is a target information type.
Wherein the target information type is an expression type; the processing module comprises:
the first determining submodule is used for determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information;
and the first sending submodule is used for sending the first expression associated information to the information receiving party.
The target information type is an expression type and a voice type; the processing module comprises:
the second determining submodule is used for determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information;
and the second sending submodule is used for sending the first expression associated information and the first voice information to the information receiving party.
Wherein the target information type is a voice type; the processing module is further configured to:
and determining that the first associated information is first voice information, and sending the first voice information to an information receiver.
Wherein, the terminal device further includes:
the third receiving module is used for receiving a first touch operation of the user on expression information before the first receiving module receives a first input of the user on the recording button of the chat interface;
the fourth receiving module is used for receiving second input of the recording button of the chat interface from the user when the expression information is in a selected state according to the first touch operation;
and the establishing module is used for responding to the second input, acquiring the voice information input by the user and establishing the corresponding relation between the voice information and the currently selected expression information.
Wherein, the establishing module comprises:
the first acquisition submodule is used for responding to the second input to acquire the voice and text information input by the user;
and the first establishing submodule is used for establishing the corresponding relation between the voice character information and the currently selected expression information.
Wherein, the establishing module comprises:
the second acquisition submodule is used for responding to second input to acquire voice character information and voice tone information input by a user;
the second establishing submodule is used for establishing the corresponding relation between the voice character information and the voice tone information and the currently selected expression information;
wherein one piece of voice character information corresponds to at least one piece of expression information, and one piece of voice character information combined with one piece of voice tone information corresponds to one piece of expression information.
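The cardinality stated here (one piece of voice character information may map to several expressions, but a character-plus-tone pair maps to exactly one) can be enforced by the establishing module's store. The class below is a hypothetical sketch; new bindings for the same pair overwrite the old one:

```python
class CorrespondenceStore:
    """Hypothetical store for the voice-to-expression correspondence."""
    def __init__(self):
        self.by_pair = {}

    def bind(self, character_info, tone_info, expression):
        # A (character, tone) pair corresponds to exactly one expression.
        self.by_pair[(character_info, tone_info)] = expression

    def expressions_for(self, character_info):
        # One piece of voice character information may correspond to several expressions.
        return {e for (c, _), e in self.by_pair.items() if c == character_info}

    def expression_for(self, character_info, tone_info):
        return self.by_pair.get((character_info, tone_info))
```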
According to the terminal device provided by the embodiment of the invention, a first input of the user on the recording button of the chat interface is received; in response to the first input, the first voice information of the user is obtained; the corresponding target information type is determined according to the fingerprint of the first input; and first associated information that corresponds to the first voice information and is of the target information type is determined and sent to the receiver. Different information sending forms can thus be selected through in-screen fingerprint identification, which optimizes the sending process, improves the user experience, and makes sending more interesting.
Since the corresponding relation between the voice information input by the user and the expression information is established in advance, the corresponding expression information can be searched for according to the user's voice information. By binding the user's voice information to expression information, the user can find the corresponding expression information quickly and conveniently, which greatly improves the user experience while adding interest.
Furthermore, the voice tone information is used as an auxiliary feature for finding expression information, which reduces the amount of voice character information the user must memorize, allows the same voice character information to be bound to different expression information, and improves the practicability of expression-information search.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411.
Those skilled in the art will appreciate that the terminal device configuration shown in fig. 4 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein the user input unit 407 is configured to: receiving a first input of a recording button of a chat interface from a user; receiving first voice information input by a user in response to a first input; the processor 410 is configured to: determining a target information type corresponding to the fingerprint information according to the first input fingerprint information; and determining first associated information corresponding to the first voice information, and sending the first associated information to an information receiving party, wherein the type of the first associated information is the type of the target information.
Wherein the target information type is an expression type; when determining the first associated information corresponding to the first voice information and sending the first associated information to the information receiving party, the processor 410 is further configured to perform the following steps: determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information; and sending the first expression associated information to an information receiver.
The target information type is an expression type and a voice type; when determining the first associated information corresponding to the first voice information and sending the first associated information to the information receiving party, the processor 410 is further configured to perform the following steps: determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information; and sending the first expression associated information and the first voice information to an information receiving party.
Wherein the target information type is a voice type; when determining the first associated information corresponding to the first voice information and sending the first associated information to the information receiving party, the processor 410 is further configured to perform the following steps: and determining that the first associated information is first voice information, and sending the first voice information to an information receiver.
Before receiving a first input of the recording button of the chat interface from the user, the user input unit 407 is further configured to perform the following steps: receiving a first touch operation of a user on expression information; receiving a second input of the recording button of the chat interface from the user when the expression information is in a selected state according to the first touch operation; the processor 410 is configured to: and responding to the second input, acquiring the voice information input by the user, and establishing the corresponding relation between the voice information and the currently selected expression information.
Wherein, when acquiring the voice information input by the user in response to the second input and establishing the corresponding relation between the voice information and the currently selected expression information, the processor 410 is further configured to perform the following steps: acquire, in response to the second input, the voice character information input by the user; and establish a corresponding relation between the voice character information and the currently selected expression information.
Wherein, when acquiring the voice information input by the user in response to the second input and establishing the corresponding relation between the voice information and the currently selected expression information, the processor 410 is further configured to perform the following steps: acquire, in response to the second input, the voice character information and voice tone information input by the user; and establish a corresponding relation between the voice character information together with the voice tone information and the currently selected expression information; wherein one piece of voice character information corresponds to at least one piece of expression information, and one piece of voice character information combined with one piece of voice tone information corresponds to one piece of expression information.
Therefore, a first input of the user on the recording button of the chat interface is received; in response to the first input, the first voice information of the user is obtained; the corresponding target information type is determined according to the fingerprint of the first input; and first associated information that corresponds to the first voice information and is of the target information type is determined and sent to the receiver. Different information sending forms can thus be selected through in-screen fingerprint identification, which optimizes the sending process, improves the user experience, and makes sending more interesting.
Since the corresponding relation between the voice information input by the user and the expression information is established in advance, the corresponding expression information can be searched for according to the user's voice information. By binding the user's voice information to expression information, the user can find the corresponding expression information quickly and conveniently, which greatly improves the user experience while adding interest.
Furthermore, the voice tone information is used as an auxiliary feature for finding expression information, which reduces the amount of voice character information the user must memorize, allows the same voice character information to be bound to different expression information, and improves the practicability of expression-information search.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used to receive and send signals during messaging or a call; specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 402, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the terminal apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 406, stored in the memory 409 (or other storage medium), or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data. In phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 401 and output.
The terminal device 400 further comprises at least one sensor 405, such as light sensors, motion sensors and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the terminal apparatus 400 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or provided to the user. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 410, and receives and executes commands from the processor 410. The touch panel 4071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; details are not repeated here.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 4, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 408 is an interface for connecting an external device to the terminal apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 400 or may be used to transmit data between the terminal apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the terminal device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The terminal device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 400 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 410, a memory 409, and a computer program that is stored in the memory 409 and can be run on the processor 410, and when being executed by the processor 410, the computer program implements each process of the above-mentioned information sending method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned information sending method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (16)

1. An information transmission method, comprising:
receiving a first input of a recording button of a chat interface from a user;
receiving first voice information input by a user in response to the first input;
determining a target information type corresponding to the fingerprint information according to the first input fingerprint information; the target information type comprises at least one of a voice type and an expression type;
determining first associated information corresponding to the first voice information, and sending the first associated information to an information receiving party, wherein the type of the first associated information is the type of the target information;
the first associated information is voice type information and/or expression type information.
2. The information transmission method according to claim 1, wherein the target information type is an expression type; the step of determining first associated information corresponding to the first voice information and sending the first associated information to an information receiving party includes:
determining first expression associated information corresponding to the first voice information according to a pre-established corresponding relation between the voice information and the expression information;
and sending the first expression associated information to an information receiver.
3. The information sending method according to claim 1, wherein the target information type is the expression type and the voice type, and the step of determining first associated information corresponding to the first voice information and sending the first associated information to an information receiving party comprises:
determining first expression associated information corresponding to the first voice information according to a pre-established correspondence between voice information and expression information; and
sending the first expression associated information and the first voice information to the information receiving party.
4. The information sending method according to claim 1, wherein the target information type is the voice type, and the step of determining first associated information corresponding to the first voice information and sending the first associated information to an information receiving party comprises:
determining that the first associated information is the first voice information, and sending the first voice information to the information receiving party.
5. The information sending method according to claim 2 or 3, wherein before the receiving of the first input from the user on the recording button of the chat interface, the method further comprises:
receiving a first touch operation of the user on expression information;
receiving, while the expression information is in a selected state according to the first touch operation, a second input from the user on the recording button of the chat interface; and
acquiring, in response to the second input, voice information input by the user, and establishing a correspondence between the voice information and the currently selected expression information.
6. The information sending method according to claim 5, wherein the step of acquiring, in response to the second input, the voice information input by the user and establishing the correspondence between the voice information and the currently selected expression information comprises:
acquiring, in response to the second input, voice text information input by the user; and
establishing a correspondence between the voice text information and the currently selected expression information.
7. The information sending method according to claim 5, wherein the step of acquiring, in response to the second input, the voice information input by the user and establishing the correspondence between the voice information and the currently selected expression information comprises:
acquiring, in response to the second input, voice text information and voice tone information input by the user; and
establishing a correspondence between the voice text information together with the voice tone information and the currently selected expression information;
wherein one piece of voice text information corresponds to at least one piece of expression information, and one piece of voice text information combined with one piece of voice tone information corresponds to one piece of expression information.
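Claims 5 to 7 describe building a correspondence table in which voice text alone may map to several expressions, while voice text combined with tone pins down exactly one. A minimal Python sketch of such a table follows; the class name, method names, and sample entries are illustrative assumptions, not taken from the patent.

```python
# Hypothetical correspondence store for claims 5-7: voice text alone maps to
# one or more expressions; (voice text, voice tone) maps to exactly one.
class ExpressionTable:
    def __init__(self):
        self.by_text = {}        # voice text -> list of expressions
        self.by_text_tone = {}   # (voice text, voice tone) -> one expression

    def establish(self, text, expression, tone=None):
        """Record a correspondence for the currently selected expression."""
        self.by_text.setdefault(text, []).append(expression)
        if tone is not None:
            self.by_text_tone[(text, tone)] = expression

    def lookup(self, text, tone=None):
        """Prefer the exact (text, tone) match; fall back to the first
        expression registered for the text alone."""
        if tone is not None and (text, tone) in self.by_text_tone:
            return self.by_text_tone[(text, tone)]
        matches = self.by_text.get(text, [])
        return matches[0] if matches else None
```

For example, after `establish("haha", "grin")` and `establish("haha", "laughing-tears", tone="high")`, a lookup with the high tone returns the tone-specific expression, while a lookup by text alone falls back to the first registered one.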
8. A terminal device, comprising:
a first receiving module, configured to receive a first input from a user on a recording button of a chat interface;
a second receiving module, configured to receive, in response to the first input, first voice information input by the user;
a determining module, configured to determine, according to fingerprint information of the first input, a target information type corresponding to the fingerprint information, wherein the target information type comprises at least one of a voice type and an expression type; and
a processing module, configured to determine first associated information corresponding to the first voice information and to send the first associated information to an information receiving party, wherein the type of the first associated information is the target information type;
wherein the first associated information is voice-type information and/or expression-type information.
9. The terminal device according to claim 8, wherein the target information type is the expression type, and the processing module comprises:
a first determining submodule, configured to determine first expression associated information corresponding to the first voice information according to a pre-established correspondence between voice information and expression information; and
a first sending submodule, configured to send the first expression associated information to the information receiving party.
10. The terminal device according to claim 8, wherein the target information type is the expression type and the voice type, and the processing module comprises:
a second determining submodule, configured to determine first expression associated information corresponding to the first voice information according to a pre-established correspondence between voice information and expression information; and
a second sending submodule, configured to send the first expression associated information and the first voice information to the information receiving party.
11. The terminal device according to claim 8, wherein the target information type is the voice type, and the processing module is further configured to:
determine that the first associated information is the first voice information, and send the first voice information to the information receiving party.
12. The terminal device according to claim 9 or 10, further comprising:
a third receiving module, configured to receive a first touch operation of the user on expression information before the first receiving module receives the first input from the user on the recording button of the chat interface;
a fourth receiving module, configured to receive, while the expression information is in a selected state according to the first touch operation, a second input from the user on the recording button of the chat interface; and
an establishing module, configured to acquire, in response to the second input, voice information input by the user, and to establish a correspondence between the voice information and the currently selected expression information.
13. The terminal device according to claim 12, wherein the establishing module comprises:
a first acquiring submodule, configured to acquire, in response to the second input, voice text information input by the user; and
a first establishing submodule, configured to establish a correspondence between the voice text information and the currently selected expression information.
14. The terminal device according to claim 12, wherein the establishing module comprises:
a second acquiring submodule, configured to acquire, in response to the second input, voice text information and voice tone information input by the user; and
a second establishing submodule, configured to establish a correspondence between the voice text information together with the voice tone information and the currently selected expression information;
wherein one piece of voice text information corresponds to at least one piece of expression information, and one piece of voice text information combined with one piece of voice tone information corresponds to one piece of expression information.
15. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the information sending method according to any one of claims 1 to 7.
16. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the information sending method according to any one of claims 1 to 7.
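The independent claims above amount to a small dispatch routine: a fingerprint captured during the press of the record button selects the target information type, and a pre-established voice-to-expression table supplies the associated expression. A minimal Python sketch under those assumptions follows; the table contents, finger labels, and placeholder expression strings are all illustrative, not from the patent.

```python
# Hypothetical mapping from fingerprint identifier to target information type
# (claim 1: at least one of a voice type and an expression type).
FINGER_TO_TYPE = {
    "index": {"voice"},                 # claim 4: voice only
    "middle": {"expression"},           # claim 2: expression only
    "thumb": {"voice", "expression"},   # claim 3: both
}

# Pre-established correspondence between recognized voice text and an
# expression (claim 2); entries here are placeholders.
VOICE_TO_EXPRESSION = {
    "haha": "[laughing-expression]",
    "angry": "[angry-expression]",
}

def build_outgoing(fingerprint, voice_text):
    """Return (type, payload) pairs to send, per the target information type
    selected by the fingerprint; unknown fingerprints fall back to voice."""
    out = []
    target = FINGER_TO_TYPE.get(fingerprint, {"voice"})
    if "expression" in target:
        expression = VOICE_TO_EXPRESSION.get(voice_text)
        if expression is not None:
            out.append(("expression", expression))
    if "voice" in target:
        out.append(("voice", voice_text))
    return out
```

With these sample tables, a middle-finger press sends only the matched expression, an index-finger press sends only the voice message, and a thumb press sends both.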
CN201811140213.9A 2018-09-28 2018-09-28 Information sending method and terminal equipment Active CN109347721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811140213.9A CN109347721B (en) 2018-09-28 2018-09-28 Information sending method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109347721A CN109347721A (en) 2019-02-15
CN109347721B true CN109347721B (en) 2021-12-24

Family

ID=65307313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811140213.9A Active CN109347721B (en) 2018-09-28 2018-09-28 Information sending method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109347721B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830368B (en) * 2019-11-22 2022-05-06 维沃移动通信有限公司 Instant messaging message sending method and electronic equipment
CN115400427A (en) * 2022-08-26 2022-11-29 网易(杭州)网络有限公司 Information processing method and device in game, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101072207A (en) * 2007-06-22 2007-11-14 腾讯科技(深圳)有限公司 Exchange method for instant messaging tool and instant messaging tool
CN106789543A (en) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 The method and apparatus that facial expression image sends are realized in session
CN106888158A (en) * 2017-02-28 2017-06-23 努比亚技术有限公司 A kind of instant communicating method and device
CN107610432A (en) * 2017-10-16 2018-01-19 李修球 A kind of intelligent alarm method, system and intelligent mobile terminal
CN108363536A (en) * 2018-02-27 2018-08-03 维沃移动通信有限公司 A kind of expression packet application method and terminal device

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US20100192225A1 (en) * 2009-01-28 2010-07-29 Juniper Networks, Inc. Efficient application identification with network devices
IL226047A (en) * 2013-04-29 2017-12-31 Hershkovitz Reshef May Method and system for providing personal emoticons
CN106034063A (en) * 2015-03-13 2016-10-19 阿里巴巴集团控股有限公司 Method and device for starting service in communication software through voice
US10122660B2 (en) * 2015-03-27 2018-11-06 MINDBODY, Inc. Contextual mobile communication platform
CN104933340B (en) * 2015-06-18 2017-10-24 广东欧珀移动通信有限公司 The sending method and mobile terminal of a kind of message
CN105574480B (en) * 2015-06-30 2019-02-01 宇龙计算机通信科技(深圳)有限公司 A kind of information processing method, device and terminal
CN106570106A (en) * 2016-11-01 2017-04-19 北京百度网讯科技有限公司 Method and device for converting voice information into expression in input process
US10916243B2 (en) * 2016-12-27 2021-02-09 Amazon Technologies, Inc. Messaging from a shared device
CN106850080A (en) * 2017-01-19 2017-06-13 努比亚技术有限公司 The sending method and mobile terminal of a kind of associated person information
CN107453986A (en) * 2017-09-30 2017-12-08 努比亚技术有限公司 Voice-enabled chat processing method and corresponding mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant