CN112425144A - Information prompting method and related product - Google Patents
Information prompting method and related product
- Publication number
- CN112425144A (application CN201880095663.2A)
- Authority
- CN
- China
- Prior art keywords
- information
- instant messaging
- messaging information
- identifier
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/725—Cordless telephones
Abstract
The embodiment of the application discloses an information prompting method and a related product. The method includes the following steps: receiving first instant messaging information of a first application of an electronic device when an applet of the first application is running; converting the first instant messaging information into voice information; and controlling an audio component of the electronic device to play the voice information. The method and the device improve the timeliness of information.
Description
The present application relates to the field of electronic devices, and in particular, to an information prompting method and related products.
With the rapid development of electronic device technology, various electronic devices have appeared, such as mobile phones, tablet computers, and smart bracelets. In the prior art, third-party applications can host applets, such as ticket-grabbing applets and game applets, which do not need to be installed separately.
Summary of the application
In a first aspect, an embodiment of the present application provides an information prompting method, including:
receiving first instant messaging information of a first application of an electronic device when an applet of the first application is running;
converting the first instant messaging information into voice information;
and controlling an audio component of the electronic equipment to play the voice information.
In a second aspect, an embodiment of the present application provides an information prompting apparatus, including:
the receiving unit is used for receiving first instant messaging information of a first application when an applet of the first application of the electronic device is running;
the processing unit is used for converting the first instant messaging information into voice information;
and the playing unit is used for controlling an audio component of the electronic equipment to play the voice information.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing some or all of the steps described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program causes a computer to perform some or all of the steps described in the first aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Wherein:
fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an information prompting method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an information prompt apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiment of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, which have wireless communication functions, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device includes a control circuit and an input-output circuit, and the input-output circuit is connected to the control circuit.
The control circuitry may include, among other things, storage and processing circuitry. The storage circuit in the storage and processing circuit may be a memory, such as a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable read only memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry may be used to control the operation of the electronic device. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry may be used to run software in the electronic device, such as an application for playing an incoming-call alert ringtone, an application for playing a short-message alert ringtone, an application for playing an alarm alert ringtone, an application for playing media files, a Voice over Internet Protocol (VoIP) phone call application, operating system functions, and so forth. The software may be used to perform control operations such as playing an incoming-call alert ringtone, playing a short-message alert ringtone, playing an alarm alert ringtone, playing a media file, making a voice phone call, and performing other functions in the electronic device, and the embodiments of the present application are not limited thereto.
The input-output circuit can be used to enable the electronic device to input and output data, that is, to allow the electronic device to receive data from an external device and to output data from the electronic device to an external device.
The input-output circuit may further include a sensor. The sensors may include ambient light sensors, optical and capacitive based infrared proximity sensors, ultrasonic sensors, touch sensors (e.g., optical based touch sensors and/or capacitive touch sensors, where the touch sensors may be part of a touch display screen or may be used independently as a touch sensor structure), acceleration sensors, gravity sensors, and other sensors, etc.
The input-output circuit may further include audio components that may be used to provide audio input and output functionality for the electronic device. The audio components may also include a tone generator and other components for generating and detecting sound.
The input-output circuitry may also include one or more display screens. The display screen can comprise one or a combination of a liquid crystal display screen, an organic light emitting diode display screen, an electronic ink display screen, a plasma display screen and a display screen using other display technologies. The display screen may include an array of touch sensors (i.e., the display screen may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The input-output circuitry may further include communications circuitry that may be used to provide the electronic device with the ability to communicate with external devices. The communication circuitry may include analog and digital input-output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in the communication circuitry may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in the communication circuitry may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit may include a near field communication antenna and a near field communication transceiver. The communications circuitry may also include cellular telephone transceiver and antennas, wireless local area network transceiver circuitry and antennas, and so forth.
The input-output circuit may further include an input-output unit. Input-output units may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
The electronic device may further include a battery (not shown) for supplying power to the electronic device.
The following describes embodiments of the present application in detail.
The embodiment of the application provides an information prompting method and a related product, which can improve the timeliness of information.
Referring to fig. 2, an embodiment of the present application provides a flowchart illustrating an information prompting method, where the method is applied to an electronic device. Specifically, as shown in fig. 2, an information prompting method includes:
s201: when an applet of a first application of an electronic device is operated, first instant messaging information of the first application is received.
In the embodiment of the present application, the first application is not limited, and may be any application capable of loading an applet and receiving an instant messaging message.
The information type of the first instant messaging information is not limited: it may be a text type, a voice type, a video type, an expression image type, or the like, and it may be edited and sent in the first application or forwarded from another application.
S202: converting the first instant messaging information into voice information.
In this embodiment of the application, the voice information is of an audio type, and the conversion method is not limited. In one possible example, the converting the first instant messaging information into the voice information includes: acquiring a sender identifier, an information type, and an editing type identifier corresponding to the first instant messaging information; determining a tone feature corresponding to the sender identifier; and converting the first instant messaging information according to the tone feature, the information type, and the editing type identifier to obtain the voice information.
The editing type identifier comprises a first application identifier corresponding to the first application and a second application identifier of a second application except the first application; the information types include text type, voice type, video type, expression image type, and the like.
The sender identifier is identification information of the sending object corresponding to the first instant messaging information. The identification information may be an account number, a nickname, a telephone number, an email address, or the like, which is not limited herein, and the sender identifier should be unique.
For example, suppose a first account and a second account are registered in the first application, where the first account number is 12345678987 and the second account number is 1235674562. If the user corresponding to the first account sends the first instant messaging information to the user corresponding to the second account, the sender identifier is 12345678987.
Timbre refers to the characteristic by which different sounds are distinguished: their frequency content always differs in waveform, because different sounding bodies vibrate with different characteristics. In the present application, the tone feature corresponding to the sender identifier is the sound characteristic of the sender corresponding to the sender identifier.
It can be understood that the sender identifier, the information type, and the editing type identifier corresponding to the first instant messaging information are acquired, the tone feature corresponding to the sender identifier is determined, and the first instant messaging information is then converted according to the tone feature, the information type, and the editing type identifier to obtain the voice information. Because the conversion is based on the tone feature corresponding to the sender identifier together with the information type and the editing type identifier of the first instant messaging information, the voice information can be made closer to the sender's timbre and the accuracy of the conversion can be improved.
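As a rough illustration of the flow just described, the following Python sketch routes a message to a conversion strategy based on its editing type identifier and information type. The message fields, the in-memory timbre store, and the placeholder synthesize function are illustrative assumptions and are not part of the original disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstantMessage:
    sender_id: str   # e.g. the account "12345678987" from the example above
    info_type: str   # "text", "voice", "video", or "expression_image"
    edit_type: str   # "first_app" (edited in the first application) or "second_app" (forwarded)
    payload: str     # text content, or a description/handle for media content

def synthesize(text: str, timbre: Optional[dict]) -> str:
    # Placeholder for a text-to-speech engine conditioned on the sender's timbre.
    return f"<audio of {text!r} with timbre {timbre}>"

def convert_to_voice(msg: InstantMessage, timbre_store: dict) -> str:
    """Route the message to a conversion strategy by edit type and information type."""
    timbre = timbre_store.get(msg.sender_id)                        # timbre feature of the sender
    if msg.edit_type == "second_app" or msg.info_type == "video":
        return synthesize("summary: " + msg.payload[:30], timbre)   # play only a summary
    if msg.info_type == "voice":
        return f"<denoised original voice: {msg.payload}>"          # play after noise reduction
    if msg.info_type == "expression_image":
        return synthesize("expression: " + msg.payload, timbre)     # speak the recognized expression
    return synthesize(msg.payload, timbre)                          # text type

print(convert_to_voice(InstantMessage("12345678987", "text", "first_app", "hello"), {}))
```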
The method for acquiring the edit type identifier is not limited in the present application, and in a possible example, the acquiring the edit type identifier corresponding to the first instant messaging information includes: determining editing habit data corresponding to the sender identification; if the first instant messaging information meets the editing habit data, determining that the editing type identifier is the first application; if the first instant messaging information does not meet the editing habit data, at least one piece of identification information corresponding to the first instant messaging information is searched; and determining the editing type identifier according to the at least one piece of identification information.
The editing habit data includes the habitual expressions of the sender, such as: commonly used prefix words (for example, an interjection such as "ah", a person's name, or "where"), commonly used suffix words (for example, "how", "right", "good", or "OK"), and habitual punctuation marks (for example, the period, the comma, and the exclamation mark).
When content is forwarded, it often carries identification information of the second application, which indicates that the content was forwarded from the second application. It can be understood that the editing habit data corresponding to the sender identifier is determined first; if the first instant messaging information meets the editing habit data, it indicates that the first instant messaging information was edited and sent by the sender corresponding to the sender identifier; otherwise, at least one piece of identification information in the first instant messaging information is searched for, and the second application identifier is determined according to the at least one piece of identification information. In this way, the editing type identifier is determined according to the sender's editing habit data and the identification information in the first instant messaging information, which improves the accuracy of determining the editing type identifier.
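A minimal sketch of this decision, assuming a simplified habit-matching rule and a hypothetical "[from:...]" marker carried by forwarded content; both are assumptions made for this example only.

```python
import re

def determine_edit_type(message_text: str, habit_data: dict) -> str:
    """Return "first_app" if the message matches the sender's editing habits,
    otherwise infer the editing type from identification info carried in the message."""
    matches_habit = (
        any(message_text.startswith(p) for p in habit_data.get("prefixes", []))
        or any(message_text.rstrip().endswith(s) for s in habit_data.get("suffixes", []))
        or any(p in message_text for p in habit_data.get("punctuation", []))
    )
    if matches_habit:
        return "first_app"
    # Forwarded content often carries an identifier of the application it came from.
    ids = re.findall(r"\[from:(\w+)\]", message_text)  # hypothetical marker format
    return ids[0] if ids else "second_app"

habits = {"prefixes": ["hey"], "suffixes": ["ok"], "punctuation": ["!"]}
print(determine_edit_type("hey, are you free?", habits))        # -> first_app
print(determine_edit_type("[from:news_app] daily digest", {}))  # -> news_app
```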
The method for determining editing habit data is not limited in the present application, and in a possible example, the determining editing habit data corresponding to the sender identifier includes: acquiring a plurality of pieces of second instant messaging information corresponding to the sender identification in a specified time period; performing sentence pattern analysis on each piece of second instant messaging information in the plurality of pieces of second instant messaging information to obtain the editing habit data;
the designated time interval is not limited, the time interval can be one week, one month or three months away from the current time, different popular network words can appear in different time intervals due to different vocabularies in different time intervals, so that a plurality of pieces of second instant messaging information corresponding to the sender identification in the designated time interval are obtained, and the accuracy of sentence pattern analysis on the second instant messaging information can be improved.
In the present application, the sentence analysis method is not limited, and the constructs of the subject, the predicate, and the object, the common vocabulary, the prefix, and the suffix in the second instant messaging information may be analyzed.
It can be understood that obtaining only the plurality of pieces of second instant messaging information corresponding to the sender identifier within the specified time period reduces the amount of data to analyze and helps improve the accuracy of determining the editing habit data.
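One way this could look in code, with the sentence pattern analysis reduced to counting habitual opening words and trailing punctuation over a recent time window; this is a deliberate simplification rather than the full analysis contemplated above.

```python
from collections import Counter
from datetime import datetime, timedelta

def editing_habits(messages, now=None, window_days=30):
    """messages: list of (timestamp, text) pairs previously sent by the sender."""
    now = now or datetime.now()
    recent = [text for ts, text in messages if now - ts <= timedelta(days=window_days)]
    first_words = Counter(t.split()[0] for t in recent if t.split())
    last_chars = Counter(t.strip()[-1] for t in recent if t.strip())
    return {
        "prefixes": [w for w, _ in first_words.most_common(3)],
        "punctuation": [c for c, _ in last_chars.most_common(3) if not c.isalnum()],
    }

msgs = [(datetime.now(), "hey, lunch?"), (datetime.now(), "hey, done!")]
print(editing_habits(msgs))  # e.g. {'prefixes': ['hey,'], 'punctuation': ['?', '!']}
```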
The method for determining the tone feature is not limited in the present application, and in one possible example, the determining the tone feature corresponding to the sender identifier includes: selecting instant messaging information whose information type is the voice type from the second instant messaging information to obtain a plurality of pieces of third instant messaging information; and performing tone analysis on each piece of third instant messaging information to obtain the tone feature.
The third instant messaging information is the instant messaging information of the voice type among the second instant messaging information. For example, if the specified time period includes 100 pieces of second instant messaging information, of which 40 are of the voice type, 40 are of the text type, 10 are of the video type, and 10 are of the expression image type, then the 40 pieces of voice-type instant messaging information are determined to be the plurality of pieces of third instant messaging information.
In this application, the method of timbre analysis is not limited: the initials, finals, and tones in the third instant messaging information may be analyzed, or multiple pieces of feature information such as monosyllables, polysyllables, tense vowels, lax vowels, monophthongs, and vocal-cord characteristics may be analyzed, and the tone feature may then be determined according to each piece of feature information.
It can be understood that the instant messaging information whose information type is the voice type is selected from the second instant messaging information to obtain the third instant messaging information, and tone analysis is performed on each piece of third instant messaging information to obtain the tone feature. In this way, tone analysis is performed only on voice-type instant messaging information, which improves the accuracy of determining the tone feature.
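A toy sketch of this selection-then-analysis step. The two statistics computed here (mean amplitude and zero-crossing rate) merely stand in for the richer timbre analysis described above and are assumptions made for illustration.

```python
def timbre_feature(second_messages):
    """second_messages: list of dicts such as {"type": "voice", "samples": [...]}."""
    voice_msgs = [m for m in second_messages if m["type"] == "voice"]  # the third messages
    feats = []
    for m in voice_msgs:
        s = m["samples"]
        if not s:
            continue
        mean_amp = sum(abs(x) for x in s) / len(s)                               # average loudness
        zcr = sum(1 for a, b in zip(s, s[1:]) if a * b < 0) / max(len(s) - 1, 1)  # sign changes per sample
        feats.append((mean_amp, zcr))
    if not feats:
        return None
    n = len(feats)
    return {"mean_amplitude": sum(f[0] for f in feats) / n,
            "zero_crossing_rate": sum(f[1] for f in feats) / n}

print(timbre_feature([{"type": "voice", "samples": [0.1, -0.2, 0.3]},
                      {"type": "text", "samples": []}]))
```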
The manner of converting the first instant messaging information according to the tone feature, the information type, and the editing type identifier to obtain the voice information is not limited in the present application. For example, when the editing type identifier is the first application identifier and the information type is the voice type, the first instant messaging information may be directly subjected to noise reduction to obtain the voice information; when the editing type identifier is the first application identifier and the information type is the expression image type, the expression content corresponding to the first instant messaging information may be recognized, and the voice information may be generated according to the expression content and the tone feature; and when the editing type identifier is the second application identifier, summary information of the first instant messaging information may be extracted, and the voice information may be generated according to the summary information and the tone feature.
In a possible example, if the edit type identifier is the first application identifier and the information type is a text type, the converting the first instant messaging information according to the tone feature, the information type, and the edit type identifier to obtain the voice information includes: determining a first language corresponding to the first instant messaging information and a second language corresponding to the electronic equipment; converting the first instant messaging information to obtain target characters according to semantic rules between the first language and the second language; determining a target emotion corresponding to the target characters; and generating the voice information according to the target characters, the target emotion and the tone characteristics.
The first language corresponding to the first instant messaging information is the language or dialect of the text, and the second language is the language commonly used by the target user corresponding to the electronic device. The first language may be determined according to the voice-type instant messaging information recorded in the first application, according to the language set in the electronic device, or according to the historical instant messaging information between the sender and the receiver corresponding to the first instant messaging information, which is not limited herein.
The semantic rules between the first language and the second language are not limited in the present application, for example: there are translation rules between english and chinese, translation rules between Chongqing dialect and Mandarin, etc.
Emotion is a general term for a series of subjective cognitive experiences; it is the psychological and physiological state produced by a combination of feelings, thoughts, and behaviors. The most common emotions include happiness, anger, grief, surprise, fear, and love, and there are also subtler emotions such as jealousy, shame, and pride. In the present application, the target emotion is the emotion corresponding to the first instant messaging information. The method for determining the target emotion is not limited; the words and punctuation of the first instant messaging information may be analyzed to determine the target emotion.
It can be understood that this example applies to an application scenario in which the editing type identifier is the first application identifier and the information type is the text type: the first language corresponding to the first instant messaging information and the second language corresponding to the electronic device are determined, the first instant messaging information is converted according to the semantic rules between the first language and the second language to obtain the target characters, the target emotion corresponding to the target characters is determined, and the voice information is generated according to the target characters, the target emotion, and the tone feature. In this way, the first instant messaging information is converted into the second language commonly used by the target user of the electronic device, and the generated voice carries the target emotion of the first instant messaging information and is close to the tone feature corresponding to the sender, which helps improve user experience.
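The text-type branch might be organized as in the sketch below. The language pair, the tiny dictionary standing in for the semantic rules between the two languages, and the punctuation-based emotion guess are all assumptions made only for this example.

```python
def translate(text, src, dst):
    # Placeholder keyed on a tiny dictionary; a real system would apply the full
    # semantic rules (or a translation model) between the first and second language.
    toy = {("en", "zh"): {"hello": "你好"}}
    return toy.get((src, dst), {}).get(text.lower(), text)

def synthesize(text, emotion, timbre):
    return f"<audio {text!r} | emotion={emotion} | timbre={timbre}>"

def convert_text_message(text, sender_language="en", device_language="zh", timbre=None):
    target_chars = translate(text, src=sender_language, dst=device_language)
    # Crude emotion guess from punctuation; word-level analysis could be added.
    emotion = "excited" if "!" in text else ("questioning" if "?" in text else "neutral")
    return synthesize(target_chars, emotion=emotion, timbre=timbre)

print(convert_text_message("hello"))
```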
In a possible example, if the edit type identifier is the second application identifier, or the information type is a video type, the converting the first instant messaging information according to the tone feature, the information type, and the edit type identifier to obtain the voice information includes: extracting abstract information corresponding to the first instant messaging information; and generating the voice information according to the tone characteristics and the abstract information.
The summary information is key information of the first instant messaging information, and may be the title, or may be extracted according to specific content, which is not limited herein.
It can be understood that this example applies to an application scenario in which the editing type identifier is the second application identifier or the information type is the video type. First instant messaging information corresponding to the second application identifier is forwarded information and therefore does not have a high priority, so its summary information is extracted and the voice information is generated according to the summary information and the tone feature. Likewise, if the first instant messaging information is of the video type, the summary information is extracted and the voice information is generated according to the summary information and the tone feature. Since only the summary information is played, the target user of the electronic device can decide, based on the summary, whether to return to the chat interface, the running of the applet is not interrupted, and user experience is improved.
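A small sketch of the summary branch, assuming the summary is simply the leading sentence of the content truncated to a fixed length; as noted above, a real implementation could instead use the title or extract key content.

```python
def extract_summary(content: str, max_len: int = 40) -> str:
    first_sentence = content.split(".")[0].strip()
    return (first_sentence or content)[:max_len]

def summarized_voice(content: str, timbre=None) -> str:
    # Only the summary is synthesized, so playback stays short and the applet keeps running.
    return f"<audio {extract_summary(content)!r} | timbre={timbre}>"

print(summarized_voice("Breaking: applet update released. Full details inside."))
```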
In one possible example, before the converting the first instant messaging information into voice information, the method further comprises: determining a first priority corresponding to the first instant messaging information; determining a second priority corresponding to the sender identification; determining a target priority according to the first priority and the second priority; and if the target priority is greater than the preset priority, executing the step of converting the first instant messaging information into voice information.
The first priority is the priority of the first instant messaging information, that is, the priority of the information itself; the second priority is the priority of the sender identifier; and the preset priority is the threshold that triggers execution of the step of converting the first instant messaging information into voice information.
The method for determining the target priority is not limited: it may be the average, maximum, or minimum of the first priority and the second priority, or a weighted average of the two, that is, preset weights are assigned to the first instant messaging information and the sender identifier, and the first priority and the second priority are weighted by these preset weights to obtain the target priority.
It can be understood that a first priority corresponding to the first instant messaging information and a second priority corresponding to the sender identifier are determined, a target priority is then determined according to the first priority and the second priority, and the step of converting the first instant messaging information into voice information is executed only when the target priority is greater than the preset priority. Otherwise, the step is not executed, so that the target user of the electronic device is not disturbed while running the applet, which improves user experience.
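For example, with a weighted average as the combination rule; the weights and the preset priority threshold below are illustrative values, not values taken from the disclosure.

```python
def target_priority(first_priority: float, second_priority: float,
                    w_info: float = 0.6, w_sender: float = 0.4) -> float:
    # Weighted average of the information priority and the sender priority.
    return w_info * first_priority + w_sender * second_priority

def should_convert(first_priority: float, second_priority: float,
                   preset_priority: float = 3.0) -> bool:
    # Convert to voice only when the target priority exceeds the preset threshold.
    return target_priority(first_priority, second_priority) > preset_priority

print(should_convert(first_priority=4, second_priority=5))  # True
print(should_convert(first_priority=2, second_priority=1))  # False
```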
The method for determining the first priority is not limited in the present application, and in one possible example, the determining the first priority corresponding to the first instant messaging information includes: determining a first sub-priority corresponding to the editing type identifier; determining a second sub-priority corresponding to the information type; determining a third sub-priority corresponding to the summary information; determining the first priority according to the first sub-priority, the second sub-priority, and the third sub-priority.
In this application, priorities corresponding to different editing type identifiers are preset, where the priority of the first application identifier is higher than that of the second application identifier, for example: the priority of the editing type identifier as the first application identifier is 5, and the priority of the editing type identifier as the second application identifier is 3.
Priorities corresponding to different information types are also preset, for example: the expression image type is 1, the video type is 2, the voice type and the text type are 3, and a video request and a voice request are 4. In this manner, the first sub-priority corresponding to the editing type identifier and the second sub-priority corresponding to the information type can be determined.
The method for determining the first priority according to the first sub-priority, the second sub-priority, and the third sub-priority is not limited: it may be the average, maximum, or minimum of the three sub-priorities, or a weighted average of them, that is, preset weights are assigned to the editing type identifier, the information type, and the summary information, and the three sub-priorities are weighted by these preset weights to obtain the first priority.
It is understood that, since the summary information reflects the main content of the first instant messaging information, the third sub-priority is determined according to the summary information. The first priority is then determined from this third sub-priority together with the first sub-priority determined according to the editing type identifier and the second sub-priority determined according to the information type; that is, determining the first priority from the main content, the information type, and the editing type identifier of the first instant messaging information improves the accuracy of determining the first priority.
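A sketch of this combination, reusing the example priority values mentioned above; the weights and the keyword-based rule for the summary sub-priority are assumptions.

```python
EDIT_TYPE_PRIORITY = {"first_app": 5, "second_app": 3}
INFO_TYPE_PRIORITY = {"expression_image": 1, "video": 2, "voice": 3, "text": 3,
                      "video_request": 4, "voice_request": 4}

def summary_priority(summary: str) -> int:
    # Assumed rule: summaries containing urgent keywords rank higher.
    return 5 if any(k in summary.lower() for k in ("urgent", "now", "asap")) else 2

def first_priority(edit_type: str, info_type: str, summary: str,
                   weights=(0.3, 0.3, 0.4)) -> float:
    subs = (EDIT_TYPE_PRIORITY.get(edit_type, 3),   # first sub-priority
            INFO_TYPE_PRIORITY.get(info_type, 3),   # second sub-priority
            summary_priority(summary))              # third sub-priority
    return sum(w * s for w, s in zip(weights, subs))

print(first_priority("first_app", "text", "please call me now"))  # 4.4
```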
The application does not limit how to determine the second priority, and in one possible example, the determining the second priority corresponding to the sender identifier includes: acquiring a plurality of historical contact records between the sender identification and a target user identification corresponding to the electronic equipment; determining a contact frequency between the sender identification and the target user identification according to the plurality of historical contact records; determining an incidence relation between a sender corresponding to the sender identification and a target user corresponding to the target user identification; and determining the second priority according to the incidence relation and the contact frequency.
The method for determining the contact frequency is not limited; for example, the contact frequency may be determined according to the time intervals and the durations of continuous communication corresponding to the historical contact records.
The association relationship is used to describe the relationship between the sender corresponding to the sender identifier and the target user corresponding to the target user identifier, for example: friends, relatives, colleagues, and the like. In the present application, values corresponding to different association relationships may be predefined, for example: the value corresponding to a stranger is less than 1, the value corresponding to a colleague is greater than 1 and less than or equal to 3, the value corresponding to a friend is greater than 3 and less than or equal to 5, and the value corresponding to a relative is greater than 5 and less than or equal to 10; further, a spouse is 10, a parent is 8, a child is 9, a sibling is 7, and an uncle is 6.
It can be understood that a plurality of historical contact records for communication between the sender identifier and the target user identifier corresponding to the electronic device are determined, then the plurality of historical contact records are analyzed to determine the contact frequency between the sender identifier and the target user identifier, the association relationship between the sender corresponding to the sender identifier and the target user corresponding to the target user identifier is determined, and then the second priority is determined according to the association relationship and the contact frequency.
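An illustrative computation of the sender priority, with relationship values loosely following the ranges above and an assumed linear combination of contact frequency and relationship value.

```python
RELATION_VALUE = {"stranger": 0.5, "colleague": 3, "friend": 5,
                  "sibling": 7, "parent": 8, "child": 9, "spouse": 10}

def contact_frequency(contact_timestamps, window_days=30):
    # Contacts per day over the window, derived from the historical contact records.
    return len(contact_timestamps) / window_days

def second_priority(contact_timestamps, relation: str,
                    w_freq: float = 2.0, w_rel: float = 0.5) -> float:
    return (w_freq * contact_frequency(contact_timestamps)
            + w_rel * RELATION_VALUE.get(relation, 1))

# 15 contacts in the last 30 days from a friend -> 2.0 * 0.5 + 0.5 * 5 = 3.5
print(second_priority(contact_timestamps=list(range(15)), relation="friend"))
```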
S203: controlling an audio component of the electronic device to play the voice information.
As described above, the audio component may be used to provide audio input and output functions for the electronic device, and may also include a tone generator and other components for generating and detecting sound. In an electronic device, the audio component includes a microphone, a loudspeaker, and an earpiece, where the microphone is used to collect sound, the loudspeaker is used to play sound, and the earpiece plays the far-end voice during a call; an earphone can also be connected externally.
In the embodiment of the application, the audio component used to play the voice information is not limited. When an earphone is connected, the audio component is the wired earphone connected to the electronic device or an earphone wirelessly connected to the electronic device; when no earphone is connected, the audio component may be determined according to the distance between the target user and the electronic device, for example, when the distance is smaller than a preset distance, the earpiece is used for playing, and when the distance is greater than or equal to the preset distance, the loudspeaker is used for playing.
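A compact sketch of this routing decision. Treating the near-distance case as playback through the earpiece is one reading of the passage above, and the distance threshold is an assumed value.

```python
def select_audio_output(earphone_connected: bool, distance_m: float,
                        near_threshold_m: float = 0.3) -> str:
    if earphone_connected:
        return "earphone"   # wired or wirelessly connected earphone
    if distance_m < near_threshold_m:
        return "earpiece"   # user is close to the device
    return "speaker"        # user is far from the device

print(select_audio_output(earphone_connected=False, distance_m=1.2))  # -> speaker
```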
In the information prompting method shown in fig. 2, when an applet of a first application of an electronic device is running, if first instant messaging information of the first application is received, the first instant messaging information is converted into voice information, and an audio component of the electronic device is controlled to play the voice information. In this way, the first instant messaging information is played as audio in real time, which improves the timeliness of the information.
In one possible example, after the converting the first instant messaging information into voice information, the method further comprises: and closing the voice pickup function of the electronic equipment.
It can be understood that, when the applet of the first application is running and the voice information is played, the voice pickup function of the electronic device is turned off, that is, the played voice information is not collected, which prevents it from being uploaded by the applet, avoids information leakage, and improves privacy and security.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an information prompting device according to an embodiment of the present application, and as shown in fig. 3, the information prompting device 300 includes a receiving unit 301, a processing unit 302, and a playing unit 303, where:
the receiving unit 301 is configured to receive first instant messaging information of a first application of an electronic device when an applet of the first application is running;
the processing unit 302 is configured to convert the first instant messaging information into voice information;
the playing unit 303 is configured to control an audio component of the electronic device to play the voice information.
It can be understood that when an applet of a first application of an electronic device is running, if first instant messaging information of the first application is received, the first instant messaging information is converted into voice information, and an audio component of the electronic device is controlled to play the voice information. In this way, the first instant messaging information is played as audio in real time, which improves the timeliness of the information.
In a possible example, in terms of converting the first instant messaging information into the voice information, the processing unit 302 is specifically configured to obtain a sender identifier, an information type, and an editing type identifier corresponding to the first instant messaging information, where the editing type identifier includes a first application identifier corresponding to the first application and a second application identifier of a second application other than the first application; determining tone characteristics corresponding to the sender identification; and converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information.
In a possible example, if the edit type identifier is the first application identifier and the information type is a text type, the processing unit 302 is specifically configured to determine a first language corresponding to the first instant messaging information and a second language corresponding to the electronic device, in terms of obtaining the voice information by converting the first instant messaging information according to the tone feature, the information type, and the edit type identifier; converting the first instant messaging information according to semantic rules between the first language and the second language to obtain target characters; determining a target emotion corresponding to the target characters; and generating the voice information according to the target characters, the target emotion and the tone characteristics.
In a possible example, if the edit type identifier is the second application identifier or the information type is a video type, the processing unit 302 is specifically configured to extract summary information corresponding to the first instant messaging information, in terms of obtaining the voice information by converting the first instant messaging information according to the tone feature, the information type, and the edit type identifier; and generating the voice information according to the tone characteristics and the abstract information.
In a possible example, in the aspect of obtaining the editing type identifier corresponding to the first instant messaging information, the processing unit 302 is specifically configured to determine editing habit data corresponding to the sender identifier; if the first instant messaging information meets the editing habit data, determining that the editing type identifier is the first application; if the first instant messaging information does not meet the editing habit data, at least one piece of identification information corresponding to the first instant messaging information is searched; and determining the editing type identifier according to the at least one piece of identification information.
In a possible example, in the aspect of determining the editing habit data corresponding to the sender identifier, the processing unit 302 is specifically configured to obtain a plurality of pieces of second instant messaging information corresponding to the sender identifier in a specified time period; performing sentence pattern analysis on each second instant messaging information in the plurality of pieces of second instant messaging information to obtain the editing habit data;
in the aspect of determining the tone characteristic corresponding to the sender identifier, the processing unit 302 is specifically configured to select instant messaging information with an information type of voice from the plurality of pieces of second instant messaging information to obtain a plurality of pieces of third instant messaging information, and to perform tone analysis on each piece of the plurality of pieces of third instant messaging information to obtain the tone characteristics.
In a possible example, before the converting the first instant messaging information into the voice information, the processing unit 302 is further configured to determine a first priority corresponding to the first instant messaging information; determining a second priority corresponding to the sender identification; determining a target priority according to the first priority and the second priority; and if the target priority is greater than the preset priority, executing the step of converting the first instant messaging information into voice information.
In a possible example, in terms of the determining the second priority corresponding to the sender identifier, the processing unit 302 is specifically configured to obtain a plurality of historical contact records between the sender identifier and a target user identifier corresponding to the electronic device; determining a contact frequency between the sender identification and the target user identification according to the plurality of historical contact records; determining an incidence relation between a sender corresponding to the sender identification and a target user corresponding to the target user identification; and determining the second priority according to the incidence relation and the contact frequency.
In one possible example, after the converting the first instant messaging information into the voice information, the processing unit 302 is further configured to turn off a voice pickup function of the electronic device.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another electronic device according to an embodiment of the present disclosure. As shown in fig. 4, the electronic device 400 includes a processor 410, a memory 420, a communication interface 430, and one or more programs 440, wherein the one or more programs 440 are stored in the memory 420 and configured to be executed by the processor 410, and wherein the programs 440 include instructions for:
receiving first instant messaging information of a first application of an electronic device when an applet of the first application is running;
converting the first instant messaging information into voice information;
and controlling an audio component of the electronic equipment to play the voice information.
It can be understood that when an applet of a first application of an electronic device is running, if first instant messaging information of the first application is received, the first instant messaging information is converted into voice information, and an audio component of the electronic device is controlled to play the voice information. In this way, the first instant messaging information is played as audio in real time, which improves the timeliness of the information.
In one possible example, in terms of the converting the first instant messaging information into the voice information, the instructions in the program 440 are specifically configured to:
acquiring a sender identifier, an information type and an editing type identifier corresponding to the first instant messaging information, wherein the editing type identifier comprises a first application identifier corresponding to the first application and a second application identifier of a second application except the first application;
determining tone characteristics corresponding to the sender identification;
and converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information.
In a possible example, if the edit type identifier is the first application identifier and the information type is a text type, the instruction in the program 440 is specifically configured to perform the following operations in the aspect of converting the first instant messaging information according to the tone feature, the information type, and the edit type identifier to obtain the voice information:
determining a first language corresponding to the first instant messaging information and a second language corresponding to the electronic equipment;
converting the first instant messaging information according to semantic rules between the first language and the second language to obtain target characters;
determining a target emotion corresponding to the target characters;
and generating the voice information according to the target characters, the target emotion and the tone characteristics.
In a possible example, if the edit type identifier is the second application identifier, or the information type is a video type, in terms of converting the first instant messaging information according to the tone feature, the information type, and the edit type identifier to obtain the voice information, the instruction in the program 440 is specifically configured to perform the following operations:
extracting abstract information corresponding to the first instant messaging information;
and generating the voice information according to the tone characteristics and the abstract information.
In a possible example, in terms of obtaining the edit type identifier corresponding to the first instant messaging information, the instruction in the program 440 is specifically configured to perform the following operations:
determining editing habit data corresponding to the sender identification;
if the first instant messaging information meets the editing habit data, determining that the editing type identifier is the first application;
if the first instant messaging information does not meet the editing habit data, at least one piece of identification information corresponding to the first instant messaging information is searched; and determining the editing type identifier according to the at least one piece of identification information.
In one possible example, in the aspect of determining editing habit data corresponding to the sender identifier, the instructions in the program 440 are specifically configured to perform the following operations:
acquiring a plurality of pieces of second instant messaging information corresponding to the sender identification in a specified time period;
performing sentence pattern analysis on each second instant messaging information in the plurality of pieces of second instant messaging information to obtain the editing habit data;
in the aspect of determining the tone color characteristic corresponding to the sender identifier, the instructions in the program 440 are specifically configured to perform the following operations:
selecting instant messaging information with the information type being a voice type from the second instant messaging information to obtain third instant messaging information;
and performing tone analysis on each piece of the plurality of pieces of third instant messaging information to obtain the tone characteristics.
In one possible example, before the converting the first instant messaging information into voice information, the instructions in the program 440 are further configured to:
determining a first priority corresponding to the first instant messaging information;
determining a second priority corresponding to the sender identification;
determining a target priority according to the first priority and the second priority;
and if the target priority is greater than the preset priority, executing the step of converting the first instant messaging information into voice information.
In one possible example, in the determining the second priority corresponding to the sender identification, the instructions in the program 440 are specifically configured to:
acquiring a plurality of historical contact records between the sender identification and a target user identification corresponding to the electronic equipment;
determining a contact frequency between the sender identification and the target user identification according to the plurality of historical contact records;
determining an incidence relation between a sender corresponding to the sender identification and a target user corresponding to the target user identification;
and determining the second priority according to the incidence relation and the contact frequency.
In one possible example, after the converting the first instant messaging information into voice information, the instructions in the program 440 are further configured to:
and closing the voice pickup function of the electronic equipment.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for causing a computer to execute some or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as set out in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and the actual implementation may have another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, and a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, read only memory, random access memory ("ram"), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (20)
- An information prompting method, comprising: receiving first instant messaging information of a first application of electronic equipment when an applet of the first application is operated; converting the first instant messaging information into voice information; and controlling an audio component of the electronic equipment to play the voice information.
- The method of claim 1, wherein converting the first instant messaging information into voice information comprises: acquiring a sender identifier, an information type and an editing type identifier corresponding to the first instant messaging information, wherein the editing type identifier comprises a first application identifier corresponding to the first application and a second application identifier of a second application other than the first application; determining a tone characteristic corresponding to the sender identifier; and converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information.
- The method of claim 2, wherein if the editing type identifier is the first application identifier and the information type is a text type, the converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information comprises: determining a first language corresponding to the first instant messaging information and a second language corresponding to the electronic equipment; converting the first instant messaging information according to semantic rules between the first language and the second language to obtain target text; determining a target emotion corresponding to the target text; and generating the voice information according to the target text, the target emotion and the tone characteristic.
- The method according to claim 2, wherein if the editing type identifier is the second application identifier or the information type is a video type, the converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information comprises: extracting summary information corresponding to the first instant messaging information; and generating the voice information according to the tone characteristic and the summary information.
- The method according to any one of claims 2 to 4, wherein the acquiring of the editing type identifier corresponding to the first instant messaging information comprises: determining editing habit data corresponding to the sender identifier; if the first instant messaging information conforms to the editing habit data, determining that the editing type identifier is the first application identifier; if the first instant messaging information does not conform to the editing habit data, searching for at least one piece of identification information corresponding to the first instant messaging information; and determining the editing type identifier according to the at least one piece of identification information.
- The method of claim 5, wherein the determining of editing habit data corresponding to the sender identifier comprises: acquiring a plurality of pieces of second instant messaging information corresponding to the sender identifier in a specified time period; and performing sentence pattern analysis on each piece of second instant messaging information in the plurality of pieces of second instant messaging information to obtain the editing habit data; and wherein the determining of the tone characteristic corresponding to the sender identifier comprises: selecting instant messaging information whose information type is a voice type from the plurality of pieces of second instant messaging information to obtain a plurality of pieces of third instant messaging information; and performing tone analysis on each piece of third instant messaging information in the plurality of pieces of third instant messaging information to obtain the tone characteristic.
- The method according to any one of claims 1-6, wherein before the converting the first instant messaging information into voice information, the method further comprises: determining a first priority corresponding to the first instant messaging information; determining a second priority corresponding to the sender identifier; determining a target priority according to the first priority and the second priority; and if the target priority is greater than a preset priority, executing the step of converting the first instant messaging information into voice information.
- The method of claim 7, wherein determining the second priority corresponding to the sender identifier comprises: acquiring a plurality of historical contact records between the sender identifier and a target user identifier corresponding to the electronic equipment; determining a contact frequency between the sender identifier and the target user identifier according to the plurality of historical contact records; determining an association relationship between a sender corresponding to the sender identifier and a target user corresponding to the target user identifier; and determining the second priority according to the association relationship and the contact frequency.
- The method according to any one of claims 1-8, wherein after the converting the first instant messaging information into voice information, the method further comprises: turning off a voice pickup function of the electronic equipment.
- An information prompting apparatus, comprising: a receiving unit, configured to receive first instant messaging information of a first application of electronic equipment when an applet of the first application is operated; a processing unit, configured to convert the first instant messaging information into voice information; and a playing unit, configured to control an audio component of the electronic equipment to play the voice information.
- The apparatus according to claim 10, wherein, in the aspect of converting the first instant messaging information into voice information, the processing unit is specifically configured to: acquire a sender identifier, an information type and an editing type identifier corresponding to the first instant messaging information, wherein the editing type identifier comprises a first application identifier corresponding to the first application and a second application identifier of a second application other than the first application; determine a tone characteristic corresponding to the sender identifier; and convert the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information.
- The apparatus according to claim 11, wherein if the editing type identifier is the first application identifier and the information type is a text type, in the aspect of converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information, the processing unit is specifically configured to: determine a first language corresponding to the first instant messaging information and a second language corresponding to the electronic equipment; convert the first instant messaging information according to semantic rules between the first language and the second language to obtain target text; determine a target emotion corresponding to the target text; and generate the voice information according to the target text, the target emotion and the tone characteristic.
- The apparatus according to claim 11, wherein if the editing type identifier is the second application identifier or the information type is a video type, in the aspect of converting the first instant messaging information according to the tone characteristic, the information type and the editing type identifier to obtain the voice information, the processing unit is specifically configured to: extract summary information corresponding to the first instant messaging information; and generate the voice information according to the tone characteristic and the summary information.
- The apparatus according to any one of claims 11 to 13, wherein, in the aspect of acquiring the editing type identifier corresponding to the first instant messaging information, the processing unit is specifically configured to: determine editing habit data corresponding to the sender identifier; if the first instant messaging information conforms to the editing habit data, determine that the editing type identifier is the first application identifier; if the first instant messaging information does not conform to the editing habit data, search for at least one piece of identification information corresponding to the first instant messaging information; and determine the editing type identifier according to the at least one piece of identification information.
- The apparatus according to claim 14, wherein, in the aspect of determining editing habit data corresponding to the sender identifier, the processing unit is specifically configured to: acquire a plurality of pieces of second instant messaging information corresponding to the sender identifier in a specified time period; and perform sentence pattern analysis on each piece of second instant messaging information in the plurality of pieces of second instant messaging information to obtain the editing habit data; and wherein, in the aspect of determining the tone characteristic corresponding to the sender identifier, the processing unit is specifically configured to: select instant messaging information whose information type is a voice type from the plurality of pieces of second instant messaging information to obtain a plurality of pieces of third instant messaging information; and perform tone analysis on each piece of third instant messaging information in the plurality of pieces of third instant messaging information to obtain the tone characteristic.
- The apparatus according to any one of claims 10-15, wherein before the converting the first instant messaging information into voice information, the processing unit is further configured to: determine a first priority corresponding to the first instant messaging information; determine a second priority corresponding to the sender identifier; determine a target priority according to the first priority and the second priority; and if the target priority is greater than a preset priority, execute the step of converting the first instant messaging information into voice information.
- The apparatus according to claim 16, wherein, in the aspect of determining the second priority corresponding to the sender identifier, the processing unit is specifically configured to: acquire a plurality of historical contact records between the sender identifier and a target user identifier corresponding to the electronic equipment; determine a contact frequency between the sender identifier and the target user identifier according to the plurality of historical contact records; determine an association relationship between a sender corresponding to the sender identifier and a target user corresponding to the target user identifier; and determine the second priority according to the association relationship and the contact frequency.
- An electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method according to any one of claims 1-9.
- A computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 1-9.
- A computer program product, characterized in that the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform the method according to any one of claims 1-9.
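For readers approaching the claims from an implementation angle, the flow of claim 1 amounts to a small event handler: while an applet of the first application is operated, an incoming instant message is converted into voice information and played through the device's audio component. The Python sketch below is a minimal, non-normative illustration of that flow; `InstantMessage`, `synthesize`, and `play_audio` are placeholder names assumed for the example, not interfaces disclosed in the application.

```python
from dataclasses import dataclass

@dataclass
class InstantMessage:
    sender_id: str   # identifier of the sender
    app_id: str      # application that produced the message
    info_type: str   # "text", "voice", "video", ...
    content: str

def synthesize(text: str) -> bytes:
    """Placeholder text-to-speech step; a real device would call its TTS engine."""
    return text.encode("utf-8")          # stand-in for synthesized audio samples

def play_audio(audio: bytes) -> None:
    """Placeholder for controlling the audio component of the electronic equipment."""
    print(f"[audio component] playing {len(audio)} bytes")

def on_first_app_message(msg: InstantMessage, applet_running: bool) -> None:
    # Claim 1 scenario: only act while an applet of the first application is operated.
    if not applet_running:
        return
    voice = synthesize(msg.content)      # convert the first instant messaging information
    play_audio(voice)                    # control the audio component to play the voice

if __name__ == "__main__":
    on_first_app_message(
        InstantMessage("alice", "first_app", "text", "The meeting moved to 3 pm"),
        applet_running=True,
    )
```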
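Claims 2 through 4 select between two conversion paths using the editing type identifier and the information type: text edited within the first application is converted across languages, tagged with a target emotion, and rendered with the sender's tone characteristic, whereas content from another application, or video content, is first reduced to summary information. The sketch below illustrates that dispatch under stated assumptions; `detect_language`, `translate`, `detect_emotion`, `summarize`, and `render_speech` are crude stand-ins for components the claims leave unspecified.

```python
def detect_language(text: str) -> str:
    # Crude stand-in: treat any non-ASCII text as "zh", everything else as "en".
    return "zh" if any(ord(c) > 127 for c in text) else "en"

def translate(text: str, source: str, target: str) -> str:
    # Placeholder for conversion by semantic rules between the first and second language.
    return text if source == target else f"[{source}->{target}] {text}"

def detect_emotion(text: str) -> str:
    # Toy emotion tag used to colour the synthesized speech.
    return "excited" if text.rstrip().endswith("!") else "neutral"

def summarize(content: str, max_words: int = 8) -> str:
    # Placeholder summary extraction for foreign-application or video content.
    return " ".join(content.split()[:max_words])

def render_speech(text: str, tone: dict, emotion: str = "neutral") -> bytes:
    # Placeholder synthesis that applies the sender's tone characteristic.
    return f"<tone={tone}, emotion={emotion}> {text}".encode("utf-8")

def convert_to_voice(content: str, app_id: str, info_type: str,
                     tone: dict, first_app_id: str, device_language: str = "en") -> bytes:
    if app_id == first_app_id and info_type == "text":
        # Claim 3 path: cross-language conversion plus emotion-aware synthesis.
        source_language = detect_language(content)
        target_text = translate(content, source_language, device_language)
        return render_speech(target_text, tone, detect_emotion(target_text))
    # Claim 4 path: another application's content, or video, is summarised first.
    return render_speech(summarize(content), tone)

print(convert_to_voice("好久不见!", "first_app", "text",
                       tone={"pitch": "high"}, first_app_id="first_app"))
```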
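Claims 5 and 6 build two profiles from the sender's recent (second) instant messaging information: editing habit data obtained by sentence pattern analysis, used to judge whether a new message looks like it was edited in the first application, and a tone characteristic obtained from the voice-type (third) messages among them. The features in the sketch below (average length, question ratio, mean pitch) are deliberately simple assumptions; the claims do not fix a concrete representation.

```python
from statistics import mean

def editing_habit(recent_texts: list[str]) -> dict:
    """Toy sentence-pattern profile built from the sender's recent text messages."""
    return {
        "avg_len": mean(len(t) for t in recent_texts),
        "question_ratio": sum(t.rstrip().endswith("?") for t in recent_texts) / len(recent_texts),
    }

def matches_habit(text: str, habit: dict, tolerance: float = 0.5) -> bool:
    """If the new message fits the habit, treat it as edited in the first application."""
    return abs(len(text) - habit["avg_len"]) <= tolerance * habit["avg_len"]

def tone_characteristic(voice_pitches: list[float]) -> dict:
    """Stand-in tone analysis over the sender's voice-type (third) messages."""
    return {"mean_pitch_hz": mean(voice_pitches)} if voice_pitches else {}

history = ["Are you free tonight?", "Dinner at eight?", "On my way."]
habit = editing_habit(history)
print(matches_habit("Shall we meet at nine?", habit))   # True for this toy profile
print(tone_characteristic([182.0, 190.5, 176.3]))
```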
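Claims 7 and 8 gate the conversion behind a priority check: a first priority derived from the message and a second priority derived from the sender (contact frequency plus the association between sender and target user) are combined into a target priority, and the message is only converted to voice when that priority exceeds a preset priority. The relation table, the weights, and the max() combination rule below are assumptions made for the sketch, not values taken from the application.

```python
RELATION_WEIGHT = {"family": 3.0, "close_friend": 2.5, "colleague": 2.0, "stranger": 1.0}

def second_priority(contact_count: int, relation: str) -> float:
    """Sender priority from contact frequency and the sender/target-user association."""
    frequency_score = min(contact_count, 20) / 20          # saturate at 20 recent contacts
    return RELATION_WEIGHT.get(relation, 1.0) + frequency_score

def should_convert(first_priority: float, sender_priority: float,
                   preset_priority: float = 2.5) -> bool:
    """Gate of claims 7 and 16: convert to voice only above the preset priority."""
    target_priority = max(first_priority, sender_priority)  # one possible combination rule
    return target_priority > preset_priority

# A frequent family contact passes the gate even when the message itself is low priority.
print(should_convert(first_priority=1.0,
                     sender_priority=second_priority(contact_count=12, relation="family")))
```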
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/105737 WO2020051881A1 (en) | 2018-09-14 | 2018-09-14 | Information prompt method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112425144A true CN112425144A (en) | 2021-02-26 |
CN112425144B CN112425144B (en) | 2021-11-30 |
Family
ID=69776604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880095663.2A Active CN112425144B (en) | 2018-09-14 | 2018-09-14 | Information prompting method and related product |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112425144B (en) |
WO (1) | WO2020051881A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113194024B (en) * | 2021-03-22 | 2023-04-18 | 维沃移动通信(杭州)有限公司 | Information display method and device and electronic equipment |
CN113207025B (en) * | 2021-04-30 | 2023-03-28 | 北京字跳网络技术有限公司 | Video processing method and device, electronic equipment and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1972478A (en) * | 2005-11-24 | 2007-05-30 | 展讯通信(上海)有限公司 | A novel method for mobile phone reading short message |
US20070274467A1 (en) * | 2006-05-09 | 2007-11-29 | Pearson Larry B | Methods and apparatus to provide voice control of a dial tone and an audio message in the initial off hook period |
US8150003B1 (en) * | 2007-01-23 | 2012-04-03 | Avaya Inc. | Caller initiated undivert from voicemail |
CN103095557A (en) * | 2012-12-18 | 2013-05-08 | 上海量明科技发展有限公司 | Instant messaging information voice output method and system |
CN103257787A (en) * | 2013-05-16 | 2013-08-21 | 北京小米科技有限责任公司 | Method and device for starting voice assistant application |
CN103533519A (en) * | 2012-07-06 | 2014-01-22 | 盛乐信息技术(上海)有限公司 | Short message broadcasting method and system |
CN104184887A (en) * | 2014-07-29 | 2014-12-03 | 小米科技有限责任公司 | Message prompting method and device and terminal equipment |
CN104991894A (en) * | 2015-05-14 | 2015-10-21 | 深圳市万普拉斯科技有限公司 | Instant chat message browsing method and system |
CN105120060A (en) * | 2015-07-07 | 2015-12-02 | 深圳市听八方科技有限公司 | Method for voice playing notification message of smart phone |
WO2015184839A1 (en) * | 2014-11-04 | 2015-12-10 | 中兴通讯股份有限公司 | Information alerting method and device |
CN105376134A (en) * | 2014-08-26 | 2016-03-02 | 腾讯科技(北京)有限公司 | Method and device for displaying communication message |
CN105739831A (en) * | 2016-02-01 | 2016-07-06 | 珠海市魅族科技有限公司 | Display method and device of message contents |
CN106412282A (en) * | 2016-09-26 | 2017-02-15 | 维沃移动通信有限公司 | Real-time message voice prompting method and mobile terminal |
CN107026929A (en) * | 2016-02-01 | 2017-08-08 | 广州市动景计算机科技有限公司 | Reminding method, device and the electronic equipment of applicative notifications |
CN107835117A (en) * | 2017-10-19 | 2018-03-23 | 上海爱优威软件开发有限公司 | A kind of instant communicating method and system |
CN108111678A (en) * | 2017-12-15 | 2018-06-01 | 维沃移动通信有限公司 | A kind of information cuing method and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108566484A (en) * | 2018-03-21 | 2018-09-21 | 努比亚技术有限公司 | Message treatment method, terminal device and computer readable storage medium |
2018
- 2018-09-14: WO PCT/CN2018/105737 patent/WO2020051881A1/en, active, Application Filing
- 2018-09-14: CN CN201880095663.2A patent/CN112425144B/en, active, Active
Also Published As
Publication number | Publication date |
---|---|
CN112425144B (en) | 2021-11-30 |
WO2020051881A1 (en) | 2020-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107580113B (en) | Reminding method, device, storage medium and terminal | |
CN102117614B (en) | Personalized text-to-speech synthesis and personalized speech feature extraction | |
CN106024014A (en) | Voice conversion method and device and mobile terminal | |
CN108021572B (en) | Reply information recommendation method and device | |
US20200029156A1 (en) | Method for Processing Information and Electronic Device | |
US20060221935A1 (en) | Method and apparatus for representing communication attributes | |
CN109144255A (en) | The increased system and method for tactile for speech-to-text conversion | |
CN108134876A (en) | Dialog analysis method, apparatus, storage medium and mobile terminal | |
CN108694947A (en) | Sound control method, device, storage medium and electronic equipment | |
WO2020052307A1 (en) | Permission configuration method and related product | |
CN112425144B (en) | Information prompting method and related product | |
CN109348467A (en) | Emergency call realization method, electronic device and computer readable storage medium | |
CN107135452A (en) | Audiphone adaptation method and device | |
CN109120781B (en) | Information prompting method, electronic device and computer readable storage medium | |
CN105120061B (en) | The display methods and device of message | |
KR100617756B1 (en) | Method for displaying status information in wireless terminal | |
US20090143049A1 (en) | Mobile telephone hugs including conveyed messages | |
CN108132717A (en) | Recommendation method, apparatus, storage medium and the mobile terminal of candidate word | |
CN107886963A (en) | Voice processing method and device and electronic equipment | |
CN105119815B (en) | The method and device of music is realized in instant communication interface | |
CN110597973A (en) | Man-machine conversation method, device, terminal equipment and readable storage medium | |
CN107707721A (en) | The way of recording, device, storage medium and the mobile terminal of mobile terminal | |
CN109725798A (en) | The switching method and relevant apparatus of Autonomous role | |
CN109951504B (en) | Information pushing method and device, terminal and storage medium | |
CN111600992B (en) | Information processing method, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |