CN107453986A - Voice-enabled chat processing method and corresponding mobile terminal - Google Patents
- Publication number
- CN107453986A (Application CN201710915042.1A)
- Authority
- CN
- China
- Prior art keywords
- vocabulary
- voice
- expression
- tone
- mood
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/06—Message adaptation to terminal or network requirements
- H04L51/063—Content adaptation, e.g. replacement of unsuitable content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
Abstract
The invention discloses a voice chat processing method and a corresponding mobile terminal, belonging to the field of network communication technology. The method includes: converting voice information input by a user into text; judging whether the text contains positive or negative vocabulary; and, when positive or negative vocabulary is present, automatically sending an emoticon corresponding to that vocabulary. In this way, an emoticon expressing the mood of the voice can be sent automatically, making the chat scenario more intelligent.
Description
Technical field
The present invention relates to the field of network communication technology, and in particular to a voice chat processing method and a corresponding mobile terminal.
Background art
When chatting by entering text, users usually attach emoticon packs to express their feelings and moods. Although these are entered manually, the input can proceed alongside the conversation. However, if a user wants to attach an emoticon when sending a voice message, the phone must be taken away from the mouth after the voice message is finished so that the emoticon can be entered by hand, and may even have to be brought back to the mouth to continue speaking. This makes the user's operation discontinuous and degrades the user experience.
Summary of the invention
It is a primary object of the present invention to propose a voice chat processing method and a corresponding mobile terminal, aiming to solve the problem of how to send emoticons conveniently during voice chat.
To achieve the above object, the present invention provides a voice chat processing method comprising the steps of: converting a voice message input by a user into text; judging whether the text contains positive or negative vocabulary; and, when positive or negative vocabulary is present, automatically sending an emoticon corresponding to that vocabulary.
Preferably, the method further includes the steps of: when there is no positive or negative vocabulary, judging whether the text contains tone vocabulary; and, when tone vocabulary is present, automatically sending an emoticon corresponding to the tone vocabulary.
Preferably, the method further includes the steps of: when no tone vocabulary is present, judging whether a corresponding mood can be parsed from the waveform of the voice message; and, when a mood can be parsed from the waveform, automatically sending an emoticon corresponding to that mood.
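The patent does not specify how a mood is parsed from the waveform. One plausible sketch uses simple prosodic features — RMS energy as a loudness measure and zero-crossing rate as a rough pitch proxy — with thresholds that are purely illustrative assumptions, not taken from the patent:

```python
import numpy as np

def parse_mood_from_waveform(samples: np.ndarray, rate: int):
    """Guess a coarse mood from prosodic features of a voice waveform.

    Returns "excited", "calm", or None when no confident guess is possible.
    The features and thresholds here are illustrative assumptions only;
    the patent leaves the parsing method unspecified.
    """
    if len(samples) == 0:
        return None
    # Loudness: root-mean-square energy of the signal.
    rms = float(np.sqrt(np.mean(samples.astype(np.float64) ** 2)))
    # Rough pitch proxy: sign changes per second (zero-crossing rate).
    sign = np.signbit(samples)
    zcr = int(np.count_nonzero(sign[1:] != sign[:-1])) / (len(samples) / rate)
    if rms > 0.3 and zcr > 200:     # loud and high-pitched -> excited
        return "excited"
    if rms < 0.05:                  # quiet throughout -> calm
        return "calm"
    return None                     # mood cannot be parsed

# A loud 440 Hz tone reads as "excited" under these thresholds.
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
print(parse_mood_from_waveform(0.8 * np.sin(2 * np.pi * 440 * t), rate))
```

A production implementation would use richer features (pitch contour, speaking rate, spectral measures), but the shape of the decision — features in, mood label or None out — matches the fallback described in this step.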
Preferably, the method further includes the step of: when no corresponding mood can be parsed from the waveform, automatically sending a default neutral emoticon.
Preferably, the method further includes the step of: when no corresponding mood can be parsed from the waveform, sending no emoticon at all.
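Taken together, the preferred steps form a fallback cascade: positive/negative vocabulary first, then tone vocabulary, then the waveform, and finally a neutral emoticon (or none). A minimal sketch of that cascade, with word lists and emoticon names invented for illustration:

```python
POSITIVE = {"happy": "laugh", "excited": "grin"}   # word -> emoticon
NEGATIVE = {"sad": "cry", "unhappy": "frown"}
TONE = {"ah": "surprise", "haha": "laugh"}         # modal particles

def choose_emoticon(text, waveform_emoticon=None, default_neutral=True):
    """Pick an emoticon for a transcribed voice message, following the
    fallback order in the patent. `waveform_emoticon` stands in for the
    emoticon derived from waveform analysis (None when no mood was parsed)."""
    words = text.lower().split()
    for w in words:                    # 1. positive/negative vocabulary
        if w in POSITIVE:
            return POSITIVE[w]
        if w in NEGATIVE:
            return NEGATIVE[w]
    for w in words:                    # 2. tone vocabulary
        if w in TONE:
            return TONE[w]
    if waveform_emoticon is not None:  # 3. mood parsed from the waveform
        return waveform_emoticon
    # 4. fallback: neutral emoticon, or nothing at all
    return "neutral" if default_neutral else None

print(choose_emoticon("i am so happy today"))   # -> laugh
```

The two final "Preferably" alternatives correspond to the `default_neutral` flag: a default neutral emoticon versus sending nothing.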
Preferably, before the step of converting a voice message input by the user into text, the method further includes the step of: receiving an operation by which the user enables the function of sending emoticons in voice chat.
Preferably, the step of judging whether the text contains positive or negative vocabulary includes: comparing the words contained in the text against a preset positive vocabulary library and a preset negative vocabulary library, so as to parse out the positive or negative words contained in the text; and obtaining, from the settings in the positive and negative vocabulary libraries, the emoticon corresponding to the positive or negative word.
Preferably, the step of judging whether the text contains tone vocabulary includes: comparing the words contained in the text against a tone vocabulary library, so as to parse out the modal particles contained in the text; and determining the mood expressed by the voice message according to the parsed tone vocabulary and the mood and emoticon set for each tone word in the tone vocabulary library, thereby obtaining the emoticon corresponding to the tone vocabulary.
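As a sketch, a tone vocabulary library of this kind can map each modal particle to both a mood and an emoticon; the particular particles, moods, and emoticon names below are invented for illustration, since the patent gives no concrete entries:

```python
# Each tone word (modal particle) carries a mood and an emoticon.
TONE_LEXICON = {
    "wow": {"mood": "surprised", "emoticon": "open-mouth"},
    "ugh": {"mood": "annoyed",   "emoticon": "eye-roll"},
    "hmm": {"mood": "doubtful",  "emoticon": "thinking"},
}

def parse_tone(text):
    """Return (mood, emoticon) for the first tone word found, else None."""
    for word in text.lower().split():
        entry = TONE_LEXICON.get(word.strip("!?.,"))
        if entry:
            return entry["mood"], entry["emoticon"]
    return None

print(parse_tone("wow that was fast"))   # -> ('surprised', 'open-mouth')
```

The two-level structure reflects the claim: the lexicon first determines the mood expressed by the voice message, and the emoticon is obtained from the same entry.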
In addition, to achieve the above object, the present invention also proposes a mobile terminal comprising: a memory, a processor, a screen, and a voice chat processing program stored on the memory and runnable on the processor, the voice chat processing program implementing the steps of the voice chat processing method described above when executed by the processor.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium on which a voice chat processing program is stored, the voice chat processing program implementing the steps of the voice chat processing method described above when executed by a processor.
With the voice chat processing method, mobile terminal and computer-readable storage medium proposed by the present invention, when a voice message is sent during a voice chat, an emoticon expressing the mood of the voice can be sent automatically after the message, making the chat scenario more intelligent and improving the user experience.
Brief description of the drawings
Fig. 1 is a hardware architecture diagram of a mobile terminal for realizing each embodiment of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of a voice chat processing method proposed by the first embodiment of the present invention;
Fig. 4 is a flowchart of a voice chat processing method proposed by the second embodiment of the present invention;
Fig. 5 is a flowchart of a voice chat processing method proposed by the third embodiment of the present invention;
Fig. 6 is a flowchart of a voice chat processing method proposed by the fourth embodiment of the present invention;
Figs. 7(a)-7(c) are schematic diagrams of interfaces in which various emoticons are sent in the present invention;
Fig. 8 is a module diagram of a mobile terminal proposed by the fifth embodiment of the present invention;
Fig. 9 is a module diagram of a voice chat processing system proposed by the sixth embodiment of the present invention;
Fig. 10 is a module diagram of a voice chat processing system proposed by the seventh embodiment of the present invention.
The realization, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module", "part" and "unit" may be used interchangeably.
A terminal can be implemented in a variety of forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
A mobile terminal is taken as an example in the following description. Those skilled in the art will appreciate that, apart from elements used especially for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a hardware architecture diagram of a mobile terminal realizing each embodiment of the present invention, the mobile terminal 100 may include parts such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal: a mobile terminal may include more or fewer parts than illustrated, combine some parts, or arrange the parts differently.
The parts of the mobile terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals while sending and receiving messages or during a call. Specifically, downlink information from a base station is received and then passed to the processor 110 for processing, and uplink data are sent to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that the module is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may, when the mobile terminal 100 is in a mode such as a call-signal reception mode, a call mode, a recording mode, a speech recognition mode or a broadcast reception mode, convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in an operational mode such as a phone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the case of the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference produced while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensors include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used in applications for recognizing the posture of the phone (such as landscape/portrait switching, related games and magnetometer pose calibration) and in vibration-recognition functions (such as a pedometer or tap detection). The phone may also be configured with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be described here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to produce key-signal input related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connected devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be realized in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse and a joystick, which are not limited here.
Further, the touch panel 1071 can cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent parts realizing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 108 can be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs required by at least one function (such as a sound playback function or an image playback function) and the like, and the data storage area can store data created according to the use of the phone (such as audio data or a phone book) and the like. In addition, the memory 109 can include a high-speed random access memory, and can also include a non-volatile memory such as at least one magnetic disk storage device, a flash memory device or another solid-state storage device.
The processor 110 is the control center of the mobile terminal. Using various interfaces and lines, it connects each part of the whole mobile terminal, and performs the various functions and data processing of the mobile terminal by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units. Preferably, the processor 110 can integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs and so on, and the modem processor mainly handles wireless communication. It will be understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) supplying power to each part. Preferably, the power supply 111 can be logically connected with the processor 110 through a power management system, so that functions such as managing charging, discharging and power consumption are realized through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of the universal mobile communication technology, and the LTE system includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator's IP services 204.
Specifically, the UE 201 may be the above-described terminal 100, which is not described again here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. The eNodeB 2021 can be connected with the other eNodeBs 2022 by backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 can include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035 and a PCRF (Policy and Charging Rules Function) 2036, among others. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as a home location register (not shown) and holds user-specific information about service features, data rates and the like. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 can include the Internet, intranets, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
Based on the above mobile terminal hardware configuration and communication network system, the embodiments of the method of the present invention are proposed.
The voice chat processing method proposed by the present invention is used so that, when a voice message is sent during a voice chat, an emoticon expressing the mood of the voice can be sent automatically after the message, making the chat scenario more intelligent and improving the user experience.
Embodiment one
As shown in Fig. 3, the first embodiment of the present invention proposes a voice chat processing method including the following steps:
S302: converting a voice message input by a user into text.
Specifically, after the user inputs a voice message, the voice message is recognized and converted into the corresponding text using speech-to-text conversion technology. In this embodiment, an existing speech-to-text conversion technology can be used, which will not be described here.
S304: judging whether the text contains positive or negative vocabulary.
Specifically, after the text is obtained, the words contained in it are parsed to determine whether there are words clearly expressing a positive mood (such as "happy", "excited" or "thrilled") or words expressing a negative feeling (such as "unhappy" or "sad"). When a positive word is parsed out, it is determined that the voice message expresses a positive mood; when a negative word is parsed out, it is determined that the voice message expresses a negative mood.
In this embodiment, a positive vocabulary library and a negative vocabulary library can be set in advance, and the words contained in the text are compared against the positive vocabulary library and the negative vocabulary library, so that the positive or negative words contained in the text can be parsed out and the mood expressed by the voice message can be judged.
In other embodiments, the positive and negative vocabularies may be further subdivided: the positive vocabulary into words expressing happiness, words expressing excitement and so on, and the negative vocabulary into words expressing sadness, words expressing tension, words expressing gloom and so on. On finding positive words, it can be concluded that the user is in a positive state of mind, and the phone can automatically send an emoticon such as a laugh; if words of negative feeling are found (such as "unhappy" or "sad"), an emoticon such as crying can be sent automatically.
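The subdivision described above can be sketched as nested categories, each mapped to its own emoticon. The categories, words and emoticon names below are illustrative choices of ours; the patent names only the broad moods:

```python
# Positive and negative vocabularies subdivided by the mood they express.
VOCABULARY = {
    ("positive", "joy"):        {"happy", "glad", "delighted"},
    ("positive", "excitement"): {"excited", "thrilled"},
    ("negative", "sadness"):    {"sad", "heartbroken"},
    ("negative", "tension"):    {"nervous", "anxious"},
}
EMOTICON_FOR = {
    "joy": "laugh", "excitement": "grin",
    "sadness": "cry", "tension": "sweat",
}

def classify_word(word):
    """Return (polarity, sub-mood, emoticon) for a word, or None."""
    for (polarity, mood), words in VOCABULARY.items():
        if word.lower() in words:
            return polarity, mood, EMOTICON_FOR[mood]
    return None

print(classify_word("nervous"))   # -> ('negative', 'tension', 'sweat')
```

Keeping the emoticon mapping per sub-mood rather than per polarity lets "nervous" and "sad" trigger different emoticons even though both are negative.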
S306: when there is positive or negative vocabulary, automatically sending the emoticon corresponding to the positive or negative vocabulary.
Specifically, the emoticon corresponding to each type of word can be set in the positive vocabulary library and the negative vocabulary library. When it is judged that the text contains positive or negative vocabulary, the emoticon corresponding to the positive or negative word (such as a laughing emoticon or a crying emoticon) is obtained and sent automatically after the voice message. For example, Fig. 7(a) shows a schematic diagram of an interface in which an emoticon expressing a positive mood is sent after the voice message, and Fig. 7(b) shows a schematic diagram of an interface in which an emoticon expressing a negative feeling is sent after the voice message.
Embodiment two
As shown in Fig. 4, the second embodiment of the present invention proposes a voice chat processing method. In the second embodiment, steps S402-S406 of the voice chat processing method are similar to steps S302-S306 of the first embodiment; the difference is that this method also includes steps S408-S416.
The method includes the following steps:
S402: converting a voice message input by a user into text.
Specifically, after the user inputs a voice message, the voice message is recognized and converted into the corresponding text using speech-to-text conversion technology. In this embodiment, an existing speech-to-text conversion technology can be used, which will not be described here.
S404: judging whether the text contains positive or negative vocabulary. If there is positive or negative vocabulary, step S406 is performed; if there is no positive or negative vocabulary, step S408 is performed.
Specifically, after the text is obtained, the words contained in it are parsed to determine whether there are words clearly expressing a positive mood (such as "happy", "excited" or "thrilled") or words expressing a negative feeling (such as "unhappy" or "sad"). When a positive word is parsed out, it is determined that the voice message expresses a positive mood; when a negative word is parsed out, it is determined that the voice message expresses a negative mood.
In the present embodiment, an active vocabulary storehouse and a passive vocabulary storehouse can be pre-set, the word is believed
Vocabulary included in breath is contrasted with the active vocabulary storehouse and passive vocabulary storehouse, you can is parsed in the text information
Comprising active vocabulary or passive vocabulary, so as to judge the mood expressed by the voice messaging.
In other embodiments, can also be finely divided further directed to the active vocabulary and passive vocabulary, by described in
Active vocabulary, which is divided into, expresses happy vocabulary, the vocabulary for expressing excitement etc., and it is sad that the passive vocabulary is divided into expression
Vocabulary, gloomy vocabulary of the nervous vocabulary of expression, expression etc..These vocabulary, we can assert that user is now in actively
Phychology, mobile phone can send automatically some laugh etc expression;If it find that vocabulary (such as unhappy, the wound of negative feeling
Heart etc.) vocabulary, the expression such as cry can be sent automatically.
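The vocabulary-library lookup described above can be sketched as a simple set-membership test. This is a minimal illustration only, not the patent's implementation; the word sets and the `classify_mood` helper are invented for the example.

```python
from typing import Optional

# Illustrative stand-ins for the preset positive- and negative-vocabulary libraries.
POSITIVE_WORDS = {"happy", "excited", "thrilled", "great"}
NEGATIVE_WORDS = {"unhappy", "sad", "gloomy", "nervous"}

def classify_mood(text: str) -> Optional[str]:
    """Return 'positive', 'negative', or None if no mood word is found."""
    words = text.lower().split()
    if any(w in POSITIVE_WORDS for w in words):
        return "positive"
    if any(w in NEGATIVE_WORDS for w in words):
        return "negative"
    return None
```

For example, `classify_mood("I am so happy today")` yields `"positive"`, while a neutral sentence yields `None` and falls through to the tone-word check of step S408.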
S406: the expression corresponding to the positive or negative word is sent automatically.
Specifically, an expression corresponding to each type of word can be set in the positive-vocabulary library and the negative-vocabulary library. When the text information is judged to contain a positive or negative word, the expression corresponding to that word (for example, a laughing expression or a crying expression) is obtained and sent automatically after the voice information. For example, Fig. 7(a) shows an interface diagram of an expression conveying a positive mood being sent after the voice information, and Fig. 7(b) shows an interface diagram of an expression conveying a negative mood being sent after the voice information.
S408: judge whether the text information contains a tone word. If it does, perform step S410; if it does not, perform step S412.
Specifically, if no obvious positive or negative word is found in the text information, it is then determined whether the text information contains a tone word. After a tone word is parsed out, the mood expressed by the tone word is judged.
In this embodiment, a tone-vocabulary library can be preset. The words contained in the text information are compared against the tone-vocabulary library to parse out any tone words they contain. The tone-vocabulary library also records the mood and expression corresponding to each tone word, so the mood expressed by the voice information can be determined from the parsed tone words. The moods corresponding to tone words include positive moods and negative moods. In other embodiments, the moods corresponding to tone words can be further subdivided, dividing the tone words into words expressing exclamation, words expressing doubt, words expressing anger, and so on.
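A tone-vocabulary library that records a mood and an expression for each tone word, as described above, can be sketched as a dictionary lookup. The entries and the `lookup_tone` helper are invented for the example and are not taken from the patent.

```python
# Illustrative tone-word library: each tone word maps to (mood, expression).
TONE_LEXICON = {
    "wow":  ("exclamation", "😮"),
    "huh":  ("doubt", "🤔"),
    "hmph": ("anger", "😠"),
}

def lookup_tone(text):
    """Return the (mood, expression) of the first tone word found, else None."""
    for word in text.lower().split():
        if word in TONE_LEXICON:
            return TONE_LEXICON[word]
    return None
```

Because the library stores both the mood and the expression, a single lookup gives step S410 everything it needs to send.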
S410: the expression corresponding to the tone word is sent automatically.
Specifically, according to the expressions set in the tone-vocabulary library, when the text information is judged to contain a tone word, the expression corresponding to that tone word is obtained and sent automatically after the voice information.
S412: judge whether the corresponding mood can be parsed from the waveform of the voice information. If it can, perform step S414; if it cannot, perform step S416.
Specifically, when the text information contains neither an obvious positive or negative word nor a tone word, the waveform of the voice information can be parsed, and the corresponding mood judged from features of the waveform such as pitch, volume, and rhythm. For example, when a person is sad, the voice tends to be hoarse, the pauses between phrases long, the speech rate slow, and the intonation flat; when a person is agitated, the voice is loud, the speech rate fast, and the tone tense.
In 2013, a speech-recognition platform developed by a British company could detect five moods of a user (happiness, sadness, fear, anger, and apathy) by analysing tone of voice, with a recognition accuracy of 70%-80%. In 2012, an Israeli company developed an algorithm that analyses moods such as anger, anxiety, happiness, and satisfaction from changes in tongue movement and vocal range; to date, the algorithm can analyse 400 complex moods across 11 categories.
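The waveform cues mentioned above (volume, speech rate, pause length) could be turned into a rule as in the sketch below. The feature names and thresholds are invented for illustration; a production system would extract these features with signal-processing tools and tune or learn the thresholds.

```python
# Illustrative heuristic: judge mood from precomputed waveform features.
# Thresholds are made up for the example; they are not from the patent.
def mood_from_waveform(mean_volume, words_per_second, mean_pause_s):
    """mean_volume in [0, 1]; returns a mood label or None."""
    if mean_volume > 0.8 and words_per_second > 3.5:
        return "agitated"   # loud, fast speech
    if words_per_second < 1.5 and mean_pause_s > 0.8:
        return "sad"        # slow speech with long pauses
    return None             # mood cannot be parsed from the waveform
```

Returning `None` corresponds to the branch in which step S416 falls back to a preset neutral expression.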
S414: the expression corresponding to the judged mood is sent automatically.
Specifically, after the corresponding mood is judged from the waveform, the expression corresponding to that mood is obtained and sent automatically after the voice information.
S416: a preset neutral expression is sent automatically.
Specifically, when the corresponding mood cannot be judged from the waveform, a preset expression of neutral mood, such as a silly-smile expression, is sent directly. Fig. 7(c) shows an interface diagram of an expression conveying a neutral mood being sent after the voice information.
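The fallback chain of steps S402-S416 can be sketched end to end as follows. All word sets, tone words, and expressions here are invented for the example; only the order of the checks (mood words, then tone words, then waveform, then neutral fallback) follows the description above.

```python
# End-to-end sketch of the decision chain: S404 -> S408 -> S412 -> S416.
def choose_expression(text, waveform_mood=None):
    positives = {"happy", "excited"}
    negatives = {"sad", "unhappy"}
    tones = {"wow": "😮", "huh": "🤔"}
    words = text.lower().split()
    if any(w in positives for w in words):
        return "😄"              # S406: positive word -> laughing expression
    if any(w in negatives for w in words):
        return "😢"              # S406: negative word -> crying expression
    for w in words:
        if w in tones:
            return tones[w]      # S410: tone word -> its expression
    if waveform_mood == "agitated":
        return "😠"              # S414: mood parsed from the waveform
    if waveform_mood == "sad":
        return "😢"
    return "🙂"                  # S416: preset neutral expression
```

Note that the third embodiment differs only at the final step: instead of the neutral fallback, it ends the flow and sends no expression.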
Embodiment three
As shown in Fig. 5, the third embodiment of the invention proposes a voice-enabled chat processing method.
The method comprises the following steps:
S500: receive the user's operation of enabling the send-expression function in voice chat.
Specifically, when the user wishes, while sending voice during a voice chat, to simultaneously send an expression that conveys the mood of the voice, the user can enable this function. In this embodiment, the user can perform the enabling operation via a physical key provided on the mobile terminal or a virtual key on the screen.
S502: convert the voice information input by the user into text information.
Specifically, after the user inputs a piece of voice information, the voice information is recognised using speech-to-text conversion technology and converted into the corresponding text information. In this embodiment, existing speech-to-text conversion technology can be used; the details are not repeated here.
S504: judge whether the text information contains a positive or negative word. If it does, perform step S506; if it does not, perform step S508.
Specifically, after the text information is obtained, the words contained in it are parsed to determine whether any of them clearly express a positive mood (for example, happy, excited, thrilled) or a negative mood (for example, unhappy, sad). When a positive word is parsed out, the voice information is judged to express a positive mood; when a negative word is parsed out, the voice information is judged to express a negative mood.
In this embodiment, a positive-vocabulary library and a negative-vocabulary library can be preset. The words contained in the text information are compared against the two libraries to parse out any positive or negative words they contain, and thereby to judge the mood expressed by the voice information.
In other embodiments, the positive and negative words can be further subdivided: the positive words into words expressing happiness, words expressing excitement, and so on; the negative words into words expressing sadness, words expressing nervousness, words expressing gloom, and so on. When such positive words are found, the user can be considered to be in a positive state of mind, and the mobile phone can automatically send a laughing expression or the like; when words of negative mood (such as "unhappy" or "sad") are found, a crying expression or the like can be sent automatically.
S506: the expression corresponding to the positive or negative word is sent automatically.
Specifically, an expression corresponding to each type of word can be set in the positive-vocabulary library and the negative-vocabulary library. When the text information is judged to contain a positive or negative word, the expression corresponding to that word (for example, a laughing expression or a crying expression) is obtained and sent automatically after the voice information. For example, Fig. 7(a) shows an interface diagram of an expression conveying a positive mood being sent after the voice information, and Fig. 7(b) shows an interface diagram of an expression conveying a negative mood being sent after the voice information.
S508: judge whether the text information contains a tone word. If it does, perform step S510; if it does not, perform step S512.
Specifically, if no obvious positive or negative word is found in the text information, it is then determined whether the text information contains a tone word. After a tone word is parsed out, the mood expressed by the tone word is judged.
In this embodiment, a tone-vocabulary library can be preset. The words contained in the text information are compared against the tone-vocabulary library to parse out any tone words they contain. The tone-vocabulary library also records the mood and expression corresponding to each tone word, so the mood expressed by the voice information can be determined from the parsed tone words. The moods corresponding to tone words include positive moods and negative moods. In other embodiments, the moods corresponding to tone words can be further subdivided, dividing the tone words into words expressing exclamation, words expressing doubt, words expressing anger, and so on.
S510: the expression corresponding to the tone word is sent automatically.
Specifically, according to the expressions set in the tone-vocabulary library, when the text information is judged to contain a tone word, the expression corresponding to that tone word is obtained and sent automatically after the voice information.
S512: judge whether the corresponding mood can be parsed from the waveform of the voice information. If it can, perform step S514; if it cannot, the flow ends, that is, no expression is sent.
Specifically, when the text information contains neither an obvious positive or negative word nor a tone word, the waveform of the voice information can be parsed, and the corresponding mood judged from features of the waveform such as pitch, volume, and rhythm. For example, when a person is sad, the voice tends to be hoarse, the pauses between phrases long, the speech rate slow, and the intonation flat; when a person is agitated, the voice is loud, the speech rate fast, and the tone tense.
S514: the expression corresponding to the judged mood is sent automatically.
Specifically, after the corresponding mood is judged from the waveform, the expression corresponding to that mood is obtained and sent automatically after the voice information.
Embodiment four
As shown in Fig. 6, the fourth embodiment of the invention proposes a voice-enabled chat processing method. In the fourth embodiment, the steps of the voice-enabled chat processing method are similar to those of the second and third embodiments; the difference is that step S600 is further included before step S402 or step S502. Specifically:
S600: receive the user's operation of enabling the send-expression function in voice chat.
Specifically, when the user wishes, while sending voice during a voice chat, to simultaneously send an expression that conveys the mood of the voice, the user can enable this function. In this embodiment, the user can perform the enabling operation via a physical key provided on the mobile terminal or a virtual key on the screen.
That is, when the user has enabled the function, the subsequent steps are performed and the corresponding expression is sent automatically after the voice information; when the user has not enabled the function, the subsequent steps are not performed, only the voice information is sent, and no corresponding expression is sent automatically.
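The S600 gate described above amounts to a simple feature toggle around the expression pipeline. The sketch below is illustrative; `pick_expression` stands in for whatever mood-to-expression logic the preceding embodiments use.

```python
# Sketch of the S600 gate: the expression pipeline runs only when the user
# has enabled the send-expression function; otherwise only the voice is sent.
def send_voice(voice_message, feature_enabled, pick_expression):
    sent = [voice_message]
    if feature_enabled:
        expression = pick_expression(voice_message)
        if expression:
            sent.append(expression)  # expression goes out after the voice
    return sent
```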
The present invention further provides a mobile terminal. The mobile terminal includes a memory, a processor, a screen, and a voice-enabled chat processing system. The voice-enabled chat processing system is used, when voice information is sent during a voice chat, to automatically send after the voice information an expression that conveys the mood of the voice, making the chat scenario more intelligent and improving the user experience.
Embodiment five
As shown in Fig. 8, the fifth embodiment of the invention proposes a mobile terminal 2. The mobile terminal 2 includes a memory 20, a processor 22, a screen 26, and a voice-enabled chat processing system 28.
The memory 20 comprises at least one type of readable storage medium and is used to store the operating system and various application software installed on the mobile terminal 2, such as the program code of the voice-enabled chat processing system 28. In addition, the memory 20 can also be used to temporarily store various data that has been or will be output.
In some embodiments, the processor 22 can be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data-processing chip. The processor 22 is generally used to control the overall operation of the mobile terminal 2. In this embodiment, the processor 22 is used to run the program code stored in the memory 20 or to process data, for example to run the voice-enabled chat processing system 28.
The screen 26 is used to display content and to receive the user's touch operations.
Embodiment six
As shown in Fig. 9, the sixth embodiment of the invention proposes a voice-enabled chat processing system 28. In this embodiment, the voice-enabled chat processing system 28 includes:
A conversion module 800, used to convert a piece of voice information input by the user into text information.
Specifically, after the user inputs a piece of voice information, the voice information is recognised using speech-to-text conversion technology and converted into the corresponding text information. In this embodiment, existing speech-to-text conversion technology can be used; the details are not repeated here.
A judging module 802, used to judge whether the text information contains a positive or negative word.
Specifically, after the text information is obtained, the words contained in it are parsed to determine whether any of them clearly express a positive mood (for example, happy, excited, thrilled) or a negative mood (for example, unhappy, sad). When a positive word is parsed out, the voice information is judged to express a positive mood; when a negative word is parsed out, the voice information is judged to express a negative mood.
In this embodiment, a positive-vocabulary library and a negative-vocabulary library can be preset. The words contained in the text information are compared against the two libraries to parse out any positive or negative words they contain, and thereby to judge the mood expressed by the voice information.
In other embodiments, the positive and negative words can be further subdivided: the positive words into words expressing happiness, words expressing excitement, and so on; the negative words into words expressing sadness, words expressing nervousness, words expressing gloom, and so on.
A sending module 804, used, when a positive or negative word is present, to automatically send the expression corresponding to the positive or negative word.
Specifically, an expression corresponding to each type of word can be set in the positive-vocabulary library and the negative-vocabulary library. When the text information is judged to contain a positive or negative word, the expression corresponding to that word (for example, a laughing expression or a crying expression) is obtained and sent automatically after the voice information. For example, Fig. 7(a) shows an interface diagram of an expression conveying a positive mood being sent after the voice information, and Fig. 7(b) shows an interface diagram of an expression conveying a negative mood being sent after the voice information.
Further, the judging module 802 is also used, when there is no positive or negative word, to judge whether the text information contains a tone word.
Specifically, if no obvious positive or negative word is found in the text information, it is then determined whether the text information contains a tone word. After a tone word is parsed out, the mood expressed by the tone word is judged.
In this embodiment, a tone-vocabulary library can be preset. The words contained in the text information are compared against the tone-vocabulary library to parse out any tone words they contain. The tone-vocabulary library also records the mood and expression corresponding to each tone word, so the mood expressed by the voice information can be determined from the parsed tone words. The moods corresponding to tone words include positive moods and negative moods. In other embodiments, the moods corresponding to tone words can be further subdivided, dividing the tone words into words expressing exclamation, words expressing doubt, words expressing anger, and so on.
The sending module 804 is also used, when a tone word is contained, to automatically send the expression corresponding to the tone word.
Specifically, according to the expressions set in the tone-vocabulary library, when the text information is judged to contain a tone word, the expression corresponding to that tone word is obtained and sent automatically after the voice information.
Further, the judging module 802 is also used, when no tone word is contained, to judge whether the corresponding mood can be parsed from the waveform of the voice information.
Specifically, when the text information contains neither an obvious positive or negative word nor a tone word, the waveform of the voice information can be parsed, and the corresponding mood judged from features of the waveform such as pitch, volume, and rhythm. For example, when a person is sad, the voice tends to be hoarse, the pauses between phrases long, the speech rate slow, and the intonation flat; when a person is agitated, the voice is loud, the speech rate fast, and the tone tense.
The sending module 804 is also used, when the corresponding mood can be parsed from the waveform, to automatically send the expression corresponding to the judged mood.
Specifically, after the corresponding mood is judged from the waveform, the expression corresponding to that mood is obtained and sent automatically after the voice information.
Further, the sending module 804 is also used, when the corresponding mood cannot be parsed from the waveform, to automatically send a preset neutral expression.
Specifically, when the corresponding mood cannot be judged from the waveform, a preset expression of neutral mood, such as a silly-smile expression, is sent directly. Fig. 7(c) shows an interface diagram of an expression conveying a neutral mood being sent after the voice information.
In other embodiments, when the corresponding mood cannot be parsed from the waveform, the sending module 804 can also send no expression at all.
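The module structure described above (conversion module 800, judging module 802, sending module 804, composed into system 28) can be sketched as a few small classes. This is a structural illustration only: the speech-to-text call is stubbed out, and the word sets and expressions are invented for the example.

```python
# Structural sketch of the voice-enabled chat processing system 28.
class ConversionModule:            # module 800: voice -> text
    def convert(self, voice):
        return voice["transcript"]  # stand-in for a real ASR call

class JudgingModule:               # module 802: text -> mood
    POSITIVE, NEGATIVE = {"happy"}, {"sad"}
    def judge(self, text):
        words = text.lower().split()
        if any(w in self.POSITIVE for w in words):
            return "positive"
        if any(w in self.NEGATIVE for w in words):
            return "negative"
        return None

class SendingModule:               # module 804: mood -> expression
    EXPRESSIONS = {"positive": "😄", "negative": "😢", None: "🙂"}
    def send(self, mood):
        return self.EXPRESSIONS[mood]  # neutral fallback when mood is None

class VoiceChatProcessingSystem:   # system 28 composes the three modules
    def __init__(self):
        self.conv = ConversionModule()
        self.judger = JudgingModule()
        self.sender = SendingModule()
    def process(self, voice):
        text = self.conv.convert(voice)
        return self.sender.send(self.judger.judge(text))
```

Keeping the three responsibilities in separate modules mirrors the patent's decomposition and makes it easy to add the receiving module 806 of embodiment seven as a gate in front of `process`.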
Embodiment seven
As shown in Fig. 10, the seventh embodiment of the invention proposes a voice-enabled chat processing system 28. In this embodiment, the voice-enabled chat processing system 28 includes, in addition to the conversion module 800, judging module 802, and sending module 804 of the sixth embodiment, a receiving module 806.
The receiving module 806 is used to receive the user's operation of enabling the send-expression function in voice chat.
Specifically, when the user wishes, while sending voice during a voice chat, to simultaneously send an expression that conveys the mood of the voice, the user can enable this function. In this embodiment, the user can perform the enabling operation via a physical key provided on the mobile terminal or a virtual key on the screen.
That is, when the user has enabled the function, the conversion module 800, judging module 802, and sending module 804 are triggered to perform the subsequent steps, and the corresponding expression is sent automatically after the voice information; when the user has not enabled the function, the subsequent steps are not performed, only the voice information is sent, and no corresponding expression is sent automatically.
It should be noted that, as used herein, the terms "comprising" and "including" and any other variants thereof are intended to be non-exclusive, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Unless further limited, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes it.
The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described. The above embodiments are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can make many further forms without departing from the concept of the invention and the scope of the claimed protection, all of which fall within the protection of the present invention.
Claims (10)
1. A voice-enabled chat processing method, characterised in that the method comprises the steps of:
converting voice information input by a user into text information;
judging whether the text information contains a positive or negative word; and
when a positive or negative word is present, automatically sending the expression corresponding to the positive or negative word.
2. The voice-enabled chat processing method according to claim 1, characterised in that the method further comprises the steps of:
when there is no positive or negative word, judging whether the text information contains a tone word; and
when a tone word is contained, automatically sending the expression corresponding to the tone word.
3. The voice-enabled chat processing method according to claim 2, characterised in that the method further comprises the steps of:
when no tone word is contained, judging whether the corresponding mood can be parsed from the waveform of the voice information; and
when the corresponding mood can be parsed from the waveform, automatically sending the expression corresponding to the judged mood.
4. The voice-enabled chat processing method according to claim 3, characterised in that the method further comprises the step of:
when the corresponding mood cannot be parsed from the waveform, automatically sending a preset neutral expression.
5. The voice-enabled chat processing method according to claim 3, characterised in that the method further comprises the step of:
when the corresponding mood cannot be parsed from the waveform, sending no expression.
6. The voice-enabled chat processing method according to any one of claims 1-5, characterised in that, before the step of converting the voice information input by the user into text information, the method further comprises the step of:
receiving the user's operation of enabling the send-expression function in voice chat.
7. The voice-enabled chat processing method according to claim 1, characterised in that the step of judging whether the text information contains a positive or negative word comprises:
comparing the words contained in the text information against a preset positive-vocabulary library and negative-vocabulary library, and parsing out the positive or negative word contained in the text information; and
obtaining, according to the settings in the positive-vocabulary library and the negative-vocabulary library, the expression corresponding to the positive or negative word.
8. The voice-enabled chat processing method according to claim 2, characterised in that the step of judging whether the text information contains a tone word comprises:
comparing the words contained in the text information against the tone-vocabulary library, and parsing out the tone word contained in the text information; and
determining, according to the parsed tone word and the mood and expression corresponding to each tone word set in the tone-vocabulary library, the mood expressed by the voice information, and obtaining the expression corresponding to the tone word.
9. A mobile terminal, characterised in that the mobile terminal comprises a memory, a processor, a screen, and a voice-enabled chat processing program stored on the memory and runnable on the processor, wherein the voice-enabled chat processing program, when executed by the processor, implements the steps of the voice-enabled chat processing method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterised in that a voice-enabled chat processing program is stored on the computer-readable storage medium, and the voice-enabled chat processing program, when executed by a processor, implements the steps of the voice-enabled chat processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710915042.1A CN107453986A (en) | 2017-09-30 | 2017-09-30 | Voice-enabled chat processing method and corresponding mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107453986A true CN107453986A (en) | 2017-12-08 |
Family
ID=60497602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710915042.1A Pending CN107453986A (en) | 2017-09-30 | 2017-09-30 | Voice-enabled chat processing method and corresponding mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107453986A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109347721A (en) * | 2018-09-28 | 2019-02-15 | 维沃移动通信有限公司 | A kind of method for sending information and terminal device |
CN112365893A (en) * | 2020-10-30 | 2021-02-12 | 上海中通吉网络技术有限公司 | Voice conversion method, device and equipment |
CN113409790A (en) * | 2020-03-17 | 2021-09-17 | Oppo广东移动通信有限公司 | Voice conversion method, device, terminal and storage medium |
WO2022012579A1 (en) * | 2020-07-14 | 2022-01-20 | 维沃移动通信有限公司 | Message display method, apparatus, and electronic device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105956003A (en) * | 2016-04-20 | 2016-09-21 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN106024014A (en) * | 2016-05-24 | 2016-10-12 | 努比亚技术有限公司 | Voice conversion method and device and mobile terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107562336A | | Method, device and computer-readable storage medium for controlling a floating ball |
CN107027114A | | SIM card switching method, device and computer-readable storage medium |
CN108536481A | | Application launching method, mobile terminal and computer storage medium |
CN107343083A | | Method, apparatus and computer-readable storage medium for improving the gaming experience |
CN107566635A | | Screen brightness setting method, mobile terminal and computer-readable storage medium |
CN107436779A | | Application management method, device and computer-readable storage medium |
CN107517494A | | Terminal battery level display method, terminal and computer-readable storage medium |
CN107748645A | | Reading method, mobile terminal and computer-readable storage medium |
CN107528369A | | Terminal, wireless charging control method therefor, and computer-readable storage medium |
CN107277250A | | Method, terminal and computer-readable storage medium for displaying chat messages of followed contacts |
CN107340833A | | Terminal temperature control method, terminal and computer-readable storage medium |
CN107181700A | | Control method for mobile terminal applications, mobile terminal and storage medium |
CN107682547A | | Voice information adjustment method, device and computer-readable storage medium |
CN107707450A | | File transmission method, apparatus and computer-readable storage medium |
CN107453986A | | Voice-enabled chat processing method and corresponding mobile terminal |
CN107729103A | | Theme switching method, mobile terminal and computer storage medium |
CN107729115A | | Display method, device and computer storage medium |
CN108418948A | | Reminder method, mobile terminal and computer storage medium |
CN108172161A | | Flexible-screen-based display method, mobile terminal and computer-readable storage medium |
CN107181865A | | Unread short message processing method, terminal and computer-readable storage medium |
CN107844230A | | Advertisement page adjustment method, mobile terminal and computer-readable storage medium |
CN107846503A | | Application icon display method, device, terminal and computer-readable storage medium |
CN107124513A | | Screen-off method during a call, mobile terminal and computer-readable storage medium |
CN107818787A | | Voice information processing method, terminal and computer-readable storage medium |
CN107621915A | | Message prompt method, device and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171208 |