CN104836979A - Information interaction method and device - Google Patents

Information interaction method and device

Info

Publication number
CN104836979A
CN104836979A
Authority
CN
China
Prior art keywords
data
state information
audio frequency
special effect
special effects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410098843.XA
Other languages
Chinese (zh)
Inventor
左洪涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Beijing Co Ltd
Original Assignee
Tencent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Beijing Co Ltd filed Critical Tencent Technology Beijing Co Ltd
Priority to CN201410098843.XA priority Critical patent/CN104836979A/en
Publication of CN104836979A publication Critical patent/CN104836979A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information interaction method and device, belonging to the field of computer technology. The method comprises: acquiring audio data at a local terminal and determining state information corresponding to the audio data; acquiring audio special-effect data and/or image special-effect data corresponding to the state information; and sending the audio data, together with the audio special-effect data and/or the image special-effect data, to an opposite terminal, which plays the audio data, and plays the audio special-effect data and/or displays the image special-effect data. Because the special-effect data is obtained automatically from the state information corresponding to the acquired audio data, the content of the information interaction can be enriched without any extra user operation, so the interactive experience is improved.

Description

Information interaction method and apparatus
Technical field
The present invention relates to the field of computer technology, and in particular to an information interaction method and apparatus.
Background technology
With the development of computer technology, users' expectations for the information interaction experience are rising. Since continually meeting user demand is key to the development of information technology, how to provide richer information interaction has become a problem of considerable concern to those skilled in the art.
In the related art, besides conventional voice-and-image video interaction, a user can also invoke function applications of the video-chat software during a video session. When the user at a first terminal selects such a function application by an operation such as a mouse click, special-effect information such as text and animation is obtained; the first terminal sends this special-effect information to a second terminal in the video mode, and both the first terminal and the second terminal display it, thereby providing richer information interaction.
In the process of implementing the present invention, the inventor found that the related art has at least the following problem:
When providing richer information interaction, the related art relies on the function applications in the video-chat software and requires the user to perform repeated click operations during the video session. This adds extra operations and degrades the interactive experience.
Summary of the invention
To solve the above problem of the related art, embodiments of the present invention provide an information interaction method and apparatus. The technical solutions are as follows:
In one aspect, an information interaction method is provided, the method comprising:
acquiring audio data at a local terminal, and determining state information corresponding to the audio data;
acquiring audio special-effect data and/or image special-effect data corresponding to the state information;
sending the audio data to an opposite terminal, and sending the audio special-effect data and/or the image special-effect data to the opposite terminal, so that the opposite terminal plays the audio data, and plays the audio special-effect data and/or displays the image special-effect data.
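The three claimed steps can be sketched as a minimal pipeline. This is an illustrative sketch only: the function names (`detect_state`, `interact`) and the word-to-state rules are assumptions for demonstration, not anything specified by the patent.

```python
def detect_state(audio_text):
    """Toy state detector: maps recognized speech to a state label (illustrative)."""
    word_to_state = {"happy": "happy", "don't know": "helpless", "sad": "pained"}
    return word_to_state.get(audio_text, "neutral")

def interact(audio_text, effects_table):
    """One round of the claimed method: classify the audio, attach matching effects."""
    state = detect_state(audio_text)                              # step 1
    audio_fx, image_fx = effects_table.get(state, (None, None))   # step 2
    # Step 3: in the real method the audio plus effects are sent to the
    # opposite terminal; here we just return what would be sent.
    return {"audio": audio_text, "audio_fx": audio_fx, "image_fx": image_fx}

effects = {"happy": ("cheerful jingle", "smiling face"),
           "helpless": ("helpless jingle", "helpless face")}
print(interact("happy", effects))
```

When no effect matches the detected state, only the audio itself would be forwarded, which matches the "and/or" phrasing of the claim.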
In another aspect, an information interaction apparatus is provided, the apparatus comprising:
a first acquisition module, configured to acquire audio data at a local terminal;
a determination module, configured to determine state information corresponding to the audio data;
a second acquisition module, configured to acquire audio special-effect data and/or image special-effect data corresponding to the state information;
a first sending module, configured to send the audio data to an opposite terminal, so that the opposite terminal plays the audio data;
a second sending module, configured to send the audio special-effect data and/or the image special-effect data to the opposite terminal, so that the opposite terminal plays the audio special-effect data and/or displays the image special-effect data.
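The modular structure above can be mirrored in code, one method per claimed module. All class and method names here are invented for illustration; the lambda stands in for a microphone source.

```python
class InfoInteractionApparatus:
    """Sketch of the claimed apparatus: one method per claimed module."""
    def __init__(self, peer_inbox, effects_table):
        self.peer_inbox = peer_inbox          # stands in for the opposite terminal
        self.effects_table = effects_table

    def acquire_audio(self, source):          # first acquisition module
        return source()

    def determine_state(self, audio):         # determination module (toy rule)
        return "happy" if "happy" in audio else "neutral"

    def acquire_effects(self, state):         # second acquisition module
        return self.effects_table.get(state, (None, None))

    def send_audio(self, audio):              # first sending module
        self.peer_inbox.append(("audio", audio))

    def send_effects(self, audio_fx, image_fx):   # second sending module
        self.peer_inbox.append(("effects", audio_fx, image_fx))

inbox = []
dev = InfoInteractionApparatus(inbox, {"happy": ("cheerful music", "smiling face")})
audio = dev.acquire_audio(lambda: "happy birthday")
dev.send_audio(audio)
dev.send_effects(*dev.acquire_effects(dev.determine_state(audio)))
print(inbox)
```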
The technical solutions provided by the embodiments of the present invention bring the following beneficial effect:
Audio data is acquired at the local terminal, and audio special-effect data and/or image special-effect data is acquired according to the state information corresponding to the audio data; after the opposite terminal receives the audio data and the audio special-effect data and/or the image special-effect data, it plays the audio data, and plays the audio special-effect data and/or displays the image special-effect data. Since the content of the information interaction is enriched without any extra operation, the interactive experience is improved.
Accompanying drawing explanation
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an information interaction method according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of an information interaction method according to Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of the display interface of terminal A according to Embodiment 2 of the present invention;
Fig. 4 is a schematic structural diagram of a first information interaction apparatus according to Embodiment 3 of the present invention;
Fig. 5 is a schematic structural diagram of the first acquisition module according to Embodiment 3 of the present invention;
Fig. 6 is a schematic structural diagram of the determination module according to Embodiment 3 of the present invention;
Fig. 7 is a schematic structural diagram of a second information interaction apparatus according to Embodiment 3 of the present invention;
Fig. 8 is a schematic structural diagram of a third information interaction apparatus according to Embodiment 3 of the present invention;
Fig. 9 is a schematic structural diagram of a terminal according to Embodiment 4 of the present invention.
Embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment one
With the development of computer technology, the content of information interaction is becoming richer and richer. To meet users' demand for information interaction, an embodiment of the present invention provides an information interaction method. Referring to Fig. 1, the method flow provided by this embodiment comprises:
101: Acquire audio data at the local terminal, and determine state information corresponding to the audio data.
As a preferred embodiment, acquiring the audio data at the local terminal comprises:
establishing a video session with the opposite terminal, and acquiring the audio data in the video session at the local terminal.
As an optional embodiment, determining the state information corresponding to the audio data comprises:
parsing the audio data to obtain corresponding text information, and determining the state information corresponding to the audio data according to the text information.
102: Acquire audio special-effect data and/or image special-effect data corresponding to the state information.
As an optional embodiment, acquiring the audio special-effect data and/or image special-effect data corresponding to the state information comprises:
acquiring the audio special-effect data corresponding to the state information from a pre-stored correspondence between state information and audio special-effect data, and/or acquiring the image special-effect data corresponding to the state information from a pre-stored correspondence between state information and image special-effect data.
As another optional embodiment, acquiring the audio special-effect data and/or image special-effect data corresponding to the state information comprises:
generating the corresponding audio special-effect data and/or image special-effect data according to the state information, and using the generated data as the acquired audio special-effect data and/or image special-effect data corresponding to the state information.
As an optional embodiment, the method further comprises:
setting and storing a correspondence between state information and audio special-effect data, and setting and storing a correspondence between state information and image special-effect data.
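The two acquisition variants (lookup in a stored correspondence, or generation when no stored entry exists) and the storage step can be sketched together. The dictionary-based store and the string-building "generation" below are placeholders for whatever storage medium and generation logic an implementation actually uses.

```python
audio_fx_by_state = {}   # pre-stored state -> audio special-effect data
image_fx_by_state = {}   # pre-stored state -> image special-effect data

def set_correspondence(state, audio_fx=None, image_fx=None):
    """Set and store the correspondences (e.g. from a settings interface)."""
    if audio_fx is not None:
        audio_fx_by_state[state] = audio_fx
    if image_fx is not None:
        image_fx_by_state[state] = image_fx

def acquire_effects(state):
    """First try the stored correspondences; otherwise generate and cache."""
    audio_fx = audio_fx_by_state.get(state)
    image_fx = image_fx_by_state.get(state)
    if audio_fx is None:
        audio_fx = f"generated {state} music"   # stand-in for real generation
        audio_fx_by_state[state] = audio_fx     # keep for later interactions
    if image_fx is None:
        image_fx = f"generated {state} image"
        image_fx_by_state[state] = image_fx
    return audio_fx, image_fx

set_correspondence("happy", audio_fx="cheerful music", image_fx="smiling face")
print(acquire_effects("happy"))     # found in the stored correspondence
print(acquire_effects("helpless"))  # generated, then cached for reuse
```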
103: Send the audio data to the opposite terminal, and send the audio special-effect data and/or the image special-effect data to the opposite terminal, so that the opposite terminal plays the audio data, and plays the audio special-effect data and/or displays the image special-effect data.
As an optional embodiment, before the opposite terminal plays the audio data and the audio special-effect data and/or displays the image special-effect data, the method further comprises:
displaying a special-effect exit button on the display interface of the local terminal;
and, if selection of the special-effect exit button is not detected, performing the step of sending the audio special-effect data and/or the image special-effect data to the opposite terminal.
In the method provided by this embodiment, audio data is acquired at the local terminal, and audio special-effect data and/or image special-effect data is acquired according to the state information corresponding to the audio data; after receiving them, the opposite terminal plays the audio data, and plays the audio special-effect data and/or displays the image special-effect data. Since the content of the information interaction is enriched without any extra operation, the interactive experience is improved.
Embodiment two
An embodiment of the present invention provides an information interaction method. For ease of understanding, and in combination with Embodiment 1 above, the method is explained in detail below taking as an example that the local terminal is terminal A, the opposite terminal is terminal B, and terminal A sends audio data to terminal B. Referring to Fig. 2, the method flow provided by this embodiment comprises:
201: Acquire audio data at terminal A.
The audio data may be acquired at terminal A in, but not limited to, the following scenario: a video session is established between terminal A and terminal B. In detail, terminal A sends a video-session invitation to terminal B; after terminal B receives the invitation, the video session between terminal A and terminal B is established. To carry out the information interaction, terminal A acquires audio data within the video session and uses it as the acquired audio data.
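The invite-and-accept handshake that establishes the session, and the constraint that audio is captured only within an established session, can be sketched as follows. The `Terminal` class and its always-accept policy are illustrative assumptions.

```python
class Terminal:
    """Toy terminal: holds at most one video session with a peer."""
    def __init__(self, name):
        self.name = name
        self.session = None

    def invite(self, other):
        """Send a video-session invitation; the session exists once accepted."""
        if other.accept(self):
            self.session = other.session = (self.name, other.name)
        return self.session

    def accept(self, caller):
        return True   # toy policy: the invited terminal always accepts

    def capture_audio(self):
        # The claimed method takes audio from within the video session.
        assert self.session is not None, "no video session established"
        return f"audio captured by {self.name}"

a, b = Terminal("A"), Terminal("B")
a.invite(b)
print(a.capture_audio())
```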
Terminal A may acquire the audio data in the video session by, but not limited to, a microphone. Terminal A is a terminal with video capability, such as a mobile phone or a tablet computer; this embodiment places no specific restriction on terminal A. The format of the audio data may be WMA (Windows Media Audio), AMR (Adaptive Multi-Rate), MP3 (Moving Picture Experts Group Audio Layer III), or the like; this embodiment places no specific restriction on the audio format.
Of course, in the video mode terminal A can acquire not only audio data but also image data, for example through a built-in camera. The image data format may be BMP (bitmap), JPEG (Joint Photographic Experts Group), or the like; likewise, this embodiment places no specific restriction on the image format.
202: Determine state information corresponding to the audio data.
Here, the state information is information that can reflect the user's current state. Under normal circumstances the user's speech reflects that state. For example, if the acquired audio data of the user means "don't know", it can reflect that the user's current state is helpless; if it means "unhappy", it can reflect that the user is currently gloomy. Therefore, after the audio data is obtained, the corresponding state information can be determined from it.
Specifically, the state information corresponding to the audio data may be determined in, but not limited to, the following way:
First step: parse the audio data to obtain the corresponding text information.
For the first step, the audio data may be parsed into text information in, but not limited to, the following way:
first, convert the audio data into an audio signal;
second, cut the audio signal into morpheme pieces using a pre-stored correspondence between audio signals and individual words;
third, use a specific algorithm to find, in the correspondence between audio signals and words, the words that match the cut morpheme pieces; these words are the text information obtained by parsing the audio data.
Second step: determine the state information corresponding to the audio data according to the text information.
For the second step, the way of determining the state information according to the text information includes, but is not limited to, looking up the state information corresponding to the text information in a pre-stored correspondence between text information and state information, and taking the found state information as the state information corresponding to the audio data.
For ease of understanding, the above process is explained in detail with a concrete example below.
For example, suppose the pre-stored correspondence between text information and state information is: happy - happy state, pained - pained state, don't know - helpless state. If parsing the audio data yields the text "happy", the state information found for "happy" in the pre-stored correspondence is "happy state"; if parsing yields "don't know", the state information found for "don't know" is "helpless state".
Further, since the pre-stored correspondence between text information and state information is the key to determining the state information corresponding to the audio data, the method provided in this embodiment needs to pre-store that correspondence, including but not limited to storing it in a corresponding storage medium.
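The two-step determination (parse audio to text, then map text to state) can be illustrated with the stored correspondence from the example above. The recognizer is replaced by a stub that splits a pre-tokenized string, since the patent names no specific recognition algorithm; `WORD_TO_STATE` mirrors the example mapping.

```python
# Pre-stored correspondence between text information and state information,
# mirroring the example: happy -> happy state, don't know -> helpless state.
WORD_TO_STATE = {
    "happy": "happy state",
    "pained": "pained state",
    "don't know": "helpless state",
}

def parse_audio(audio_signal, lexicon=WORD_TO_STATE):
    """Stub recognizer: cut the signal into pieces and keep the known words."""
    return [w for w in audio_signal.split("|") if w in lexicon]

def determine_state(audio_signal):
    """Second step: look the recognized text up in the stored correspondence."""
    for word in parse_audio(audio_signal):
        return WORD_TO_STATE[word]
    return None   # no stored correspondence matches

print(determine_state("don't know|today"))   # prints: helpless state
```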
203: Acquire audio special-effect data and/or image special-effect data corresponding to the state information.
Since the information interaction of conventional audio and image data cannot meet users' demand, in order to provide richer information interaction, the method of this embodiment, after determining the state information corresponding to the audio data, may further acquire the corresponding audio special-effect data or image special-effect data, or of course both. Here, the audio special-effect data corresponding to the state information is audio that can express the user's current state: for example, if the user is currently happy, the corresponding audio special-effect data is cheerful audio; if helpless, it is helpless-sounding audio. The image special-effect data corresponding to the state information may be expression data showing the user's current state, or advertising information related to that state: for example, if the user's current state is hungry, the corresponding image special-effect data may be a hungry expression picture, or an advertisement containing a food picture of a certain fast-food chain.
Specifically, the audio special-effect data and/or image special-effect data corresponding to the state information may be acquired in, but not limited to, the following two ways:
First way: acquire the audio special-effect data corresponding to the state information from a pre-stored correspondence between state information and audio special-effect data, and/or acquire the image special-effect data corresponding to the state information from a pre-stored correspondence between state information and image special-effect data.
For the first way, so that the corresponding special-effect data can be obtained from these pre-stored correspondences, the method of this embodiment may set and store in advance the correspondence between state information and audio special-effect data, and the correspondence between state information and image special-effect data.
The correspondence between state information and audio special-effect data may be set by, but not limited to, providing a settings interface on which the user sets the correspondence according to his or her own preference; after detecting the user's setting operation on the interface, the interaction apparatus sets the correspondence accordingly. The set correspondence may be pre-stored in, but not limited to, a storage medium such as memory or flash, and may be stored in, but not limited to, table or matrix form.
To present the stored correspondence between state information and audio special-effect data more intuitively, storage in table form is introduced below, as shown in Table 1:
Table 1

State information | Audio special-effect data
Helpless | Helpless special-effect music
Happy | Happy special-effect music
Pained | Pained special-effect music
The correspondence between state information and image special-effect data may likewise be set by, but not limited to, providing a settings interface on which the user sets the correspondence according to his or her own preference, the interaction apparatus setting the correspondence after detecting the setting operation on the interface. It may likewise be pre-stored in, but not limited to, a storage medium such as memory or flash, in table or matrix form, among others.
Storage of the correspondence between state information and image special-effect data in table form is introduced below, as shown in Table 2:
Table 2

State information | Image special-effect data
Helpless | Helpless expression
Happy | Happy expression
Pained | Grimace
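Tables 1 and 2 are naturally represented as mappings. A sketch assuming simple in-memory dictionaries (the patent also allows table or matrix form in memory or flash); the `want_audio`/`want_image` flags model the "and/or" choice of which effect kinds a given interaction needs.

```python
TABLE_1 = {  # state information -> audio special-effect data (Table 1)
    "helpless": "helpless special-effect music",
    "happy": "happy special-effect music",
    "pained": "pained special-effect music",
}
TABLE_2 = {  # state information -> image special-effect data (Table 2)
    "helpless": "helpless expression",
    "happy": "happy expression",
    "pained": "grimace",
}

def lookup(state, want_audio=True, want_image=True):
    """Fetch whichever special-effect kinds this interaction needs."""
    audio_fx = TABLE_1.get(state) if want_audio else None
    image_fx = TABLE_2.get(state) if want_image else None
    return audio_fx, image_fx

print(lookup("helpless"))
print(lookup("happy", want_image=False))
```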
It should be noted that the correspondence between state information and audio special-effect data and the correspondence between state information and image special-effect data need not be set every time the special-effect data is acquired. The setting process can be performed before the special-effect data corresponding to the state information is acquired for the first time; afterwards, the pre-set correspondences can be used directly. Of course, the setting step can be performed again whenever the pre-set correspondences need to be updated; this embodiment places no limit on the number of times the setting step is performed.
Further, once the correspondences are pre-stored: if the special-effect data needed in the current information interaction is audio special-effect data, it can be acquired from the pre-stored correspondence between state information and audio special-effect data; if it is image special-effect data, from the pre-stored correspondence between state information and image special-effect data; and if both are needed, from both correspondences respectively.
For ease of understanding, the above process is explained in detail below, taking the case where both audio special-effect data and image special-effect data need to be acquired as an example.
For example, suppose the pre-stored correspondence between state information and audio special-effect data is: helpless - helpless special-effect music, happy - cheerful special-effect music, pained - sorrowful special-effect music; and the pre-stored correspondence between state information and image special-effect data is: helpless - helpless expression, happy - happy expression, pained - grimace. If the state information determined from the audio data is "helpless", then "helpless special-effect music" is obtained from the first correspondence and "helpless expression" from the second; if the state information is "happy", then "cheerful special-effect music" and "happy expression" are obtained.
It should be noted that in the first way above, the acquired audio special-effect data and image special-effect data are the pre-stored data. When the special-effect data corresponding to the state information cannot be obtained from the pre-stored correspondences, the second way below can be used instead.
Second way: generate the corresponding audio special-effect data and/or image special-effect data according to the state information, and use the generated data as the acquired audio special-effect data and/or image special-effect data corresponding to the state information.
For the second way, to provide richer information interaction, the method of this embodiment may also generate the corresponding audio special-effect data and/or image special-effect data according to the state information, and then use the generated data as the acquired data. Specifically, if the interaction needs audio special-effect data, the corresponding audio special-effect data is generated from the state information; if it needs image special-effect data, the corresponding image special-effect data is generated; and if it needs both, both are generated.
Further, since the special-effect data generated from the state information will continue to be used in subsequent information interaction, the method of this embodiment, after generating the audio special-effect data and/or image special-effect data, also performs the step of storing the correspondence between the state information and the generated data, for example in a storage medium such as memory or flash. It should be noted that if a specific implementation uses only audio special-effect data, there is no need to store a correspondence between state information and image special-effect data; likewise, if it uses only image special-effect data, there is no need to store a correspondence between state information and audio special-effect data. When both kinds are used at the same time, both correspondences can be stored.
204: Send the audio data to terminal B, and send the audio special effect data and/or image special effect data to terminal B, so that terminal B plays the audio data, and plays the audio special effect data and/or displays the image special effect data.
Because the audio data, the audio special effect data, and/or the image special effect data have been obtained in steps 201 and 203 above, to realize the information interaction between terminal A and terminal B and to enrich its content, the method provided by this embodiment also sends the audio data, the audio special effect data, and/or the image special effect data to terminal B, so that terminal B plays the audio data, and plays the audio special effect data and/or displays the image special effect data.
Preferably, because not every user wishes to play special effect audio or display special effect image data during information interaction, to meet the needs of different users, the method provided by this embodiment, before sending the audio data, the audio special effect data, and/or the image special effect data to terminal B, also displays a special effect interaction exit button on the display interface of terminal A, so that the user can choose whether to carry out the richer information interaction. The content carried on the display interface of terminal A includes but is not limited to a display area for the image data, the special effect interaction exit button, and the like. Specifically, if it is not detected that the special effect interaction exit button has been selected, this indicates that the user on the terminal A side wishes to carry out the richer information interaction; in this case, the audio data, the audio special effect data, and/or the image special effect data are sent to terminal B, which plays the audio data, and plays the audio special effect data and/or displays the image special effect data. If it is detected that the special effect interaction exit button has been selected, this indicates that the user on the terminal A side does not wish to carry out the richer information interaction; in this case, the audio special effect data and/or image special effect data are not sent to terminal B, only the audio data is sent to the opposite terminal, and terminal B plays the audio data.
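The exit-button gating just described can be sketched as follows; `peer_send` and the button flag are illustrative assumptions, not part of the patent's disclosure:

```python
def send_interaction(peer_send, audio_data, audio_effect, image_effect,
                     exit_button_selected):
    """Always send the audio data; attach special effect data only when
    the special effect interaction exit button has NOT been selected."""
    payload = {"audio": audio_data}
    if not exit_button_selected:
        if audio_effect is not None:
            payload["audio_effect"] = audio_effect
        if image_effect is not None:
            payload["image_effect"] = image_effect
    peer_send(payload)
    return payload

sent = send_interaction(lambda p: None, b"hello.pcm", b"laugh.wav",
                        b"confetti.png", exit_button_selected=True)
print(sorted(sent))  # only the "audio" key remains when effects are exited
```

When the button is not selected, the same call would include the `audio_effect` and `image_effect` entries alongside the audio data.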
For ease of understanding, the above process is explained in detail below with a concrete example.
Fig. 3 shows the display interface of terminal A; reference 1 in Fig. 3 is the display area of the image data on the display interface of terminal A, and reference 2 in Fig. 3 is the special effect interaction exit button on the display interface of terminal A. If, before the audio data, the audio special effect data, and/or the image special effect data are sent to the opposite terminal, it is detected that the exit button 2 on the display interface of terminal A has been selected, only the audio data is sent to terminal B, and terminal B then plays the audio data. If, before the audio data, the audio special effect data, and/or the image special effect data are sent to terminal B, it is not detected that the exit button 2 on the display interface of terminal A has been selected, the audio data, the audio special effect data, and/or the image special effect data are sent to terminal B, and terminal B then plays the audio data, and plays the audio special effect data and/or displays the image special effect data. Of course, terminal A may also play the audio data, and play the audio special effect data and/or display the image special effect data.
It should be noted that the above process describes obtaining audio data from terminal A, obtaining audio special effect data and/or image special effect data according to the state information corresponding to the audio data, and then playing the audio special effect data and/or displaying the image special effect data at terminal B. Of course, after receiving the audio special effect data and/or image special effect data sent by terminal A, terminal B may also obtain audio data on its own side, adopt the method of generating special effect data described above for the terminal A side, and send audio special effect data and/or image special effect data from terminal B to terminal A. The specific principle is consistent with the above process and is not repeated here.
With the method provided by this embodiment, audio data is obtained from the local terminal, and audio special effect data and/or image special effect data are obtained according to the state information corresponding to the audio data; after the opposite terminal receives the audio data and the audio special effect data and/or image special effect data, the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data. Because the content of the information interaction can be enriched without extra operations, the interaction effect is better.
Embodiment three
Referring to Fig. 4, an embodiment of the present invention provides an information interaction device, including:
a first acquisition module 401, configured to obtain audio data from a local terminal;
a determination module 402, configured to determine state information corresponding to the audio data;
a second acquisition module 403, configured to obtain audio special effect data and/or image special effect data corresponding to the state information;
a first sending module 404, configured to send the audio data to an opposite terminal, so that the opposite terminal plays the audio data; and
a second sending module 405, configured to send the audio special effect data and/or image special effect data to the opposite terminal, so that the opposite terminal plays the audio special effect data and/or displays the image special effect data.
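The module division above could be sketched as plain objects wired together; the class and method names below are assumptions for illustration only and do not appear in the patent:

```python
class InformationInteractionDevice:
    """Illustrative composition of the modules 401-405 described above."""

    def __init__(self, acquire, determine, get_effects, send):
        self.acquire = acquire          # first acquisition module 401
        self.determine = determine      # determination module 402
        self.get_effects = get_effects  # second acquisition module 403
        self.send = send                # sending modules 404/405

    def interact(self):
        audio = self.acquire()
        state = self.determine(audio)
        effects = self.get_effects(state)
        self.send({"audio": audio, **effects})
        return state

sent = []
device = InformationInteractionDevice(
    acquire=lambda: b"pcm",
    determine=lambda audio: "happy",
    get_effects=lambda state: {"audio_effect": b"laugh.wav"},
    send=sent.append,
)
print(device.interact())  # happy
```

As the closing note of this document observes, the functions could equally be divided among different modules as needed; this composition is only one possible arrangement.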
Referring to Fig. 5, the first acquisition module 401 includes:
an establishing unit 4011, configured to establish a video session with the opposite terminal; and
an acquiring unit 4012, configured to obtain the audio data in the video session from the local terminal.
Referring to Fig. 6, the determination module 402 includes:
a parsing unit 4021, configured to parse the audio data to obtain corresponding text information; and
a determining unit 4022, configured to determine, according to the text information, the state information corresponding to the audio data.
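A minimal sketch of how the determining unit might map parsed text to a state label via keyword matching; the keyword table and state names are illustrative assumptions, and a real system would use speech recognition plus a richer classifier:

```python
# Hypothetical keyword table mapping words in the parsed text to states.
STATE_KEYWORDS = {
    "happy": ["haha", "great", "awesome"],
    "sad": ["sigh", "unfortunately", "sorry"],
}

def determine_state(text):
    """Return the first state whose keywords appear in the text,
    or "neutral" when nothing matches."""
    lowered = text.lower()
    for state, keywords in STATE_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return state
    return "neutral"

print(determine_state("Haha, that is awesome"))  # happy
print(determine_state("The weather report"))     # neutral
```

The returned state label would then index the correspondence between state information and special effect data held by the second acquisition module.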
As an optional implementation, the second acquisition module 403 is configured to obtain the audio special effect data corresponding to the state information from a prestored correspondence between state information and audio special effect data, and/or obtain the image special effect data corresponding to the state information from a prestored correspondence between state information and image special effect data.
As another optional implementation, the second acquisition module 403 is configured to generate corresponding audio special effect data and/or image special effect data according to the state information, and use the generated audio special effect data and/or image special effect data as the audio special effect data and/or image special effect data corresponding to the obtained state information.
Referring to Fig. 7, the device further includes:
a processing module 406, configured to set and store a correspondence between state information and audio special effect data, and to set and store a correspondence between state information and image special effect data.
Referring to Fig. 8, the device further includes:
a display module 407, configured to display a special effect interaction exit button on the display interface of the local terminal;
the second sending module 405 is configured to perform the step of sending the audio special effect data and/or image special effect data to the opposite terminal when it is not detected that the special effect interaction exit button has been selected.
With the device provided by this embodiment, audio data is obtained from the local terminal, and audio special effect data and/or image special effect data are obtained according to the state information corresponding to the audio data; after the opposite terminal receives the audio data and the audio special effect data and/or image special effect data, the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data. Because the content of the information interaction can be enriched without extra operations, the interaction effect is better.
Embodiment four
Fig. 9 shows a schematic structural diagram of the terminal involved in this embodiment of the present invention; this terminal may be used to implement the information interaction method provided in the foregoing embodiments. Specifically:
The terminal 900 may include components such as a radio frequency (RF) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless fidelity (WiFi) module 170, a processor 180 including one or more processing cores, and a power supply 190. A person skilled in the art will understand that the terminal structure shown in Fig. 9 does not constitute a limitation on the terminal, which may include more or fewer components than shown, combine some components, or use a different component arrangement. Specifically:
The RF circuit 110 may be used to receive and send messages, or to receive and send signals during a call; in particular, after receiving downlink information from a base station, it delivers the information to the one or more processors 180 for processing, and it sends uplink data to the base station. Typically, the RF circuit 110 includes but is not limited to an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 120 may be used to store software programs and modules; the processor 180 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to use of the terminal 900 (such as audio data and a phone book). In addition, the memory 120 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component. Correspondingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input digit or character information, and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and another input device 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touchpad, may collect a touch operation by the user on or near it (such as an operation performed by the user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transfers the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends the coordinates to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch-sensitive surface 131 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch-sensitive surface 131, the input unit 130 may further include the other input device 132, which may include but is not limited to one or more of a physical keyboard, function keys (such as a volume control key or a power key), a trackball, a mouse, and a joystick.
The display unit 140 may be used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal 900; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; after detecting a touch operation on or near it, the touch-sensitive surface 131 transfers the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in Fig. 9 the touch-sensitive surface 131 and the display panel 141 implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 900 may further include at least one sensor 150, such as an optical sensor, a motion sensor, or another sensor. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal 900 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the mobile phone attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-identification related functions (such as a pedometer or knock detection). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the terminal 900, and are not described herein again.
The audio circuit 160, a speaker 161, and a microphone 162 may provide an audio interface between the user and the terminal 900. The audio circuit 160 may transmit an electrical signal, converted from received audio data, to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data. After the audio data is output to the processor 180 for processing, it may be sent through the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may further include an earphone jack to provide communication between a peripheral earphone and the terminal 900.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 900 can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although Fig. 9 shows the WiFi module 170, it can be understood that the module is not a necessary component of the terminal 900 and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 180 is the control center of the terminal 900; it connects all parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the terminal 900 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby performing overall monitoring of the mobile phone. Optionally, the processor 180 may include one or more processing cores; optionally, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 180.
The terminal 900 further includes the power supply 190 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 190 may further include one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown, the terminal 900 may further include a camera, a Bluetooth module, and the like, which are not described herein again. Specifically, in this embodiment, the display unit of the terminal 900 is a touch-screen display, and the terminal 900 further includes a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by the one or more processors. The one or more programs include instructions for performing the following operations:
obtaining audio data from a local terminal, and determining state information corresponding to the audio data;
obtaining audio special effect data and/or image special effect data corresponding to the state information; and
sending the audio data to an opposite terminal, and sending the audio special effect data and/or image special effect data to the opposite terminal, so that the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data.
Assuming the foregoing is a first possible implementation, in a second possible implementation provided on the basis of the first possible implementation, the memory of the terminal further includes instructions for performing the following operations: the obtaining audio data from a local terminal includes:
establishing a video session with the opposite terminal, and obtaining the audio data in the video session from the local terminal.
In a third possible implementation provided on the basis of the first or second possible implementation, the memory of the terminal further includes instructions for performing the following operations:
the determining state information corresponding to the audio data includes:
parsing the audio data to obtain corresponding text information, and determining the state information corresponding to the audio data according to the text information.
In a fourth possible implementation provided on the basis of any one of the first to third possible implementations, the memory of the terminal further includes instructions for performing the following operations: the obtaining audio special effect data and/or image special effect data corresponding to the state information includes:
obtaining the audio special effect data corresponding to the state information from a prestored correspondence between state information and audio special effect data, and/or obtaining the image special effect data corresponding to the state information from a prestored correspondence between state information and image special effect data.
In a fifth possible implementation provided on the basis of any one of the first to fourth possible implementations, the memory of the terminal further includes instructions for performing the following operations: the obtaining audio special effect data and/or image special effect data corresponding to the state information includes:
generating corresponding audio special effect data and/or image special effect data according to the state information, and using the generated audio special effect data and/or image special effect data as the audio special effect data and/or image special effect data corresponding to the obtained state information.
In a sixth possible implementation provided on the basis of any one of the first to fifth possible implementations, the memory of the terminal further includes instructions for performing the following operations: setting and storing a correspondence between state information and audio special effect data, and setting and storing a correspondence between state information and image special effect data.
In a seventh possible implementation provided on the basis of any one of the first to sixth possible implementations, the memory of the terminal further includes instructions for performing the following operations: before the opposite terminal plays the audio data, plays the audio special effect data, and displays the image special effect data, the method further includes:
displaying a special effect interaction exit button on the display interface of the local terminal; and
if it is not detected that the special effect interaction exit button has been selected, performing the step of sending the audio special effect data and/or image special effect data to the opposite terminal.
With the terminal provided by this embodiment of the present invention, audio data is obtained from the local terminal, and audio special effect data and/or image special effect data are obtained according to the state information corresponding to the audio data; after the opposite terminal receives the audio data and the audio special effect data and/or image special effect data, the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data. Because the content of the information interaction can be enriched without extra operations, the interaction effect is better.
Embodiment five
An embodiment of the present invention further provides a computer-readable storage medium, which may be the computer-readable storage medium included in the memory in the foregoing embodiments, or which may exist separately without being assembled into a terminal. The computer-readable storage medium stores one or more programs, which are used by one or more processors to perform an information interaction method, the method including:
obtaining audio data from a local terminal, and determining state information corresponding to the audio data;
obtaining audio special effect data and/or image special effect data corresponding to the state information; and
sending the audio data to an opposite terminal, and sending the audio special effect data and/or image special effect data to the opposite terminal, so that the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data.
Assuming the foregoing is a first possible implementation, in a second possible implementation provided on the basis of the first possible implementation, the storage medium further includes instructions for performing the following operations: the obtaining audio data from a local terminal includes:
establishing a video session with the opposite terminal, and obtaining the audio data in the video session from the local terminal.
In a third possible implementation provided on the basis of the first or second possible implementation, the storage medium further includes instructions for performing the following operations:
the determining state information corresponding to the audio data includes:
parsing the audio data to obtain corresponding text information, and determining the state information corresponding to the audio data according to the text information.
In a fourth possible implementation provided on the basis of any one of the first to third possible implementations, the storage medium further includes instructions for performing the following operations: the obtaining audio special effect data and/or image special effect data corresponding to the state information includes:
obtaining the audio special effect data corresponding to the state information from a prestored correspondence between state information and audio special effect data, and/or obtaining the image special effect data corresponding to the state information from a prestored correspondence between state information and image special effect data.
In a fifth possible implementation provided on the basis of any one of the first to fourth possible implementations, the storage medium further includes instructions for performing the following operations: the obtaining audio special effect data and/or image special effect data corresponding to the state information includes:
generating corresponding audio special effect data and/or image special effect data according to the state information, and using the generated audio special effect data and/or image special effect data as the audio special effect data and/or image special effect data corresponding to the obtained state information.
In a sixth possible implementation provided on the basis of any one of the first to fifth possible implementations, the storage medium further includes instructions for performing the following operations: setting and storing a correspondence between state information and audio special effect data, and setting and storing a correspondence between state information and image special effect data.
In a seventh possible implementation provided on the basis of any one of the first to sixth possible implementations, the storage medium further includes instructions for performing the following operations: before the opposite terminal plays the audio data, plays the audio special effect data, and displays the image special effect data, the method further includes:
displaying a special effect interaction exit button on the display interface of the local terminal; and
if it is not detected that the special effect interaction exit button has been selected, performing the step of sending the audio special effect data and/or image special effect data to the opposite terminal.
With the computer-readable storage medium provided by this embodiment of the present invention, audio data is obtained from the local terminal, and audio special effect data and/or image special effect data are obtained according to the state information corresponding to the audio data; after the opposite terminal receives the audio data and the audio special effect data and/or image special effect data, the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data. Because the content of the information interaction can be enriched without extra operations, the interaction effect is better.
Embodiment six
An embodiment of the present invention provides a graphical user interface, used on a terminal that displays information interaction; the terminal performing the operations includes a touch-screen display, a memory, and one or more processors for executing one or more programs. The graphical user interface includes operations of:
obtaining audio data and image data from the local terminal;
determining state information corresponding to the audio data;
obtaining audio special effect data and image special effect data corresponding to the state information; and
sending the audio data, the audio special effect data, the image data, and the image special effect data to the opposite terminal, so that the local terminal and the opposite terminal play the audio data and the audio special effect data, and display the image data and the image special effect data.
In summary, with the graphical user interface provided by this embodiment of the present invention, audio data is obtained from the local terminal, and audio special effect data and/or image special effect data are obtained according to the state information corresponding to the audio data; after the opposite terminal receives the audio data and the audio special effect data and/or image special effect data, the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data. Because the content of the information interaction can be enriched without extra operations, the interaction effect is better.
It should be noted that when the information interaction device provided by the foregoing embodiments performs information interaction, the division of the foregoing functional modules is merely used as an example for description; in practical applications, the foregoing functions may be allocated to different functional modules as needed, that is, the internal structure of the information interaction device may be divided into different functional modules to complete all or some of the functions described above. In addition, the information interaction device provided by the foregoing embodiments and the method embodiments of information interaction belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
The sequence numbers of the foregoing embodiments of the present invention are merely for description, and do not represent the relative merits of the embodiments.
A person of ordinary skill in the art will understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention, and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. An information interaction method, characterized in that the method comprises:
obtaining audio data from a local terminal, and determining state information corresponding to the audio data;
obtaining audio special effect data and/or image special effect data corresponding to the state information; and
sending the audio data to an opposite terminal, and sending the audio special effect data and/or image special effect data to the opposite terminal, so that the opposite terminal plays the audio data, and plays the audio special effect data and/or displays the image special effect data.
2. The method according to claim 1, characterized in that the obtaining audio data from a local terminal comprises:
establishing a video session with the opposite terminal, and obtaining the audio data in the video session from the local terminal.
3. method according to claim 1, is characterized in that, describedly determines the state information that described voice data is corresponding, comprising:
Described voice data is resolved and obtains corresponding Word message, and determine according to described Word message the state information that described voice data is corresponding.
4. method according to claim 1, is characterized in that, the audio frequency special effects data that described acquisition state information is corresponding and/or image special effect data, comprising:
From the state information prestored with obtain audio frequency special effects data corresponding to state information the corresponding relation of audio frequency special effects data, and/or from the state information prestored with obtain image special effect data corresponding to state information the corresponding relation of image special effect data.
5. method according to claim 1, is characterized in that, the audio frequency special effects data that described acquisition state information is corresponding and/or image special effect data, comprising:
Corresponding audio frequency special effects data and/or image special effect data are generated according to described state information, and using the audio frequency special effects data of generation and/or image special effect data as audio frequency special effects data corresponding to the state information got and/or image special effect data.
6. the method according to claim 4 or 5, is characterized in that, described method, also comprises:
Arrange and the corresponding relation of storaging state information and audio frequency special effects data, arrange and the corresponding relation of storaging state information and image special effect data.
7. method according to claim 1, is characterized in that, describedly, also to be comprised before showing described image special effect data by opposite end playing audio data and audio frequency special effects data:
The mutual exit button of special display effect on the display interface of local terminal;
If do not detect, the mutual exit button of described special efficacy is selected, then perform the step described audio frequency special effects data and/or image special effect data being sent to opposite end.
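As a non-authoritative illustration (not part of the claims), the flow of claims 1, 3 and 4 — parsing audio into text, mapping the text to state information, and looking up special effect data in a prestored correspondence — could be sketched as follows. All names (`EFFECTS`, `STATE_KEYWORDS`, `determine_state`, `interact`) and the keyword-based state detection are hypothetical assumptions, since the claims do not specify an implementation:

```python
# Illustrative sketch only -- not part of the claims. All names and the
# keyword-based state detection are hypothetical assumptions.

# Prestored correspondence between state information and special effect
# data (claims 4 and 6).
EFFECTS = {
    "happy": {"audio": "applause.wav", "image": "confetti.png"},
    "sad":   {"audio": "violin.wav",   "image": "rain.png"},
}

# Keywords used to infer state information from the text information
# parsed out of the audio data (claim 3).
STATE_KEYWORDS = {
    "great": "happy", "haha": "happy",
    "sorry": "sad", "unfortunately": "sad",
}

def determine_state(text):
    """Determine the state information corresponding to parsed text."""
    for word in text.lower().split():
        if word in STATE_KEYWORDS:
            return STATE_KEYWORDS[word]
    return None

def interact(text_from_audio):
    """Build the payload that would be sent to the opposite terminal (claim 1)."""
    payload = {"audio_data": text_from_audio}
    state = determine_state(text_from_audio)
    if state in EFFECTS:
        payload["audio_effect"] = EFFECTS[state]["audio"]
        payload["image_effect"] = EFFECTS[state]["image"]
    return payload
```

When no state is detected, only the audio data itself is forwarded, matching the "and/or" phrasing of claim 1.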
8. An information interaction device, characterized in that the device comprises:
a first acquisition module, configured to acquire audio data from a local terminal;
a determination module, configured to determine state information corresponding to the audio data;
a second acquisition module, configured to acquire audio special effect data and/or image special effect data corresponding to the state information;
a first sending module, configured to send the audio data to an opposite terminal, so that the opposite terminal plays the audio data; and
a second sending module, configured to send the audio special effect data and/or the image special effect data to the opposite terminal, so that the opposite terminal plays the audio special effect data and/or displays the image special effect data.
9. The device according to claim 8, characterized in that the first acquisition module comprises:
an establishing unit, configured to establish a video session with the opposite terminal; and
an acquiring unit, configured to acquire the audio data in the video session from the local terminal.
10. The device according to claim 8, characterized in that the determination module comprises:
a parsing unit, configured to parse the audio data to obtain corresponding text information; and
a determining unit, configured to determine the state information corresponding to the audio data according to the text information.
11. The device according to claim 8, characterized in that the second acquisition module is configured to acquire the audio special effect data corresponding to the state information from a prestored correspondence between state information and audio special effect data, and/or to acquire the image special effect data corresponding to the state information from a prestored correspondence between state information and image special effect data.
12. The device according to claim 8, characterized in that the second acquisition module is configured to generate corresponding audio special effect data and/or image special effect data according to the state information, and to use the generated audio special effect data and/or image special effect data as the audio special effect data and/or image special effect data corresponding to the state information.
13. The device according to claim 11 or 12, characterized in that the device further comprises:
a processing module, configured to set and store the correspondence between state information and audio special effect data, and to set and store the correspondence between state information and image special effect data.
14. The device according to claim 8, characterized in that the device further comprises:
a display module, configured to display a special effect interaction exit button on a display interface of the local terminal;
wherein the sending module is configured to perform the step of sending the audio special effect data and/or the image special effect data to the opposite terminal when no selection of the special effect interaction exit button is detected.
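As a second non-authoritative illustration (not part of the claims), the gate of claim 14 — special effect data is sent to the opposite terminal only while the special effect interaction exit button has not been selected — could be sketched as a minimal class. The class, attribute, and method names are hypothetical:

```python
# Illustrative sketch only -- not part of the claims. All names are
# hypothetical; `outbox` stands in for data actually sent to the peer.

class Device:
    def __init__(self, effects):
        self.effects = effects        # prestored correspondence (claim 13)
        self.exit_selected = False    # exit-button state (claim 14)
        self.outbox = []              # effects "sent" to the opposite terminal

    def send_effects(self, state):
        """Send the effect for `state` unless the exit button was selected."""
        if self.exit_selected:
            # Claim 14: no special effect data is sent once the special
            # effect interaction exit button has been selected.
            return False
        effect = self.effects.get(state)
        if effect is None:
            return False
        self.outbox.append(effect)
        return True
```

Selecting the exit button thus suppresses only the special effect data; the audio data itself (first sending module, claim 8) would still be forwarded.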
CN201410098843.XA 2014-03-17 2014-03-17 Information interaction method and device Pending CN104836979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410098843.XA CN104836979A (en) 2014-03-17 2014-03-17 Information interaction method and device

Publications (1)

Publication Number Publication Date
CN104836979A true CN104836979A (en) 2015-08-12

Family

ID=53814592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410098843.XA Pending CN104836979A (en) 2014-03-17 2014-03-17 Information interaction method and device

Country Status (1)

Country Link
CN (1) CN104836979A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090002479A1 (en) * 2007-06-29 2009-01-01 Sony Ericsson Mobile Communications Ab Methods and terminals that control avatars during videoconferencing and other communications
CN101789990A (en) * 2009-12-23 2010-07-28 宇龙计算机通信科技(深圳)有限公司 Method and mobile terminal for judging emotion of opposite party in conservation process
CN101917585A (en) * 2010-08-13 2010-12-15 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for regulating video information sent from visual telephone to opposite terminal
CN103297742A (en) * 2012-02-27 2013-09-11 联想(北京)有限公司 Data processing method, microprocessor, communication terminal and server

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028119A (en) * 2016-05-30 2016-10-12 徐文波 Multimedia special effect customizing method and device
CN106028119B (en) * 2016-05-30 2019-07-19 徐文波 The customizing method and device of multimedia special efficacy
CN107864357A (en) * 2017-09-28 2018-03-30 努比亚技术有限公司 Video calling special effect controlling method, terminal and computer-readable recording medium
CN113450804A (en) * 2021-06-23 2021-09-28 深圳市火乐科技发展有限公司 Voice visualization method and device, projection equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN104869468A (en) Method and apparatus for displaying screen information
CN104133624B (en) Web animation display packing, device and terminal
CN104978115A (en) Content display method and device
CN104243671A (en) Volume adjustment method and device and electronic device
CN103179026B (en) Communication means in user interactive system, system and server and client side
CN104519404A (en) Graphics interchange format file playing method and device
CN104426962A (en) Multi-terminal binding method, binding server, terminal and multi-terminal binding system
CN104683456A (en) Service processing method, server and terminal
CN103716331A (en) Method, terminal, server and system for numerical value transfer
CN105208056A (en) Information exchange method and terminal
CN104298491A (en) Message processing method and device
CN105187733A (en) Video processing method, device and terminal
CN104869465A (en) Video playing control method and device
CN105094809A (en) Combined picture layout modification method and device and terminal equipment
CN103294442B (en) A kind of method of playing alert tones, device and terminal device
CN105516784A (en) Virtual good display method and device
CN105447124A (en) Virtual article sharing method and device
CN104954159A (en) Network information statistics method and device
CN104602135A (en) Method and device for controlling full screen play
CN103945241A (en) Streaming data statistical method, system and related device
CN104539571A (en) Information interaction method, identity authentication method, server and terminal
CN103607431B (en) Mobile terminal resource processing method, device and equipment
CN105530239A (en) Multimedia data obtaining method and device
CN104378755A (en) Terminal interaction method and device
CN104598542A (en) Display method and device for multimedia information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150812
