CN105357171A - Communication method and terminal - Google Patents

Communication method and terminal

Info

Publication number
CN105357171A
CN105357171A (application CN201410416385.XA)
Authority
CN
China
Prior art keywords
terminal
coded data
opposite end
behavioural information
behavior coded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410416385.XA
Other languages
Chinese (zh)
Inventor
尚国强
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201410416385.XA priority Critical patent/CN105357171A/en
Priority to PCT/CN2014/089073 priority patent/WO2015117383A1/en
Publication of CN105357171A publication Critical patent/CN105357171A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2203/00Aspects of automatic or semi-automatic exchanges
    • H04M2203/25Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service
    • H04M2203/251Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service where a voice mode or a visual mode can be used interchangeably
    • H04M2203/252Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service where a voice mode or a visual mode can be used interchangeably where a voice mode is enhanced with visual information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/60Context-dependent security
    • H04W12/68Gesture-dependent or behaviour-dependent

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a communication method applied to a terminal having a network call function. The communication method comprises the following steps: during a call between the terminal and an opposite terminal, obtaining a preset avatar and displaying it on the interactive interface of the call on the terminal; receiving first behavior coded data sent by the opposite terminal; and causing the avatar to perform a dynamic behavior corresponding to the first behavior coded data. The invention further discloses a terminal. The invention enhances interaction between users in a voice call while reducing the cost of communication and saving bandwidth.

Description

Call method and terminal
Technical field
The present invention relates to the field of communication technologies, and in particular to a call method and a terminal.
Background technology
With the development of network communication technologies and the improvement of terminal hardware, Internet applications have become increasingly widespread and the services involved increasingly numerous. In a traditional voice call, a voice channel is established between two terminals and audio data is then transmitted over it. This mode of calling is rather monotonous: it lacks interaction between users and is not intuitive, and as social applications diversify it no longer meets users' expectations. Video calling, meanwhile, is of mediocre quality: a high-quality video call consumes considerable radio resources, and since the maximum bandwidth in the 3G era is only about 64K, video quality clearly cannot meet users' requirements, the user experience is poor, and the cost is also high.
The foregoing is provided only to aid understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main purpose of the present invention is to enhance interaction between users in a voice call while reducing the cost of the call.
To achieve the above object, the present invention provides a call method applied to a terminal having a network call function, the call method comprising the following steps:
during a call between the terminal and an opposite terminal, obtaining a preset avatar and displaying it on the interactive interface of the call;
receiving first behavior coded data sent by the opposite terminal; and
causing the avatar to perform a dynamic behavior corresponding to the first behavior coded data.
Preferably, the step of causing the avatar to perform the dynamic behavior corresponding to the first behavior coded data comprises:
decoding the first behavior coded data to obtain behavior information corresponding to the first behavior coded data;
matching the behavior information against behavior information prestored in the terminal; and
when the match succeeds, causing the avatar to perform the dynamic behavior corresponding to the behavior information.
Preferably, the call method further comprises:
obtaining behavior information of the terminal, encoding the behavior information to obtain second behavior coded data, and sending the second behavior coded data to the opposite terminal.
Preferably, the call method further comprises:
obtaining voice information of the opposite terminal, and controlling the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lips with the speech.
Preferably, the step of receiving the first behavior coded data sent by the opposite terminal comprises:
receiving, through at least one data channel established in advance, the first behavior coded data sent by the opposite terminal.
In addition, to achieve the above object, the present invention further provides a terminal, the terminal comprising:
a display module, configured to obtain a preset avatar and display it on the interactive interface of a call while the terminal is in a call with an opposite terminal;
a receiving module, configured to receive first behavior coded data sent by the opposite terminal; and
an execution module, configured to cause the avatar to perform a dynamic behavior corresponding to the first behavior coded data.
Preferably, the execution module comprises:
a decoding unit, configured to decode the first behavior coded data to obtain behavior information corresponding to the first behavior coded data;
a matching unit, configured to match the behavior information against behavior information prestored in the terminal; and
an execution unit, configured to cause the avatar to perform the dynamic behavior corresponding to the behavior information when the match succeeds.
Preferably, the terminal further comprises:
a sending module, configured to obtain behavior information of the terminal, encode the behavior information to obtain second behavior coded data, and send the second behavior coded data to the opposite terminal.
Preferably, the terminal further comprises:
a synchronization module, configured to obtain voice information of the opposite terminal and control the lips of the avatar to perform actions corresponding to the voice information, so as to synchronize the lips with the speech.
Preferably, the receiving module is specifically configured to receive, through at least one data channel established in advance, the first behavior coded data sent by the opposite terminal.
With the call method and terminal of the present invention, when the terminal is in a voice call with the opposite terminal, a preset avatar is displayed on the terminal's interactive interface; when first behavior coded data sent by the opposite terminal is received, the avatar performs the corresponding dynamic behavior, including expression behaviors and body-action behaviors. Compared with a traditional voice call, this mode of calling adds interaction between users, allows intuitive expression, and makes the call more engaging; compared with a video call, it saves bandwidth while achieving a similar effect, thereby reducing the cost of the call.
Brief description of the drawings
Fig. 1 is a flow chart of a first embodiment of the call method of the present invention;
Fig. 2 is a detailed flow chart of the step in Fig. 1 of causing the avatar to perform the dynamic behavior corresponding to the first behavior coded data;
Fig. 3 is a flow chart of a second embodiment of the call method of the present invention;
Fig. 4 is a flow chart of a third embodiment of the call method of the present invention;
Fig. 5 is a functional block diagram of a first embodiment of the terminal of the present invention;
Fig. 6 is a detailed functional block diagram of the execution module in Fig. 5;
Fig. 7 is a functional block diagram of a second embodiment of the terminal of the present invention;
Fig. 8 is a functional block diagram of a third embodiment of the terminal of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
The present invention provides a call method applied to a terminal having a network call function. Referring to Fig. 1, in one embodiment the call method comprises:
Step S101: during a call between the terminal and an opposite terminal, obtaining a preset avatar and displaying it on the interactive interface of the call.
In this embodiment, both the terminal and the opposite terminal have a network call function; each may be, for example, a computer or a smartphone.
In this embodiment, the terminal and the opposite terminal place a call and, once the call is connected, establish a voice channel for the voice call; the voice channel may be established in the same way as in the prior art.
While the terminal and the opposite terminal are in a voice call, the interactive interface of the current call on each side displays an avatar that can change expression, change posture, and synchronize its lips with the speech. The interface may also provide options for the expressions or body actions that can be sent, as well as options for sending other messages, such as text messages.
In this embodiment, both the terminal and the opposite terminal hold an avatar template library, including cartoon avatars or avatars based on real faces. When making a voice call, the user can call up one of these avatars, and the avatar then performs the corresponding actions according to the expressions or body actions sent by the opposite terminal.
Step S102: receiving first behavior coded data sent by the opposite terminal.
In this embodiment, the opposite terminal can send the first behavior coded data to the terminal via the expression or action options provided on the interactive interface.
In this embodiment, the user of the opposite terminal selects from the expression or action options provided on the interactive interface. After the user chooses an expression or action, the opposite terminal encodes the chosen expression or action to obtain the first behavior coded data and then sends the first behavior coded data to the local terminal.
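The sender-side flow described above can be sketched as follows. The 4-bit codes reuse the examples given later in this description (laugh = 0001, grimace = 0010, etc.); the function and variable names are illustrative assumptions, not part of the patent:

```python
# Sketch of the opposite (sending) terminal: the user picks an expression or
# body action on the call's interactive interface, the selection is encoded
# as a short behavior code, and only that code is sent on the data channel.

BEHAVIOR_CODES = {
    "laugh": 0b0001, "grimace": 0b0010, "smile": 0b0011,   # expressions
    "shake_head": 0b1000, "nod": 0b1001, "hug": 0b1010,    # body actions
}

def encode_behavior(name: str) -> bytes:
    """Encode a selected behavior as a single-byte behavior code."""
    return bytes([BEHAVIOR_CODES[name]])

def send_behavior(channel: list, name: str) -> int:
    """Encode the user's selection and write it to the data channel."""
    payload = encode_behavior(name)
    channel.append(payload)   # stand-in for a real socket/channel send
    return len(payload)       # bytes on the wire: one, not a whole image
```

A single byte per behavior, rather than kilobytes for a picture, is the bandwidth saving this description refers to.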
In this embodiment, when the terminal and the opposite terminal establish a voice call, a data channel is established alongside the voice channel. This data channel carries the first behavior coded data sent by the opposite terminal to the terminal, as well as the behavior coded data sent by the terminal to the opposite terminal.
In this embodiment, the number of data channels depends on the actual data traffic or on the user's configuration: a single data channel may be established, or several. For example, when messages or other data are also sent during the voice call, multiple data channels may be needed.
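The channel setup just described can be sketched as a minimal session object; all class, method, and field names here are illustrative assumptions:

```python
# Minimal sketch of call setup: a voice channel is established first, and one
# or more data channels are opened alongside it depending on the user's
# configuration or on how much data (behavior codes, text messages, ...)
# actually needs to be carried.

class CallSession:
    def __init__(self):
        self.voice_channel = None
        self.data_channels = []

    def establish_voice(self):
        # Stand-in for prior-art voice channel setup (e.g. RTP negotiation).
        self.voice_channel = "voice"

    def open_data_channels(self, count: int = 1) -> int:
        # One channel suffices for behavior codes alone; more may be opened
        # when text or other messages are also sent during the call.
        for i in range(count):
            self.data_channels.append(f"data-{i}")
        return len(self.data_channels)

session = CallSession()
session.establish_voice()
opened = session.open_data_channels(2)   # e.g. behavior codes + text
```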
In this embodiment, the first behavior coded data includes expression data and body-action data. The first behavior coded data is obtained through a specific encoding, and the terminal and the opposite terminal exchange behavior coded data instead of transmitting pictures directly; in this way, the amount of data transmitted is reduced and bandwidth is saved.
Step S103: causing the avatar to perform the dynamic behavior corresponding to the first behavior coded data.
In this embodiment, the dynamic behaviors corresponding to the first behavior coded data include expression behaviors and body-action behaviors. For example, if the opposite terminal sends the behavior coded data for a grimace, the avatar on the terminal performs a grimace, which is an expression behavior; if the opposite terminal sends the behavior coded data for shaking the head, the avatar on the terminal shakes its head, which is a body-action behavior.
Compared with the prior art, in this embodiment, when the terminal is in a voice call with the opposite terminal, a preset avatar is displayed on the terminal's interactive interface; when first behavior coded data sent by the opposite terminal is received, the avatar performs the corresponding dynamic behavior, including expression behaviors and body-action behaviors. Compared with a traditional voice call, this mode of calling adds interaction between users, allows intuitive expression, and makes the call more engaging; compared with a video call, it saves bandwidth while achieving a similar effect, thereby reducing the cost of the call.
In a preferred embodiment, as shown in Fig. 2, on the basis of the embodiment of Fig. 1, step S103 comprises:
Step S1031: decoding the first behavior coded data to obtain behavior information corresponding to the first behavior coded data;
Step S1032: matching the behavior information against behavior information prestored in the terminal;
Step S1033: when the match succeeds, causing the avatar to perform the dynamic behavior corresponding to the behavior information.
In this embodiment, the user selects from the expression or body-action options on the interactive interface of the opposite terminal; the chosen expression or body action is encoded to obtain the first behavior coded data, which is sent to the terminal. The terminal then decodes the first behavior coded data to obtain the corresponding behavior information. For example, expressions such as laughing, grimacing, and smiling may be encoded as 0001, 0010, 0011, and so on, while body actions such as shaking the head, nodding, and hugging may be encoded as 1000, 1001, 1010, and so on. After receiving the first behavior coded data, the terminal decodes it by reversing the encoding to obtain the corresponding behavior information: for example, decoding 0010 yields a grimace, and decoding 1000 yields a head shake.
In this embodiment, behavior coded data is transmitted between the terminal and the opposite terminal instead of pictures; in this way, the amount of data transmitted is reduced and bandwidth is saved.
In this embodiment, the behavior information may be categorized, with behavior information of the same category stored in the same database; for example, the database may include an expression template library, a body-action template library, and so on. The received behavior information is matched against the behavior information prestored in the terminal's database; if the match succeeds, the corresponding behavior information is retrieved from the database and the avatar on the interactive interface is driven to perform it.
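The matching step can be sketched as a lookup in per-category prestored libraries; the data structures and names here are illustrative assumptions:

```python
# Sketch of the matching step: decoded behavior information is looked up in
# the terminal's prestored libraries (one per category, e.g. an expression
# template library and a body-action template library). Only on a successful
# match is the avatar driven to perform the behavior.

PRESTORED = {
    "expression": {"laugh", "grimace", "smile"},
    "action": {"shake_head", "nod", "hug"},
}

def perform_if_matched(avatar_log: list, category: str, behavior: str) -> bool:
    """Match behavior info against the prestored library; drive the avatar on success."""
    if behavior in PRESTORED.get(category, ()):
        avatar_log.append((category, behavior))   # stand-in for the animation
        return True
    return False   # unknown behavior: the avatar does nothing
```

Storing each category in its own library keeps the lookup cheap and lets unknown or corrupted codes fail safely without animating the avatar.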
In a preferred embodiment, as shown in Fig. 3, on the basis of the embodiment of Fig. 1, the call method further comprises:
Step S104: obtaining behavior information of the terminal, encoding the behavior information to obtain second behavior coded data, and sending the second behavior coded data to the opposite terminal.
In this embodiment, the opposite terminal can send behavior coded data to the terminal, and the terminal can likewise send behavior coded data to the opposite terminal, making the interaction two-way.
In this embodiment, the user of the local terminal selects from the expression or action options provided on the interactive interface; after the user chooses an expression or action, the local terminal encodes the chosen expression or action to obtain the second behavior coded data. The difference from the first behavior coded data is the direction of transmission: the second behavior coded data is sent by the local terminal to the opposite terminal, while the first behavior coded data is sent by the opposite terminal to the local terminal.
The terminal may send behavior coded data to the opposite terminal after step S103 or after any other step: as long as a voice call has been established between the terminal and the opposite terminal, the terminal can send behavior coded data to the opposite terminal.
In this embodiment, the way the terminal encodes behavior information to obtain the second behavior coded data is similar to the way the opposite terminal encodes behavior information to obtain the first behavior coded data described above; refer to the embodiment above, which is not repeated here.
In a preferred embodiment, as shown in Fig. 4, on the basis of the embodiment of Fig. 1, the call method further comprises:
Step S105: obtaining voice information of the opposite terminal, and controlling the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lips with the speech.
In this embodiment, besides performing expression behaviors and body-action behaviors, the avatar on the interactive interface can also achieve lip synchronization. Specifically, the terminal obtains voice information over the voice channel and, according to the obtained voice information, controls the avatar's lips to perform the corresponding actions, so that the avatar's mouth movements are roughly consistent with the speech, bringing the call closer to a video call. Moreover, if the user chooses to display an avatar based on the other party's real face on the interactive interface, the call has almost the same effect as a video call.
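As a toy illustration of the lip-sync idea, each voice frame's energy can be mapped to a coarse mouth shape for the avatar. A real implementation would use phoneme/viseme analysis; the thresholds and names below are assumptions, not from the patent:

```python
# Map the energy of each voice frame (read from the voice channel) to a
# coarse avatar mouth position, so lip movement roughly tracks the speech.

def mouth_shape(frame_energy: float) -> str:
    """Map one voice frame's energy to a mouth position for the avatar."""
    if frame_energy < 0.1:
        return "closed"      # silence or very quiet speech
    if frame_energy < 0.5:
        return "half_open"   # moderate speech energy
    return "open"            # loud speech

def lip_sync(energies):
    """Produce one mouth shape per incoming voice frame."""
    return [mouth_shape(e) for e in energies]
```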
The present invention further provides a terminal. As shown in Fig. 5, in one embodiment the terminal comprises:
a display module 101, configured to obtain a preset avatar and display it on the interactive interface of a call while the terminal is in a call with an opposite terminal.
In this embodiment, both the terminal and the opposite terminal have a network call function; each may be, for example, a computer or a smartphone.
In this embodiment, the terminal and the opposite terminal place a call and, once the call is connected, establish a voice channel for the voice call; the voice channel may be established in the same way as in the prior art.
While the terminal and the opposite terminal are in a voice call, the interactive interface of the current call on each side displays an avatar that can change expression, change posture, and synchronize its lips with the speech. The interface may also provide options for the expressions or body actions that can be sent, as well as options for sending other messages, such as text messages.
In this embodiment, both the terminal and the opposite terminal hold an avatar template library, including cartoon avatars or avatars based on real faces. When making a voice call, the user can call up one of these avatars, and the avatar then performs the corresponding actions according to the expressions or body actions sent by the opposite terminal.
The terminal also comprises a receiving module 102, configured to receive first behavior coded data sent by the opposite terminal.
In this embodiment, the opposite terminal can send the first behavior coded data to the terminal via the expression or action options provided on the interactive interface.
In this embodiment, the user of the opposite terminal selects from the expression or action options provided on the interactive interface; after the user chooses an expression or action, the opposite terminal encodes the chosen expression or action to obtain the first behavior coded data described above.
In this embodiment, when the terminal and the opposite terminal establish a voice call, a data channel is established alongside the voice channel. This data channel carries the first behavior coded data sent by the opposite terminal to the terminal, as well as the behavior coded data sent by the terminal to the opposite terminal.
In this embodiment, the number of data channels depends on the actual data traffic or on the user's configuration: a single data channel may be established, or several. For example, when messages or other data are also sent during the voice call, multiple data channels may be needed.
In this embodiment, the first behavior coded data includes expression data and body-action data. The first behavior coded data is obtained through a specific encoding, and the terminal and the opposite terminal exchange behavior coded data instead of transmitting pictures directly; in this way, the amount of data transmitted is reduced and bandwidth is saved.
The terminal also comprises an execution module 103, configured to cause the avatar to perform the dynamic behavior corresponding to the first behavior coded data.
In this embodiment, the dynamic behaviors corresponding to the first behavior coded data include expression behaviors and body-action behaviors. For example, if the opposite terminal sends the behavior coded data for a grimace, the avatar on the terminal performs a grimace, which is an expression behavior; if the opposite terminal sends the behavior coded data for shaking the head, the avatar on the terminal shakes its head, which is a body-action behavior.
In a preferred embodiment, as shown in Fig. 6, on the basis of the embodiment of Fig. 5, the execution module 103 comprises:
a decoding unit 1031, configured to decode the first behavior coded data to obtain behavior information corresponding to the first behavior coded data;
a matching unit 1032, configured to match the behavior information against behavior information prestored in the terminal; and
an execution unit 1033, configured to cause the avatar to perform the dynamic behavior corresponding to the behavior information when the match succeeds.
In this embodiment, the user selects from the expression or body-action options on the interactive interface of the opposite terminal; the chosen expression or body action is encoded to obtain the first behavior coded data, which is sent to the terminal. The terminal then decodes the first behavior coded data to obtain the corresponding behavior information. For example, expressions such as laughing, grimacing, and smiling may be encoded as 0001, 0010, 0011, and so on, while body actions such as shaking the head, nodding, and hugging may be encoded as 1000, 1001, 1010, and so on. After receiving the first behavior coded data, the terminal decodes it by reversing the encoding to obtain the corresponding behavior information: for example, decoding 0010 yields a grimace, and decoding 1000 yields a head shake.
In this embodiment, behavior coded data is transmitted between the terminal and the opposite terminal instead of pictures; in this way, the amount of data transmitted is reduced and bandwidth is saved.
In this embodiment, the behavior information may be categorized, with behavior information of the same category stored in the same database; for example, the database may include an expression template library, a body-action template library, and so on. The received behavior information is matched against the behavior information prestored in the terminal's database; if the match succeeds, the corresponding behavior information is retrieved from the database and the avatar on the interactive interface is driven to perform it.
In a preferred embodiment, as shown in Fig. 7, on the basis of the embodiment of Fig. 5, the terminal further comprises:
a sending module 104, configured to obtain behavior information of the terminal, encode the behavior information to obtain second behavior coded data, and send the second behavior coded data to the opposite terminal.
In this embodiment, the opposite terminal can send behavior coded data to the terminal, and the terminal can likewise send behavior coded data to the opposite terminal, making the interaction two-way.
In this embodiment, the user of the local terminal selects from the expression or action options provided on the interactive interface; after the user chooses an expression or action, the local terminal encodes the chosen expression or action to obtain the second behavior coded data. The difference from the first behavior coded data is the direction of transmission: the second behavior coded data is sent by the local terminal to the opposite terminal, while the first behavior coded data is sent by the opposite terminal to the local terminal.
As long as a voice call has been established between the terminal and the opposite terminal, the sending module 104 of the terminal can send behavior coded data to the opposite terminal.
In this embodiment, the way the terminal encodes behavior information to obtain the second behavior coded data is similar to the way the opposite terminal encodes behavior information to obtain the first behavior coded data in the embodiment of Fig. 6; refer to the embodiment above, which is not repeated here.
In a preferred embodiment, as shown in Fig. 8, on the basis of the embodiment of Fig. 5, the terminal further comprises:
a synchronization module 105, configured to obtain voice information of the opposite terminal and control the lips of the avatar to perform actions corresponding to the voice information, so as to synchronize the lips with the speech.
In this embodiment, besides performing expression behaviors and body-action behaviors, the avatar on the interactive interface can also achieve lip synchronization. Specifically, the terminal obtains voice information over the voice channel and, according to the obtained voice information, controls the avatar's lips to perform the corresponding actions, so that the avatar's mouth movements are roughly consistent with the speech, bringing the call closer to a video call. Moreover, if the user chooses to display an avatar based on the other party's real face on the interactive interface, the call has almost the same effect as a video call.
The above are only preferred embodiments of the present invention and do not limit the scope of its claims; any equivalent structural or process transformation made using the contents of the specification and accompanying drawings of the present invention, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.

Claims (10)

1. A call method, applied to a terminal having a network call function, characterized in that the call method comprises the following steps:
during a call between the terminal and an opposite terminal, obtaining a preset avatar and displaying it on the interactive interface of the call;
receiving first behavior coded data sent by the opposite terminal; and
causing the avatar to perform a dynamic behavior corresponding to the first behavior coded data.
2. The call method according to claim 1, characterized in that the step of causing the avatar to perform the dynamic behavior corresponding to the first behavior coded data comprises:
decoding the first behavior coded data to obtain behavior information corresponding to the first behavior coded data;
matching the behavior information against behavior information prestored in the terminal; and
when the match succeeds, causing the avatar to perform the dynamic behavior corresponding to the behavior information.
3. The call method according to claim 1, characterized in that the call method further comprises:
obtaining behavior information of the terminal, encoding the behavior information to obtain second behavior coded data, and sending the second behavior coded data to the opposite terminal.
4. The call method according to claim 1 or 3, characterized in that the call method further comprises:
obtaining voice information of the opposite terminal, and controlling the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lips with the speech.
5. The call method according to claim 1, characterized in that the step of receiving the first behavior coded data sent by the opposite terminal comprises:
receiving, through at least one data channel established in advance, the first behavior coded data sent by the opposite terminal.
6. a terminal, is characterized in that, described terminal comprises:
Display module, for carrying out, in the process conversed, obtaining head portrait preset on the interactive interface of described terminal call and showing in terminal and opposite end;
Receiver module, for receiving the first behavior coded data that described opposite end sends;
Executive Module, for performing the dynamic behaviour corresponding with described first behavior coded data by described head portrait.
7. The terminal according to claim 6, wherein the execution module comprises:
a decoding unit, configured to decode the first behavior coded data to obtain behavior information corresponding to the first behavior coded data;
a matching unit, configured to match the behavior information against behavior information prestored in the terminal; and
an execution unit, configured to, when the matching succeeds, cause the avatar to perform the dynamic behavior corresponding to the behavior information.
8. The terminal according to claim 6, wherein the terminal further comprises:
a sending module, configured to obtain behavior information of the terminal, encode the behavior information to obtain second behavior coded data, and send the second behavior coded data to the opposite end.
9. The terminal according to claim 6 or 8, wherein the terminal further comprises:
a synchronization module, configured to obtain voice information of the opposite end and control the lips of the avatar to perform actions corresponding to the voice information, so as to synchronize the lip movements with the voice.
10. The terminal according to claim 6, wherein the receiving module is specifically configured to receive, through at least one pre-established data channel, the first behavior coded data sent by the opposite end.
CN201410416385.XA 2014-08-21 2014-08-21 Communication method and terminal Pending CN105357171A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410416385.XA CN105357171A (en) 2014-08-21 2014-08-21 Communication method and terminal
PCT/CN2014/089073 WO2015117383A1 (en) 2014-08-21 2014-10-21 Method for call, terminal and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410416385.XA CN105357171A (en) 2014-08-21 2014-08-21 Communication method and terminal

Publications (1)

Publication Number Publication Date
CN105357171A true CN105357171A (en) 2016-02-24

Family

ID=53777201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410416385.XA Pending CN105357171A (en) 2014-08-21 2014-08-21 Communication method and terminal

Country Status (2)

Country Link
CN (1) CN105357171A (en)
WO (1) WO2015117383A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534203A (en) * 2016-12-27 2017-03-22 努比亚技术有限公司 Mobile terminal and communication method
CN110062116A (en) * 2019-04-29 2019-07-26 上海掌门科技有限公司 Method and apparatus for handling information

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012034A (en) * 2021-03-05 2021-06-22 西安万像电子科技有限公司 Method, device and system for image display processing

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1427626A (en) * 2001-12-20 2003-07-02 松下电器产业株式会社 Virtual television telephone device
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method
CN101419499A (en) * 2008-11-14 2009-04-29 东南大学 Multimedia human-computer interaction method based on cam and mike
CN101692681A (en) * 2009-09-17 2010-04-07 杭州聚贝软件科技有限公司 Method and system for realizing virtual image interactive interface on phone set terminal
CN101931621A (en) * 2010-06-07 2010-12-29 上海那里网络科技有限公司 Device and method for carrying out emotional communication in virtue of fictional character
CN103218844A (en) * 2013-04-03 2013-07-24 腾讯科技(深圳)有限公司 Collocation method, implementation method, client side, server and system of virtual image
CN103442137A (en) * 2013-08-26 2013-12-11 苏州跨界软件科技有限公司 Method for allowing a user to look over virtual face of opposite side in mobile phone communication
CN103797761A (en) * 2013-08-22 2014-05-14 华为技术有限公司 Communication method, client, and terminal
CN103856390A (en) * 2012-12-04 2014-06-11 腾讯科技(深圳)有限公司 Instant messaging method and system, messaging information processing method and terminals

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN102404435A (en) * 2011-11-15 2012-04-04 宇龙计算机通信科技(深圳)有限公司 Display method for communication terminal talking interface and communication terminal
CN103886632A (en) * 2014-01-06 2014-06-25 宇龙计算机通信科技(深圳)有限公司 Method for generating user expression head portrait and communication terminal

Also Published As

Publication number Publication date
WO2015117383A1 (en) 2015-08-13

Similar Documents

Publication Publication Date Title
KR100617183B1 (en) System and method for multiplexing media information over a network using reduced communications resources and prior knowledge/experience of a called or calling party
EP3902272A1 (en) Audio and video pushing method and audio and video stream pushing client based on webrtc protocol
CN101971618A (en) Method for implementing rich video on mobile terminals
CN103442071A (en) Mobile phone screen content real-time sharing method
CN102255827A (en) Video chatting method, device and system
CN101888519A (en) Method for sharing desktop contents and intelligent equipment
WO2014194728A1 (en) Voice processing method, apparatus, and system
CN103780865A (en) Method, devices, control method and control device for video call
US20180139158A1 (en) System and method for multipurpose and multiformat instant messaging
CN105357171A (en) Communication method and terminal
CN104717131A (en) Information interaction method and server
CN102364965A (en) Refined display method of mobile phone communication information
CN107040458B (en) Method and system for realizing intercommunication of video conference
CN101252701A (en) Terminal realizing method of multimedia polychrome calling name card
WO2015117373A1 (en) Method and device for realizing voice message visualization service
CN104735389A (en) Information processing method and equipment
CN103684970A (en) Transmission method and thin terminals for media data streams
CN105516933A (en) Message processing method, message processing device, mobile terminal and server
CN103856395A (en) Method and system for calling friends and making discussion on webpage
CN104283762A (en) Method, system, client-side and server for transmitting instant messaging conversation content
CN106506326A (en) A kind of video call method, terminal and system
CN102264044A (en) Video short message sending method, apparatus thereof and system thereof
CN102075722B (en) Method for transmitting information in visual telephone and communication terminal
CN113014544B (en) Method and device for establishing centerless media link based on webRtc
CN103023746A (en) IM (Instant Messaging) system and drawing board implementation method based on same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160224