WO2015117383A1 - Method for call, terminal and computer storage medium - Google Patents

Method for call, terminal and computer storage medium Download PDF

Info

Publication number
WO2015117383A1
WO2015117383A1 (application PCT/CN2014/089073, CN2014089073W)
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
terminal
avatar
call
encoded data
Prior art date
Application number
PCT/CN2014/089073
Other languages
French (fr)
Chinese (zh)
Inventor
尚国强 (Shang Guoqiang)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2015117383A1 publication Critical patent/WO2015117383A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M 2203/25 Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service
    • H04M 2203/251 Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service where a voice mode or a visual mode can be used interchangeably
    • H04M 2203/252 Aspects of automatic or semi-automatic exchanges related to user interface aspects of the telephonic communication service where a voice mode or a visual mode can be used interchangeably where a voice mode is enhanced with visual information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/60 Context-dependent security
    • H04W 12/68 Gesture-dependent or behaviour-dependent

Definitions

  • In the traditional voice call mode, voice data is transmitted after a voice channel is established between two terminals.
  • This call mode is rather limited: interaction between users is minimal and not intuitive, which no longer satisfies users now that social applications are increasingly diverse.
  • Video call quality, meanwhile, is mediocre. In the 3G era, a high-quality video call occupies considerable radio resources, and since the maximum bandwidth in 3G is only 64K, video call quality clearly cannot meet users' requirements either; the user experience is poor and the cost is high.
  • Embodiments of the present invention are directed to providing a method, terminal, and computer storage medium for a call to increase user interaction in a voice call while reducing the cost of the call.
  • a first aspect of the embodiments of the present invention provides a method for calling, which is applied to a terminal having a network call function, and the method for the call includes the following steps:
  • the avatar is caused to present a dynamic behavior corresponding to the first behavior encoded data.
  • the step of causing the avatar to present a dynamic behavior corresponding to the first behavior coded data comprises:
  • the method for the call further includes:
  • the step of receiving the first behavior coded data sent by the opposite end comprises:
  • the second aspect of the embodiment of the present invention further provides a terminal, where the terminal includes:
  • the display module is configured to acquire an avatar preset on the call interaction interface of the terminal and display it during a call between the terminal and the peer;
  • a matching unit configured to match the behavior information against behavior information pre-stored by the terminal;
  • the interaction interface of the terminal displays the preset avatar, and when first behavior-coded data sent by the peer is received, the avatar performs the dynamic behavior corresponding to that data, including dynamic expression behaviors and dynamic body behaviors.
  • Compared with a traditional voice call, this call mode increases interaction between users, allows them to express themselves intuitively, and makes the call more engaging; compared with a video call, it saves bandwidth while achieving a similar effect, lowering the cost of the call.
  • Both the terminal and the peer end in this embodiment have a network call function, and may be a computer or a smart phone.
  • Step S104 Acquire behavior information of the terminal, encode the behavior information to obtain second behavior encoded data, and send the second behavior encoded data to the opposite end.
  • the peer may send behavior-coded data to the terminal, and the terminal may likewise send behavior-coded data to the peer, so that the two sides interact.
  • Both the terminal and the peer end in this embodiment have a network call function, and may be a computer or a smart phone.
  • the display module 101 specifically includes a display screen; the display screen may be a liquid crystal display, a plasma display, a projection screen, an electronic ink display, or another display structure.
  • the interface displays an avatar that can have facial expression changes, motion changes, and voice lip synchronization.
  • the dynamic behavior corresponding to the first behavior coded data includes an expression dynamic behavior and a limb dynamic behavior.
  • for example, when the peer sends behavior-coded data for baring teeth, the avatar on the terminal performs the teeth-baring action, which is a dynamic expression behavior; when the peer sends behavior-coded data for shaking the head, the avatar performs the head-shaking action, which is a dynamic body behavior.
  • the execution module 103 may be implemented as a processor, for example an image processor that controls the display module 101; the processor may be an application processor (AP), central processing unit (CPU), digital signal processor (DSP), field-programmable gate array (FPGA), or another electronic component in the terminal with display control capability.
  • AP: Application Processor
  • CPU: Central Processing Unit
  • DSP: Digital Signal Processor
  • FPGA: Field Programmable Gate Array
  • the execution module 103 includes:
  • the decoding unit 1031 is configured to decode the first behavior encoded data, and obtain behavior information corresponding to the first behavior encoded data;
  • the matching unit 1032 is configured to match the behavior information with the behavior information pre-stored by the terminal;
  • the user may pick one of the sendable expression or body-motion options on the interaction interface of the peer; the selected expression or body motion is encoded to obtain the first behavior-coded data, which is sent to the terminal. The terminal then decodes the first behavior-coded data to obtain the corresponding behavior information: for example, expressions such as laughing, baring teeth, and smiling may be coded as 0001, 0010, 0011, and so on, while body movements such as shaking the head, nodding, and hugging may be coded as 1000, 1001, 1010, and so on.
  • the behavior information may be classified into categories, and the behavior information of the same category may be stored in the same database, for example, the database includes an expression template library, a limb motion template library, and the like.
  • the behavior information is matched with the behavior information pre-stored in the terminal database. If the matching is successful, the corresponding behavior information in the database is invoked, and the avatar on the interaction interface is driven to execute the behavior information.
  • the terminal further includes:
  • the sending module 104 of the terminal can send the behavior coded data to the opposite end.
  • the way the terminal encodes behavior information to obtain the second behavior-coded data is similar to the way the peer encodes behavior information to obtain the first behavior-coded data described above.
  • the avatar on the interactive interface can perform lip movement synchronization in addition to the dynamic behavior of the expression and the dynamic behavior of the limb.
  • the terminal acquires voice information through the voice channel and, based on that voice information, drives the lips of the avatar to perform the corresponding movements so that the avatar's mouth shape essentially matches the voice, bringing the call even closer to a video call. If the user chooses to display the other party's real-person avatar on the interaction interface, the call effect is nearly identical to a video call.
  • an embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for executing at least one of the methods of the embodiments of the present invention, such as the methods described with reference to FIG. 1 and/or FIG. 4.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Disclosed is a method for a call, applied to a terminal having a network call function. The method comprises the following steps: during a call between the terminal and a peer, an avatar preset on the call interaction interface of the terminal is obtained and displayed; first behavior-coded data sent by the peer is received; and the avatar is caused to present a dynamic behavior corresponding to the first behavior-coded data. Also disclosed are a terminal and a computer storage medium.

Description

Method for call, terminal and computer storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular to a method for a call, a terminal, and a computer storage medium.
Background
With the development of network communication technology and the improvement of communication terminal hardware, Internet applications have become ever more widespread and involve more and more services. In the traditional voice call mode, voice data is transmitted after a voice channel is established between two terminals. This call mode is rather limited: interaction between users is minimal and not intuitive, which no longer satisfies users now that social applications are increasingly diverse. Video call quality, meanwhile, is mediocre: in the 3G era, a high-quality video call occupies considerable radio resources, and since the maximum bandwidth in 3G is only 64K, video call quality clearly cannot meet users' requirements either; the user experience is poor and the cost is high.
The above content is provided only to assist in understanding the technical solutions of the present invention and does not constitute an admission that it is prior art.
Summary of the Invention
Embodiments of the present invention aim to provide a method for a call, a terminal, and a computer storage medium that increase user interaction during a voice call while reducing the cost of the call.
A first aspect of the embodiments of the present invention provides a method for a call, applied to a terminal having a network call function, the method comprising the following steps:
during a call between the terminal and a peer, acquiring an avatar preset on the call interaction interface of the terminal and displaying it;
receiving first behavior-coded data sent by the peer;
causing the avatar to present a dynamic behavior corresponding to the first behavior-coded data.
Preferably, the step of causing the avatar to present a dynamic behavior corresponding to the first behavior-coded data comprises:
decoding the first behavior-coded data to obtain behavior information corresponding to the first behavior-coded data;
matching the behavior information against behavior information pre-stored by the terminal;
when the matching succeeds, causing the avatar to present a dynamic behavior corresponding to the behavior information.
Preferably, the method further comprises:
acquiring behavior information of the terminal, encoding the behavior information to obtain second behavior-coded data, and sending the second behavior-coded data to the peer.
Preferably, the method further comprises:
acquiring voice information of the peer, and controlling the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lip movements with the voice.
Preferably, the step of receiving the first behavior-coded data sent by the peer comprises:
receiving the first behavior-coded data sent by the peer through at least one pre-established data channel.
A second aspect of the embodiments of the present invention further provides a terminal, the terminal comprising:
a display module configured to acquire an avatar preset on the call interaction interface of the terminal and display it during a call between the terminal and a peer;
a receiving module configured to receive first behavior-coded data sent by the peer;
an execution module configured to cause the avatar to present a dynamic behavior corresponding to the first behavior-coded data.
Preferably, the execution module comprises:
a decoding unit configured to decode the first behavior-coded data and obtain behavior information corresponding to the first behavior-coded data;
a matching unit configured to match the behavior information against behavior information pre-stored by the terminal;
an execution unit configured to cause the avatar to present a dynamic behavior corresponding to the behavior information when the matching succeeds.
Preferably, the terminal further comprises:
a sending module configured to acquire behavior information of the terminal, encode the behavior information to obtain second behavior-coded data, and send the second behavior-coded data to the peer.
Preferably, the terminal further comprises:
a synchronization module configured to acquire voice information of the peer and control the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lip movements with the voice.
Preferably, the receiving module is configured to receive the first behavior-coded data sent by the peer through at least one pre-established data channel.
A third aspect of the embodiments of the present invention provides a computer storage medium storing computer-executable instructions for executing at least one of the methods of the first aspect.
In the method for a call, the terminal, and the computer storage medium provided by the embodiments of the present invention, while the terminal is in a voice call with the peer, the interaction interface of the terminal displays a preset avatar; when first behavior-coded data sent by the peer is received, the avatar performs the dynamic behavior corresponding to that data, including dynamic expression behaviors and dynamic body behaviors. Compared with a traditional voice call, this call mode increases interaction between users, allows them to express themselves intuitively, and makes the call more engaging; compared with a video call, it saves bandwidth while achieving a similar effect, lowering the cost of the call.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for a call according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of the step in FIG. 1 of causing the avatar to present the dynamic behavior corresponding to the first behavior-coded data;
FIG. 3 is a schematic flowchart of a method for a call according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a method for a call according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of the execution module in FIG. 5;
FIG. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a terminal according to a third terminal embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are intended only to illustrate and explain the present invention, not to limit it.
The present invention provides a method for a call, applied to a terminal having a network call function. Referring to FIG. 1, in one embodiment the method comprises:
Step S101: during a call between the terminal and a peer, acquiring an avatar preset on the call interaction interface of the terminal and displaying it.
In this embodiment, both the terminal and the peer have a network call function and may be, for example, computers or smartphones.
In this embodiment, the terminal places a call to the peer; once the call succeeds, a voice channel is established for the voice call. The voice channel may be established in the same way as in the prior art.
During the voice call between the terminal and the peer, the interaction interface of the current call displays an avatar capable of expression changes, body movements, and lip synchronization with the voice. The interaction interface may also offer options for sendable expressions or body motions, as well as options for sending other messages such as text.
In this embodiment, both the terminal and the peer hold an avatar template library, which may include cartoon avatars or real-person avatars. When a voice call is made, the user can select one of the avatars, and that avatar then performs actions according to the expressions or body motions sent by the peer.
Step S102: receiving first behavior-coded data sent by the peer.
In this embodiment, the peer may send the first behavior-coded data to the terminal through the expression or action options provided on its interaction interface.
In this embodiment, the user at the peer chooses among the expression or action options provided on the interaction interface; when the user selects an expression or action, the peer encodes the selection to obtain the first behavior-coded data and then sends it to the local terminal.
In this embodiment, when the terminal establishes the voice call with the peer, a data channel is established at the same time as the voice channel. This data channel carries the first behavior-coded data sent by the peer to the terminal, or the behavior-coded data sent by the terminal to the peer.
In this embodiment, the data channels are set up according to the actual data to be transmitted or the user's configuration: a single data channel may be established, or several. For example, if messages or other data are also sent during the voice call, multiple data channels may be needed.
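As a rough illustration of such an auxiliary data channel (not the patent's actual signalling), the sketch below opens one extra TCP connection beside an already-established voice session and uses it only to carry single-byte behavior codes. The port number and one-byte framing are assumptions introduced for the example.

```python
import socket
import struct

# Hypothetical port for the auxiliary behavior-code channel; the voice channel
# itself is assumed to be set up separately by the call stack.
BEHAVIOR_CHANNEL_PORT = 50007

def open_behavior_channel(peer_ip: str, port: int = BEHAVIOR_CHANNEL_PORT) -> socket.socket:
    """Open one data channel to the peer, alongside the existing voice channel."""
    return socket.create_connection((peer_ip, port), timeout=5.0)

def send_behavior_code(channel: socket.socket, code: int) -> None:
    """Send a single behavior code, framed as one unsigned byte."""
    channel.sendall(struct.pack("!B", code & 0xFF))

def recv_behavior_code(channel: socket.socket) -> int:
    """Block until one behavior code arrives from the peer."""
    data = channel.recv(1)
    if not data:
        raise ConnectionError("behavior channel closed by peer")
    return struct.unpack("!B", data)[0]
```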
In this embodiment, the first behavior-coded data includes expression data and body-motion data. The first behavior-coded data is obtained through a specific encoding; the terminal and the peer exchange behavior-coded data rather than transmitting pictures directly, which reduces the amount of data transmitted and saves bandwidth.
Step S103: causing the avatar to present the dynamic behavior corresponding to the first behavior-coded data.
In this embodiment, the dynamic behavior corresponding to the first behavior-coded data includes dynamic expression behaviors and dynamic body behaviors. For example, if the peer sends behavior-coded data for baring teeth, the avatar on the terminal performs the teeth-baring action, which is a dynamic expression behavior; if the peer sends behavior-coded data for shaking the head, the avatar performs the head-shaking action, which is a dynamic body behavior.
As to how the avatar is made to present the behavior corresponding to the first behavior-coded data, the terminal may generate a video and/or animation from the first behavior-coded data and the avatar so that the static avatar is set in motion. For instance, when the terminal receives a first behavior code indicating that the avatar should shake its head, it may extract the image data of the avatar, render the head from different sides, and form a sequence of frames that is played back continuously, so that the avatar dynamically presents the head-shaking action on screen. In a concrete implementation, the first behavior-coded data may also serve as a retrieval index: the index is used to find a pre-stored video of the avatar shaking its head, which is then played, achieving the same effect of the avatar presenting the dynamic behavior corresponding to the first behavior-coded data.
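A minimal sketch of the retrieval-index variant just described, assuming the terminal ships pre-rendered frame sequences keyed by behavior code; the directory layout, file format, and 25 fps pacing are illustrative assumptions, and `show_frame` stands in for whatever drawing call the terminal's UI actually provides.

```python
import time
from pathlib import Path
from typing import Callable

# Hypothetical on-device layout: one folder of numbered PNG frames per behavior
# code, e.g. avatar_clips/1000/ for the pre-stored "shake head" clip.
ANIMATION_ROOT = Path("avatar_clips")
FRAME_INTERVAL_S = 1 / 25  # assumed playback rate of 25 frames per second

def play_behavior_clip(code: int, show_frame: Callable[[Path], None]) -> None:
    """Use the behavior code as a retrieval index, then play the stored frames."""
    clip_dir = ANIMATION_ROOT / format(code, "04b")
    frames = sorted(clip_dir.glob("*.png"))
    if not frames:
        return  # unknown code: leave the avatar in its static pose
    for frame in frames:
        show_frame(frame)              # hand one frame to the renderer
        time.sleep(FRAME_INTERVAL_S)   # crude pacing for continuous playback
```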
Compared with the prior art, in this embodiment the interaction interface of the terminal displays a preset avatar during the voice call with the peer, and when first behavior-coded data sent by the peer is received, the avatar performs the dynamic behavior corresponding to that data, including dynamic expression behaviors and dynamic body behaviors. Compared with a traditional voice call, this call mode increases interaction between users, allows them to express themselves intuitively, and makes the call more engaging; compared with a video call, it saves bandwidth while achieving a similar effect, lowering the cost of the call.
In one embodiment, as shown in FIG. 2 and building on the embodiment of FIG. 1, step S103 comprises:
Step S1031: decoding the first behavior-coded data to obtain the behavior information corresponding to the first behavior-coded data;
Step S1032: matching the behavior information against behavior information pre-stored by the terminal;
Step S1033: when the matching succeeds, causing the avatar to present the dynamic behavior corresponding to the behavior information.
In this embodiment, the user may pick one of the sendable expression or body-motion options on the interaction interface of the peer; the selected expression or body motion is encoded to obtain the first behavior-coded data, which is sent to the terminal. The terminal then decodes the first behavior-coded data to obtain the corresponding behavior information. For example, expressions such as laughing, baring teeth, and smiling may be coded as 0001, 0010, 0011, and so on, while body movements such as shaking the head, nodding, and hugging may be coded as 1000, 1001, 1010, and so on. After receiving the first behavior-coded data, the terminal decodes it by reversing the encoding to obtain the corresponding behavior information: decoding 0010 yields the behavior information "baring teeth", and decoding 1000 yields "shaking the head".
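The concrete values above (0001 for laughing, 1000 for shaking the head, and so on) suggest a simple fixed codebook shared by both ends. The sketch below mirrors exactly that table; the entries come from the example in the text, and everything else about the mapping is an assumption.

```python
# Codebook mirroring the example in the text: expression codes in the 0xxx range,
# body-motion codes in the 1xxx range (written here as 4-bit integers).
EXPRESSION_CODES = {"laugh": 0b0001, "bare_teeth": 0b0010, "smile": 0b0011}
BODY_MOTION_CODES = {"shake_head": 0b1000, "nod": 0b1001, "hug": 0b1010}

ENCODE = {**EXPRESSION_CODES, **BODY_MOTION_CODES}
DECODE = {code: name for name, code in ENCODE.items()}

def encode_behavior(name: str) -> int:
    """Map a selected expression or body motion to its behavior code (sender side)."""
    return ENCODE[name]

def decode_behavior(code: int) -> str:
    """Reverse the encoding on the receiving terminal."""
    return DECODE[code]

# Decoding is simply the inverse of encoding, as the text describes.
assert decode_behavior(0b0010) == "bare_teeth"
assert decode_behavior(0b1000) == "shake_head"
```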
In this embodiment, behavior-coded data rather than pictures is transmitted between the terminal and the peer, which reduces the amount of data transmitted and saves bandwidth.
In this embodiment, the behavior information may be divided into categories, with behavior information of the same category stored in the same database; for example, the databases may include an expression template library, a body-motion template library, and so on. The decoded behavior information is matched against the behavior information pre-stored in the terminal's databases; if the matching succeeds, the corresponding behavior information in the database is invoked and drives the avatar on the interaction interface to perform it.
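One way to realize the matching step, assuming the template libraries are plain in-memory lookups keyed by the decoded behavior name; the library contents reuse the hypothetical codebook above, and `avatar.perform` is a placeholder for whatever animation API the terminal exposes.

```python
# Hypothetical pre-stored template libraries on the terminal, grouped by category.
EXPRESSION_TEMPLATE_LIBRARY = {"laugh", "bare_teeth", "smile"}
BODY_MOTION_TEMPLATE_LIBRARY = {"shake_head", "nod", "hug"}

def match_and_drive(avatar, behavior_name: str) -> bool:
    """Match decoded behavior info against the local libraries; on success,
    drive the avatar on the interaction interface to perform it."""
    if behavior_name in EXPRESSION_TEMPLATE_LIBRARY:
        avatar.perform(kind="expression", name=behavior_name)
        return True
    if behavior_name in BODY_MOTION_TEMPLATE_LIBRARY:
        avatar.perform(kind="body_motion", name=behavior_name)
        return True
    return False  # no match: ignore the code instead of guessing an animation
```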
In one embodiment, as shown in FIG. 3 and building on the embodiment of FIG. 1, the method further comprises:
Step S104: acquiring behavior information of the terminal, encoding the behavior information to obtain second behavior-coded data, and sending the second behavior-coded data to the peer.
In this embodiment, the peer may send behavior-coded data to the terminal, and the terminal may likewise send behavior-coded data to the peer, so that the two sides interact.
In this embodiment, the local user chooses among the expression or action options provided on the interaction interface; when the user selects an expression or action, the local terminal encodes the selection to obtain the second behavior-coded data. The difference from the first behavior-coded data described above is only the direction: the second behavior-coded data is sent by the local terminal to the peer, whereas the first behavior-coded data is sent by the peer to the local terminal.
In this embodiment, the terminal may send behavior-coded data to the peer after step S103 or after any other step; as long as a voice call is established between the terminal and the peer, the terminal can send behavior-coded data to the peer.
In this embodiment, the way the terminal encodes behavior information to obtain the second behavior-coded data is similar to the way the peer encodes behavior information to obtain the first behavior-coded data, so reference may be made to the description above and the details are not repeated here.
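A sketch of the sending direction (step S104). The codebook and send helper are passed in as parameters so the snippet stands alone; they correspond to the hypothetical codebook and data-channel helpers sketched earlier, and `selection` stands for whatever label the interaction interface reports when the user taps an expression or body-motion option.

```python
import socket
from typing import Callable, Mapping

def on_user_selected_behavior(
    channel: socket.socket,
    selection: str,
    codebook: Mapping[str, int],
    send_code: Callable[[socket.socket, int], None],
) -> None:
    """Encode the locally chosen expression or body motion as second behavior-coded
    data and push it to the peer over the data channel."""
    code = codebook.get(selection)
    if code is None:
        return              # option not in the codebook: do nothing
    send_code(channel, code)
```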
In one embodiment, as shown in FIG. 4 and building on the embodiment of FIG. 1, the method further comprises:
Step S105: acquiring voice information of the peer, and controlling the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lip movements with the voice.
In this embodiment, besides presenting dynamic expression and body behaviors, the avatar on the interaction interface can also achieve lip synchronization. Specifically, the terminal acquires voice information through the voice channel and, based on that voice information, drives the lips of the avatar to perform the corresponding movements so that the avatar's mouth shape essentially matches the voice, bringing the call even closer to a video call. If the user chooses to display the other party's real-person avatar on the interaction interface, the call effect is nearly identical to a video call.
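The text does not specify how the mouth movements are derived from the voice stream; a crude stand-in that is often used is to pick a mouth opening from the short-term energy of the audio, as sketched below under the assumption of 16-bit PCM chunks taken from the voice channel. The chunk size, thresholds, and mouth-shape names are all invented for the example.

```python
import array
import math

# Hypothetical mouth shapes ordered from closed to wide open.
MOUTH_SHAPES = ("closed", "slightly_open", "open", "wide_open")

def mouth_shape_for_chunk(pcm_chunk: bytes) -> str:
    """Pick a mouth shape from the RMS energy of one short chunk of signed
    16-bit PCM audio (e.g. 20 ms) read from the voice channel."""
    samples = array.array("h", pcm_chunk)
    if not samples:
        return MOUTH_SHAPES[0]
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms < 500:
        return MOUTH_SHAPES[0]     # silence or very quiet speech
    if rms < 2000:
        return MOUTH_SHAPES[1]
    if rms < 6000:
        return MOUTH_SHAPES[2]
    return MOUTH_SHAPES[3]
```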
The present invention further provides a terminal. As shown in FIG. 5, in one apparatus embodiment the terminal comprises:
a display module 101, configured to acquire an avatar preset on the call interaction interface of the terminal and display it during a call between the terminal and a peer.
In this embodiment, both the terminal and the peer have a network call function and may be, for example, computers or smartphones. The display module 101 specifically includes a display screen; the display screen may be a liquid crystal display, a plasma display, a projection screen, an electronic ink display, or another display structure.
In this embodiment, the terminal places a call to the peer; once the call succeeds, a voice channel is established for the voice call. The voice channel may be established in the same way as in the prior art.
During the voice call between the terminal and the peer, the interaction interface of the current call displays an avatar capable of expression changes, body movements, and lip synchronization with the voice. The interaction interface may also offer options for sendable expressions or body motions, as well as options for sending other messages such as text.
In this embodiment, both the terminal and the peer hold an avatar template library, which may include cartoon avatars or real-person avatars. When a voice call is made, the user can select one of the avatars, and that avatar then performs actions according to the expressions or body motions sent by the peer.
The terminal further comprises a receiving module 102, configured to receive first behavior-coded data sent by the peer.
In this embodiment, the peer may send the first behavior-coded data to the terminal through the expression or action options provided on its interaction interface. The receiving module may specifically comprise an external communication interface. The external communication interface may be a wireless communication interface, such as a receiving antenna, for example a WiFi antenna or a 2G, 3G and/or 4G receiving antenna of mobile Internet technologies; it may also be a wired communication interface, such as a network connection interface, for example an RJ45 interface or an optical fiber interface.
In this embodiment, the user at the peer chooses among the expression or action options provided on the interaction interface; when the user selects an expression or action, the peer encodes the selection to obtain the first behavior-coded data.
In this embodiment, when the terminal establishes the voice call with the peer, a data channel is established at the same time as the voice channel. This data channel carries the first behavior-coded data sent by the peer to the terminal, or the behavior-coded data sent by the terminal to the peer.
In this embodiment, the data channels are set up according to the actual data to be transmitted or the user's configuration: a single data channel may be established, or several. For example, if messages or other data are also sent during the voice call, multiple data channels may be needed.
In this embodiment, the first behavior-coded data includes expression data and body-motion data. The first behavior-coded data is obtained through a specific encoding; the terminal and the peer exchange behavior-coded data rather than transmitting pictures directly, which reduces the amount of data transmitted and saves bandwidth.
The terminal further comprises an execution module 103, configured to cause the avatar to present the dynamic behavior corresponding to the first behavior-coded data.
In this embodiment, the dynamic behavior corresponding to the first behavior-coded data includes dynamic expression behaviors and dynamic body behaviors. For example, if the peer sends behavior-coded data for baring teeth, the avatar on the terminal performs the teeth-baring action, which is a dynamic expression behavior; if the peer sends behavior-coded data for shaking the head, the avatar performs the head-shaking action, which is a dynamic body behavior.
The execution module 103 may be implemented as a processor, for example an image processor that controls the display module 101; the processor may be an application processor (AP), central processing unit (CPU), digital signal processor (DSP), field-programmable gate array (FPGA), or another electronic component with display control capability.
In one embodiment, as shown in FIG. 6 and building on the embodiment of FIG. 5, the execution module 103 comprises:
a decoding unit 1031, configured to decode the first behavior-coded data and obtain the behavior information corresponding to the first behavior-coded data;
a matching unit 1032, configured to match the behavior information against behavior information pre-stored by the terminal;
an execution unit 1033, configured to cause the avatar to present the dynamic behavior corresponding to the behavior information when the matching succeeds.
In this embodiment, the user may pick one of the sendable expression or body-motion options on the interaction interface of the peer; the selected expression or body motion is encoded to obtain the first behavior-coded data, which is sent to the terminal. The terminal then decodes the first behavior-coded data to obtain the corresponding behavior information. For example, expressions such as laughing, baring teeth, and smiling may be coded as 0001, 0010, 0011, and so on, while body movements such as shaking the head, nodding, and hugging may be coded as 1000, 1001, 1010, and so on. After receiving the first behavior-coded data, the terminal decodes it by reversing the encoding: decoding 0010 yields the behavior information "baring teeth", and decoding 1000 yields "shaking the head".
In this embodiment, behavior-coded data rather than pictures is transmitted between the terminal and the peer, which reduces the amount of data transmitted and saves bandwidth.
In this embodiment, the behavior information may be divided into categories, with behavior information of the same category stored in the same database; for example, the databases may include an expression template library, a body-motion template library, and so on. The decoded behavior information is matched against the behavior information pre-stored in the terminal's databases; if the matching succeeds, the corresponding behavior information in the database is invoked and drives the avatar on the interaction interface to perform it.
In one embodiment, as shown in FIG. 7 and building on the embodiment of FIG. 5, the terminal further comprises:
a sending module 104, configured to acquire behavior information of the terminal, encode the behavior information to obtain second behavior-coded data, and send the second behavior-coded data to the peer.
In this embodiment, the peer may send behavior-coded data to the terminal, and the terminal may likewise send behavior-coded data to the peer, so that the two sides interact.
In this embodiment, the local user chooses among the expression or action options provided on the interaction interface; when the user selects an expression or action, the local terminal encodes the selection to obtain the second behavior-coded data. The difference from the first behavior-coded data described above is only the direction: the second behavior-coded data is sent by the local terminal to the peer, whereas the first behavior-coded data is sent by the peer to the local terminal.
In this embodiment, as long as a voice call is established between the terminal and the peer, the sending module 104 of the terminal can send behavior-coded data to the peer.
In this embodiment, the way the terminal encodes behavior information to obtain the second behavior-coded data is similar to the way the peer encodes behavior information to obtain the first behavior-coded data in the embodiment of FIG. 6, so reference may be made to the description above and the details are not repeated here.
In one embodiment, as shown in FIG. 8 and building on the embodiment of FIG. 5, the terminal further comprises:
a synchronization module 105, configured to acquire voice information of the peer and control the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lip movements with the voice.
In this embodiment, besides presenting dynamic expression and body behaviors, the avatar on the interaction interface can also achieve lip synchronization. Specifically, the terminal acquires voice information through the voice channel and, based on that voice information, drives the lips of the avatar to perform the corresponding movements so that the avatar's mouth shape essentially matches the voice, bringing the call even closer to a video call. If the user chooses to display the other party's real-person avatar on the interaction interface, the call effect is nearly identical to a video call.
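Pulling the modules of FIGS. 5 to 8 together, the rough object sketch below shows one way the five modules could be wired inside a terminal; the class and callable names are hypothetical stand-ins for the display module 101, receiving module 102, execution module 103, sending module 104, and synchronization module 105.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CallTerminal:
    """Skeleton wiring of the modules described above (101 to 105)."""
    show_avatar: Callable[[], None]          # display module 101
    recv_code: Callable[[], int]             # receiving module 102
    present_behavior: Callable[[int], None]  # execution module 103
    send_code: Callable[[int], None]         # sending module 104
    sync_lips: Callable[[bytes], None]       # synchronization module 105

    def start_call(self) -> None:
        """Show the preset avatar once the voice call is established."""
        self.show_avatar()

    def handle_incoming_code(self) -> None:
        """Receive one first behavior code from the peer and animate the avatar."""
        self.present_behavior(self.recv_code())

    def handle_outgoing_code(self, code: int) -> None:
        """Send one second behavior code chosen by the local user to the peer."""
        self.send_code(code)

    def handle_voice_chunk(self, pcm_chunk: bytes) -> None:
        """Drive the avatar's lips from one chunk of received voice data."""
        self.sync_lips(pcm_chunk)
```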
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for executing at least one of the methods of the embodiments of the present invention, such as the methods described with reference to FIG. 1 and/or FIG. 4.
The computer storage medium may be any medium capable of storing program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. In some embodiments the computer storage medium is a non-transitory storage medium, such as a ROM.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any modification made in accordance with the principles of the present invention shall be understood to fall within the protection scope of the present invention.

Claims (11)

  1. A method for a call, applied to a terminal having a network call function, the method comprising the following steps:
    during a call between the terminal and a peer, acquiring an avatar preset on the call interaction interface of the terminal and displaying it;
    receiving first behavior-coded data sent by the peer;
    causing the avatar to present a dynamic behavior corresponding to the first behavior-coded data.
  2. The method for a call according to claim 1, wherein the step of causing the avatar to present a dynamic behavior corresponding to the first behavior-coded data comprises:
    decoding the first behavior-coded data to obtain behavior information corresponding to the first behavior-coded data;
    matching the behavior information against behavior information pre-stored by the terminal;
    when the matching succeeds, causing the avatar to present a dynamic behavior corresponding to the behavior information.
  3. The method for a call according to claim 1, wherein the method further comprises:
    acquiring behavior information of the terminal, encoding the behavior information to obtain second behavior-coded data, and sending the second behavior-coded data to the peer.
  4. The method for a call according to claim 1 or 3, wherein the method further comprises:
    acquiring voice information of the peer, and controlling the lips of the avatar to present actions corresponding to the voice information so as to synchronize the lip movements with the voice.
  5. The method for a call according to claim 1, wherein the step of receiving the first behavior-coded data sent by the peer comprises:
    receiving the first behavior-coded data sent by the peer through at least one pre-established data channel.
  6. A terminal, wherein the terminal comprises:
    a display module, configured to acquire an avatar preset on the call interaction interface of the terminal and display it during a call between the terminal and a peer;
    a receiving module, configured to receive first behavior-coded data sent by the peer;
    an execution module, configured to cause the avatar to present a dynamic behavior corresponding to the first behavior-coded data.
  7. The terminal according to claim 6, wherein the execution module comprises:
    a decoding unit, configured to decode the first behavior-coded data and obtain behavior information corresponding to the first behavior-coded data;
    a matching unit, configured to match the behavior information against behavior information pre-stored by the terminal;
    an execution unit, configured to cause the avatar to present a dynamic behavior corresponding to the behavior information when the matching succeeds.
  8. The terminal according to claim 6, wherein the terminal further comprises:
    a sending module, configured to acquire behavior information of the terminal, encode the behavior information to obtain second behavior-coded data, and send the second behavior-coded data to the peer.
  9. The terminal according to claim 6 or 8, wherein the terminal further comprises:
    a synchronization module, configured to acquire voice information of the peer and control the lips of the avatar to perform actions corresponding to the voice information so as to synchronize the lip movements with the voice.
  10. The terminal according to claim 6, wherein the receiving module is configured to receive the first behavior-coded data sent by the peer through at least one pre-established data channel.
  11. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute at least one of the methods according to claims 1 to 5.
PCT/CN2014/089073 2014-08-21 2014-10-21 Method for call, terminal and computer storage medium WO2015117383A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410416385.XA CN105357171A (en) 2014-08-21 2014-08-21 Communication method and terminal
CN201410416385.X 2014-08-21

Publications (1)

Publication Number Publication Date
WO2015117383A1 (en) 2015-08-13

Family

ID=53777201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089073 WO2015117383A1 (en) 2014-08-21 2014-10-21 Method for call, terminal and computer storage medium

Country Status (2)

Country Link
CN (1) CN105357171A (en)
WO (1) WO2015117383A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012034A (en) * 2021-03-05 2021-06-22 西安万像电子科技有限公司 Method, device and system for image display processing

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534203A (en) * 2016-12-27 2017-03-22 努比亚技术有限公司 Mobile terminal and communication method
CN110062116A (en) * 2019-04-29 2019-07-26 上海掌门科技有限公司 Method and apparatus for handling information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1427626A (en) * 2001-12-20 2003-07-02 松下电器产业株式会社 Virtual television telephone device
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN102404435A (en) * 2011-11-15 2012-04-04 宇龙计算机通信科技(深圳)有限公司 Display method for communication terminal talking interface and communication terminal
CN103886632A (en) * 2014-01-06 2014-06-25 宇龙计算机通信科技(深圳)有限公司 Method for generating user expression head portrait and communication terminal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1328908C (en) * 2004-11-15 2007-07-25 北京中星微电子有限公司 A video communication method
CN101419499B (en) * 2008-11-14 2010-06-02 东南大学 Multimedia human-computer interaction method based on camera and mike
CN101692681A (en) * 2009-09-17 2010-04-07 杭州聚贝软件科技有限公司 Method and system for realizing virtual image interactive interface on phone set terminal
CN101931621A (en) * 2010-06-07 2010-12-29 上海那里网络科技有限公司 Device and method for carrying out emotional communication in virtue of fictional character
CN103856390B (en) * 2012-12-04 2017-05-17 腾讯科技(深圳)有限公司 Instant messaging method and system, messaging information processing method and terminals
CN103218844B (en) * 2013-04-03 2016-04-20 腾讯科技(深圳)有限公司 The collocation method of virtual image, implementation method, client, server and system
EP2866391A4 (en) * 2013-08-22 2015-06-24 Huawei Tech Co Ltd Communication method, client, and terminal
CN103442137B (en) * 2013-08-26 2016-04-13 苏州跨界软件科技有限公司 A kind of method of checking the other side's conjecture face in mobile phone communication

Also Published As

Publication number Publication date
CN105357171A (en) 2016-02-24

Similar Documents

Publication Publication Date Title
US9210372B2 (en) Communication method and device for video simulation image
US9402057B2 (en) Interactive avatars for telecommunication systems
KR100617183B1 (en) System and method for multiplexing media information over a network using reduced communications resources and prior knowledge/experience of a called or calling party
US20180063556A1 (en) Systems and methods for providing guest broadcasting on a live stream video platform
JP2016173830A (en) Selective mirroring of media output
CN110213504B (en) Video processing method, information sending method and related equipment
US11741616B2 (en) Expression transfer across telecommunications networks
CN108932948B (en) Audio data processing method and device, computer equipment and computer readable storage medium
JP2004349851A (en) Portable terminal, image communication program, and image communication method
US11138715B2 (en) Method and apparatus for determining experience quality of VR multimedia
JP2016511837A (en) Voice change for distributed story reading
WO2012105318A1 (en) Input support device, input support method, and recording medium
KR20100136801A (en) Apparatus and method of an user interface in a multimedia system
CN113301355B (en) Video transmission, live broadcast and playing method, equipment and storage medium
WO2015117383A1 (en) Method for call, terminal and computer storage medium
CN112261421A (en) Virtual reality display method and device, electronic equipment and storage medium
CN111773660A (en) Cloud game processing system, method and device
CN104391628A (en) Process switching method and device
CN102364965A (en) Refined display method of mobile phone communication information
CN111787111B (en) Data transmission method and device based on cloud game
CN107493478B (en) Method and device for setting coding frame rate
CN112929704A (en) Data transmission method, device, electronic equipment and storage medium
CN114268626A (en) Window processing system, method and device
CN113596583A (en) Video stream bullet time data processing method and device
CN113965779A (en) Cloud game data transmission method, device and system and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14882093

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14882093

Country of ref document: EP

Kind code of ref document: A1