WO2013152639A1 - A video chat method and system - Google Patents
A video chat method and system
- Publication number
- WO2013152639A1 (PCT application PCT/CN2013/071793)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- client
- client user
- data
- virtual avatar
- avatar model
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
Definitions
- the invention belongs to the technical field of computers, and in particular relates to a video chat method and system.
- the general implementation is to record voice and video with a microphone and a camera, compress and synchronize the audio and video data, and play the video image on the other client after network transmission; some implementations instead use a virtual camera to convert the video image in real time into avatar video data via face recognition technology, and the generated video data is then transmitted over the network and played on the other client.
- in either case, the data transmitted over the mobile communication network is video data, so the transmission volume is large; limited by the speed, data allowance, and cost of existing mobile communication networks, video data transmission during a mobile-terminal video chat is therefore slow and expensive.
- the purpose of the embodiments of the present invention is to provide a video chat method and system, which aim to solve the prior-art technical problem that video data transmission during a mobile-terminal video chat is slow and costly.
- a first video chat method comprising: collecting facial video data and audio data of a first client user; identifying face vector data of the first client user according to the facial video data; and sending the face vector data and audio data to a second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model of the first client user, displays the virtual avatar model of the first client user, and synchronously plays the sound in the audio data of the first client user;
- the embodiment of the invention further provides a second video chat method, the method comprising:
- the embodiment of the invention further provides a third video chat method, the method comprising:
- the embodiment of the invention further provides a first video chat system, the system comprising:
- a facial video data collecting unit configured to collect facial video data of the first client user, and identify facial vector data of the first client user according to the facial video data;
- a data forwarding unit configured to send the face vector data to the second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model of the first client user, and displays the virtual avatar model of the first client user.
- the embodiment of the invention further provides a second video chat system, the system further comprising:
- a receiving unit configured to receive facial vector data of the first client user sent by the first client;
- a virtual avatar generating unit configured to render facial vector data of the first client user to generate a virtual avatar model of the first client user; and
- a display unit configured to display a virtual avatar model of the first client user.
- the embodiments of the present invention have the beneficial effects that: the first client sends the face vector data of the first client user to the second client, and the second client generates and displays the virtual avatar model of the first client user.
- only the face vector data is transmitted over the network, which greatly reduces network traffic; video chat can thus run smoothly on an ordinary mobile network, data transmission is fast, and network traffic costs drop sharply, which can attract more users to video chat over the mobile network.
- chatting through an avatar lets the user change their original image, makes communication between users smoother, and makes it easier to close the distance between them; moreover, chatting through a virtual avatar model can hide the user's real image while preserving the live feel of the chat, which is suitable for communication with strangers and is both entertaining and privacy-preserving.
- FIG. 1 is a schematic structural diagram of an application scenario according to an embodiment of the present disclosure
- FIG. 2 is a flow chart showing the operation of the first preferred embodiment of the video chatting method provided by the present invention
- FIG. 3 is a schematic diagram of state transition of a communication network module according to an embodiment of the present invention.
- FIG. 4 is a flow chart showing the operation of the second preferred embodiment of the video chatting method provided by the present invention.
- FIG. 5 is a schematic structural diagram of a first preferred embodiment of a video chat system provided by the present invention.
- FIG. 6 is a schematic structural diagram of a second preferred embodiment of a video chat system provided by the present invention.
- FIG. 7 is a schematic structural diagram of a third preferred embodiment of a video chat system provided by the present invention.
- the first client identifies the face vector data of the first client user according to the collected facial video data of the first client user, and sends the face vector data of the first client user to the second client;
- the second client generates and displays a virtual avatar model of the first client user according to the face vector data of the first client user.
- the embodiment of the invention provides a video chat method, and the method includes the following steps:
- collecting facial video data and audio data of a first client user; identifying face vector data of the first client user according to the facial video data; and sending the face vector data and audio data to a second client, so that the second client renders the face vector data to generate a virtual avatar model of the first client user, displays the virtual avatar model, and synchronously plays the sound in the audio data of the first client user;
- the embodiment of the invention further provides a video chat method, the method comprising:
- the embodiment of the invention further provides a video chat method, the method comprising:
- the embodiment of the invention further provides a video chat system, the system comprising:
- a facial video data collecting unit configured to collect facial video data of the first client user, and identify facial vector data of the first client user according to the facial video data;
- a data forwarding unit configured to send the face vector data of the first client user to the second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model of the first client user, and displays the virtual avatar model of the first client user.
- the embodiment of the invention further provides a video chat system, the system further comprising:
- a receiving unit configured to receive facial vector data of the first client user sent by the first client.
- a virtual avatar generating unit configured to render facial vector data of the first client user to generate a virtual avatar model of the first client user
- a display unit configured to display a virtual avatar model of the first client user.
- the application scenario includes a first client 11 and a second client 12.
- the first client 11 and the second client 12 may be instant messaging software installed on a mobile phone, an iPad, or a PC. The first client 11 and the second client 12 conduct a video chat through a communication network: the first client 11 acquires the face vector data and audio data of the user using the first client 11 (hereinafter the "first client user") and transmits the face vector data and audio data to the second client 12; the second client 12 generates a virtual avatar model according to the face vector data of the first client user, displays the generated virtual avatar model, and plays the sound in the audio data.
- FIG. 2 is a flowchart of a first preferred embodiment of a video chat method according to the present invention, which is described in detail as follows:
- facial video data of the first client user is collected, and facial vector data of the first client user is identified according to the facial video data of the first client user.
- the first client 11 collects facial video data of the first client user through the camera, analyzes each frame image, and identifies the face vector data of the first client user. The face vector data includes: the shape of the face, the angle of head rotation, the size and position of the eyes, the size and position of the eyebrows, the size and position of the nose, the shape and degree of opening of the mouth, and changes in facial expression.
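The vector fields enumerated above can be grouped into a per-frame record. Below is a minimal sketch, assuming hypothetical field names and a JSON wire encoding (the patent specifies neither), to illustrate why a frame of vector data is far smaller than a compressed video frame:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FaceVectorData:
    """Per-frame face parameters identified from the camera image.
    Field names are illustrative; they mirror the list in the description."""
    face_shape: list[float]                 # face contour control points
    head_angle: tuple[float, float, float]  # yaw, pitch, roll in degrees
    eye_size_pos: tuple[float, float, float, float]   # w, h, x, y
    brow_size_pos: tuple[float, float, float, float]
    nose_size_pos: tuple[float, float, float, float]
    mouth_shape: list[float]                # lip control points
    mouth_openness: float                   # 0.0 closed .. 1.0 fully open
    expression: str                         # e.g. "neutral", "smile"

def encode(frame: FaceVectorData) -> bytes:
    """Serialize one frame: a few dozen numbers instead of a video frame."""
    return json.dumps(asdict(frame)).encode("utf-8")

frame = FaceVectorData(
    face_shape=[0.1, 0.2], head_angle=(5.0, -2.0, 0.0),
    eye_size_pos=(0.1, 0.05, 0.3, 0.4),
    brow_size_pos=(0.12, 0.02, 0.3, 0.35),
    nose_size_pos=(0.08, 0.1, 0.5, 0.55),
    mouth_shape=[0.3, 0.1], mouth_openness=0.4, expression="smile")
payload = encode(frame)
# the serialized frame is a few hundred bytes, orders of magnitude
# smaller than a compressed video frame
```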
- the face vector data of the first client user is sent to the second client 12, so that the second client 12 renders the face vector data of the first client user to generate a virtual avatar model of the first client user, and displays the virtual avatar model of the first client user.
- the first client 11 sends the face vector data to the second client 12 through a communication network module, which is responsible for transmitting the various data and instructions exchanged between the first client user and the user of the second client 12 (hereinafter the "second client user") during the video chat.
- the communication network module needs to log in to a server and stay online through heartbeat packets; it can also query friends' online status, initiate a call request, accept or reject a call request, and maintain its own call state. For details, refer to the state-transition diagram of the communication network module in FIG. 3.
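The login/heartbeat/call behaviour of the communication network module can be sketched as a small state machine. The state and event names below are assumptions chosen for illustration, since FIG. 3 itself is not reproduced here:

```python
# Hypothetical states and events inferred from the description; FIG. 3 may differ.
TRANSITIONS = {
    ("offline", "login_ok"): "online",
    ("online", "heartbeat_timeout"): "offline",
    ("online", "send_call_request"): "calling",
    ("online", "recv_call_request"): "ringing",
    ("calling", "peer_accept"): "in_call",
    ("calling", "peer_reject"): "online",
    ("ringing", "accept"): "in_call",
    ("ringing", "reject"): "online",
    ("in_call", "hang_up"): "online",
}

def step(state: str, event: str) -> str:
    """Advance the communication-module state; unknown events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

s = "offline"
for ev in ["login_ok", "send_call_request", "peer_accept", "hang_up"]:
    s = step(s, ev)
# after a completed call the module is back in the "online" state
```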
- in this embodiment, the first client 11 sends the face vector data of the first client user to the second client 12, and the second client 12 generates and displays the virtual avatar model of the first client user. Only face vector data is transmitted over the network in this process, which greatly reduces network traffic: video chat can run smoothly on an ordinary mobile network, data transmission is fast, and network traffic costs drop sharply, which can attract more users to video chat over the mobile network.
- chatting through an avatar lets the user change their original image, makes communication between users smoother, and makes it easier to close the distance between them; moreover, the virtual avatar model hides the user's real image while preserving the live feel of the chat, which is suitable for communication with strangers and is both entertaining and privacy-preserving.
- the first client 11 receives an interactive action button selected by the first client user in an interactive UI component library, and identifies the button to obtain interactive action information. The interactive UI component library includes face-touch interactive action buttons and specific-function interactive action buttons:
- with a face-touch interactive action button, the first client user clicks the position on the face where the interaction should take effect, and a corresponding special effect is generated at that position;
- a specific-function interactive action button directly generates the corresponding special effect on the face.
- the first client 11 sends the interactive action information to the second client 12, so that the second client 12 merges and renders the interactive action information with the face data information of the second client user to generate a virtual avatar model of the second client user, and displays the virtual avatar model of the second client user.
- for example, the first client user selects a face-touch interactive action button in the interactive UI component library and clicks the position of the face to be touched; the first client identifies interactive action information such as the meaning of the button and the clicked face position, and sends that information to the second client 12, so that the second client 12 generates and outputs an interactive action such as a blown kiss, a kiss, a slap, or a tap on the forehead at the corresponding face position.
- alternatively, the first client user selects a specific-function interactive action button in the interactive UI component library; the first client identifies interactive action information such as the meaning of that button and sends it to the second client 12, so that the second client 12 generates and outputs an interactive action such as a lightning strike or egg-throwing at the corresponding position of the face according to the interactive action information.
- the second client 12 may display the avatar model of the second client user while displaying the avatar model of the first client user.
- in this embodiment, the first client 11 receives an interactive action button selected by the first client user in the interactive UI component library, identifies the button to obtain interactive action information, and sends the interactive action information to the second client 12; the second client 12 then displays the second client user's virtual avatar model with the added special effect, increasing the fun and interactivity of the chat.
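The interactive action information exchanged between the clients can be sketched as a small message. The field names and the default anchor position below are illustrative assumptions; the patent only states that the button's meaning and, for face-touch buttons, the clicked position are transmitted:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractiveAction:
    """Illustrative interactive-action message; names are hypothetical."""
    kind: str                  # "touch_face" or "specific_function"
    meaning: str               # e.g. "kiss", "slap", "lightning", "throw_egg"
    face_pos: Optional[tuple[float, float]] = None  # only for touch_face

def effect_position(action: InteractiveAction) -> tuple[float, float]:
    """Where the receiving client renders the special effect on the face:
    the clicked point for touch actions, a default anchor otherwise."""
    if action.kind == "touch_face" and action.face_pos is not None:
        return action.face_pos
    return (0.5, 0.5)  # centre of the face for whole-face effects

kiss = InteractiveAction("touch_face", "kiss", face_pos=(0.42, 0.70))
bolt = InteractiveAction("specific_function", "lightning")
```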
- the first client 11 collects audio data of the first client user.
- the audio data of the first client user may be collected while collecting the facial video data of the first client user.
- the first client 11 time-stamps the face vector data and the audio data of the first client user and sends them to the second client 12, so that the second client 12, according to the time stamps, synchronously displays the virtual avatar model of the first client user and plays the sound in the corresponding audio data.
- in this embodiment, the first client 11 time-stamps the face vector data and the audio data before sending them to the second client 12, so that the second client 12 can synchronously display the first client user's virtual avatar model and play the sound in the corresponding audio data. This ensures that, although the facial data and audio data are transmitted separately over the network, they can still be displayed and played in sync after reception, solving the problem of voice and mouth shape falling out of sync.
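The time-stamp synchronization described above can be sketched as follows. The millisecond stamps, the tolerance value, and the pairing-by-nearest-stamp strategy are illustrative assumptions; the patent only requires that both streams carry time stamps:

```python
import bisect

def pair_streams(face_frames, audio_chunks, tolerance_ms=40):
    """face_frames / audio_chunks: sorted lists of (timestamp_ms, payload).
    Returns (face, audio) pairs whose stamps agree within the tolerance,
    so the avatar pose is shown together with the matching sound."""
    audio_ts = [t for t, _ in audio_chunks]
    pairs = []
    for t, face in face_frames:
        i = bisect.bisect_left(audio_ts, t)
        # candidate chunks on either side of the face timestamp
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(audio_ts)),
            key=lambda j: abs(audio_ts[j] - t),
        )
        if abs(audio_ts[best] - t) <= tolerance_ms:
            pairs.append((face, audio_chunks[best][1]))
    return pairs

faces = [(0, "pose0"), (33, "pose1"), (66, "pose2")]
audio = [(0, "chunk0"), (40, "chunk1"), (80, "chunk2")]
# each face frame pairs with the audio chunk closest in time
```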
- FIG. 4 is a flow chart showing the operation of the second preferred embodiment of the video chatting method provided by the present invention, which is described in detail as follows:
- the second client 12 receives the face vector data of the first client user sent by the first client 11.
- the face vector data of the first client user includes: the face shape, the angle of head rotation, the size and position of the eyes, the size and position of the eyebrows, the size and position of the nose, the shape and degree of opening of the mouth, changes in facial expression, and other data.
- the second client 12 renders the face vector data of the first client user to generate a virtual avatar model of the first client user
- the second client 12 displays a virtual avatar model of the first client user.
- the second client 12 performs cartoon rendering on the face vector data of the first client user to generate a virtual cartoon avatar model of the first client user, and displays the virtual cartoon avatar model of the first client user, which further increases the fun of communication between users and enriches the interactive experience.
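Cartoon rendering can be sketched as mapping the received face vector parameters onto a cartoon avatar rig. The rig, the parameter names, and the exaggeration factor below are hypothetical, used only to illustrate restyling the same vector data; the patent does not describe the rendering internals:

```python
def apply_to_rig(rig: dict, vec: dict, exaggeration: float = 1.5) -> dict:
    """Pose a cartoon rig from face vector parameters. Cartoon styles
    typically exaggerate expression, so mouth opening and head rotation
    are scaled before posing (the scaling is an assumption)."""
    posed = dict(rig)  # copy so the base rig is left untouched
    posed["mouth_open"] = min(1.0, vec["mouth_openness"] * exaggeration)
    posed["head_yaw"] = vec["head_yaw"] * exaggeration
    posed["expression"] = vec.get("expression", "neutral")
    return posed

rig = {"mouth_open": 0.0, "head_yaw": 0.0, "expression": "neutral"}
vec = {"mouth_openness": 0.5, "head_yaw": 4.0, "expression": "smile"}
posed = apply_to_rig(rig, vec)
# posed["mouth_open"] == 0.75, posed["head_yaw"] == 6.0
```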
- the second client 12, according to the time stamps added by the first client 11 to the face vector data and the audio data, synchronously displays the virtual avatar model of the first client user and plays the sound in the corresponding audio data, so that the first client user's voice and mouth movements are displayed in sync.
- this embodiment is the counterpart, on the second client 12 side, of the first embodiment: the second client 12 receives the face vector data of the first client user sent by the first client 11, and renders it to generate and display the virtual avatar model of the first client user. Because only face vector data is transmitted over the network, network traffic is greatly reduced, video chat runs smoothly on an ordinary mobile network, data transmission is fast, and network traffic costs drop sharply.
- the second client 12 receives the interaction action information sent by the first client 11;
- the interactive action information is obtained by the first client 11 identifying the interactive action button selected by the first client user in the interactive UI component library; the interactive UI component library includes face-touch interactive action buttons and specific-function interactive action buttons.
- the second client 12 merges and renders the interactive action information and the face data information of the second client user to generate a virtual avatar model of the second client user;
- the second client 12 merges the interactive action information with the face data information of the second client user and renders them together, generating a virtual avatar model of the second client user that includes the interactive action.
- the second client 12 displays a virtual avatar model of the second client user.
- the second client 12 may display the virtual avatar model of the second client user while displaying the virtual avatar model of the first client user.
- in this embodiment, the second client 12 receives the interactive action information sent by the first client 11, merges and renders it with the face data information of the second client user, and generates and outputs the rendered virtual avatar model of the second client user, increasing the fun and interactivity of the chat.
- FIG. 5 is a schematic structural diagram of a first preferred embodiment of a video chat system provided by the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown. The device may be a software unit, a hardware unit, or a combination of software and hardware built into the mobile terminal.
- the video chat system includes a face video data collecting unit 51 and a data forwarding unit 52.
- the facial video data collecting unit 51 is configured to collect facial video data of the first client user, and identify facial vector data of the first client user according to the facial video data;
- a data forwarding unit 52 configured to send the face vector data of the first client user to the second client 12, so that the second client 12 renders the face vector data of the first client user to Generating a virtual avatar model of the first client user and displaying a virtual avatar model of the first client user.
- the video chat system provided in this embodiment may be used in the foregoing corresponding method embodiment 1.
- FIG. 6 is a schematic structural diagram of a second preferred embodiment of a video chat system provided by the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown. The device may be a software unit, a hardware unit, or a combination of software and hardware built into the mobile terminal.
- the video chat system includes a face video data collecting unit 61, an audio data collecting unit 62, an interactive action recognizing unit 63, and a data forwarding unit 64.
- the facial video data collecting unit 61 is configured to collect facial video data of the first client user, and identify facial vector data of the first client user according to the facial video data;
- the data forwarding unit 64 is configured to send the face vector data to the second client 12, so that the second client 12 renders the face vector data of the first client user to generate a virtual avatar model of the first client user, and displays the virtual avatar model of the first client user.
- the system further includes: an audio data collecting unit 62, configured to collect audio data of the first client user;
- the data forwarding unit 64 is further configured to time-stamp the face vector data and the audio data of the first client user and send them to the second client 12, so that the second client 12, according to the time stamps, synchronously displays the virtual avatar model of the first client user and plays the sound in the corresponding audio data.
- the system further includes an interaction action recognition unit 63, configured to receive an interaction action button selected by the first client user in the interaction UI component library, and identify the interaction action button to obtain interaction action information
- the interactive UI component library includes: a touch facial interaction action button, and a specific function interactive action button;
- the data forwarding unit 64 is further configured to send the interaction action information to the second client 12, so that the second client 12 combines and renders the interaction action information and the face data information of the second client user to generate a virtual avatar model of the second client user, and displaying a virtual avatar model of the second client user.
- the video chat system provided in this embodiment can be used in the foregoing second and third embodiments of the method.
- FIG. 7 is a schematic structural diagram of a third preferred embodiment of a video chat system provided by the present invention.
- the device may be a software unit, a hardware unit, or a combination of software and hardware built into the mobile terminal.
- the video chat system includes a receiving unit 71, a virtual avatar generating unit 72, and a display unit 73.
- the receiving unit 71 is configured to receive facial vector data of the first client user sent by the first client 11 .
- the virtual avatar generating unit 72 is configured to render facial vector data of the first client user to generate a virtual avatar model of the first client user.
- the display unit 73 is configured to display a virtual avatar model of the first client user.
- the receiving unit 71 is further configured to receive interaction action information sent by the first client 11 .
- the virtual avatar generating unit 72 is further configured to merge and render the interactive action information and the facial data information of the second client user to generate a virtual avatar model of the second client user.
- the display unit 73 is further configured to display a virtual avatar model of the second client user.
- the virtual avatar generating unit 72 is configured to perform cartoon rendering on the facial vector data of the first client user to generate a virtual cartoon avatar model of the first client user.
- the display unit 73 is specifically configured to display a virtual cartoon avatar model of the first client user.
- the video chat system provided by the embodiment of the present invention can be used in the foregoing method embodiments 4 and 5.
- each included unit is divided only according to functional logic and is not limited to the above division, as long as the corresponding function can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the scope of protection of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Information Transfer Between Computers (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (18)
- A video chat method, wherein the method comprises the following steps: collecting facial video data and audio data of a first client user, and interactive action information received by the first client in an interactive UI component library; identifying face vector data of the first client user according to the facial video data; sending the face vector data and audio data to a second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model corresponding to the first client user, displays the virtual avatar model of the first client user, and synchronously plays the sound in the audio data of the first client user; and sending the interactive action information to the second client, so that the second client merges and renders the interactive action information with face data information of a second client user to generate a virtual avatar model corresponding to the second client user, and displays the virtual avatar model of the second client user while displaying the virtual avatar model of the first client user.
- The video chat method according to claim 1, wherein, when collecting the interactive action information received by the first client in the interactive UI component library, an interactive action button selected by the first client user in the interactive UI component library is first received, and the interactive action button is identified to obtain the interactive action information, the interactive UI component library comprising: face-touch interactive action buttons and specific-function interactive action buttons.
- The video chat method according to claim 1, wherein sending the face vector data and the audio data to the second client further comprises the following step: time-stamping the face vector data of the first client user and the audio data and sending them to the second client, so that the second client displays the virtual avatar model of the first client user according to the time stamps and synchronously plays the sound in the audio data.
- A video chat method, wherein the method comprises the following steps: collecting facial video data of a first client user, and identifying face vector data of the first client user according to the facial video data; and sending the face vector data of the first client user to a second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model of the first client user, and displays the virtual avatar model of the first client user.
- The video chat method according to claim 4, wherein the method further comprises: receiving an interactive action button selected by the first client user in an interactive UI component library, and identifying the interactive action button to obtain interactive action information, the interactive UI component library comprising: face-touch interactive action buttons and specific-function interactive action buttons; and sending the interactive action information to the second client, so that the second client merges and renders the interactive action information with face data information of a second client user to generate a virtual avatar model of the second client user, and displays the virtual avatar model of the second client user.
- The video chat method according to claim 4, wherein the method further comprises: collecting audio data of the first client user; and time-stamping the face vector data of the first client user and the audio data and sending them to the second client, so that the second client, according to the time stamps, synchronously displays the virtual avatar model of the first client user and plays the sound in the corresponding audio data.
- A video chat method, wherein the method comprises the following steps: receiving face vector data of a first client user sent by a first client; rendering the face vector data of the first client user to generate a virtual avatar model of the first client user; and displaying the virtual avatar model of the first client user.
- The video chat method according to claim 7, wherein the method further comprises: receiving interactive action information sent by the first client; merging and rendering the interactive action information with face data information of a second client user to generate a virtual avatar model of the second client user; and displaying the virtual avatar model of the second client user.
- The video chat method according to claim 7, wherein rendering the face vector data of the first client user to generate a virtual avatar model of the first client user and displaying the virtual avatar model of the first client user specifically comprises: performing cartoon rendering on the face vector data of the first client user to generate a virtual cartoon avatar model of the first client user; and displaying the virtual cartoon avatar model of the first client user.
- A video chat system, wherein the system comprises: a facial video data collecting unit configured to collect facial video data of a first client user and identify face vector data of the first client user according to the facial video data; and a data forwarding unit configured to send the face vector data to a second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model of the first client user, and displays the virtual avatar model of the first client user.
- The video chat system according to claim 10, wherein the system further comprises: an interactive action recognition unit configured to receive an interactive action button selected by the first client user in an interactive UI component library and identify the interactive action button to obtain interactive action information, the interactive UI component library comprising: face-touch interactive action buttons and specific-function interactive action buttons; and the data forwarding unit is further configured to send the interactive action information to the second client, so that the second client merges and renders the interactive action information with face data information of a second client user to generate a virtual avatar model of the second client user, and displays the virtual avatar model of the second client user.
- The video chat system according to claim 10, wherein the system further comprises: an audio data collecting unit configured to collect audio data of the first client user; and the data forwarding unit is further configured to time-stamp the face vector data and the audio data and send them to the second client, so that the second client, according to the time stamps, synchronously displays the virtual avatar model of the first client user and plays the sound in the corresponding audio data.
- A video chat system, wherein the system comprises: a receiving unit configured to receive face vector data of a first client user sent by a first client; a virtual avatar generating unit configured to render the face vector data of the first client user to generate a virtual avatar model of the first client user; and a display unit configured to display the virtual avatar model of the first client user.
- The video chat system according to claim 13, wherein the receiving unit is further configured to receive interactive action information sent by the first client; the virtual avatar generating unit is further configured to merge and render the interactive action information with face data information of a second client user to generate a virtual avatar model of the second client user; and the display unit is further configured to display the virtual avatar model of the second client user.
- The video chat system according to claim 13, wherein the virtual avatar generating unit is specifically configured to perform cartoon rendering on the face vector data of the first client user to generate a virtual cartoon avatar model of the first client user; and the display unit is specifically configured to display the virtual cartoon avatar model of the first client user.
- A storage medium storing processor-executable instructions, wherein the processor-executable instructions cause a processor to: collect facial video data and audio data of a first client user, and interactive action information received by the first client in an interactive UI component library; identify face vector data of the first client user according to the facial video data; send the face vector data and audio data to a second client, so that the second client renders the face vector data of the first client user to generate a virtual avatar model corresponding to the first client user, displays the virtual avatar model of the first client user, and synchronously plays the sound in the audio data of the first client user; and send the interactive action information to the second client, so that the second client merges and renders the interactive action information with face data information of a second client user to generate a virtual avatar model corresponding to the second client user, and displays the virtual avatar model of the second client user while displaying the virtual avatar model of the first client user.
- The storage medium according to claim 16, wherein the processor-executable instructions stored in the storage medium further cause the processor to: when collecting the interactive action information received by the first client in the interactive UI component library, first receive an interactive action button selected by the first client user in the interactive UI component library, and identify the interactive action button to obtain the interactive action information, the interactive UI component library comprising: face-touch interactive action buttons and specific-function interactive action buttons.
- The storage medium according to claim 16, wherein the processor-executable instructions stored in the storage medium further cause the processor to: when sending the face vector data and the audio data to the second client, time-stamp the face vector data of the first client user and the audio data and send them to the second client, so that the second client displays the virtual avatar model of the first client user according to the time stamps and synchronously plays the sound in the audio data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/239,204 US9094571B2 (en) | 2012-04-11 | 2013-02-22 | Video chatting method and system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210104867.2 | 2012-04-11 | ||
CN201210104867.2A CN103368929B (zh) | 2012-04-11 | 2012-04-11 | A video chat method and system
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013152639A1 true WO2013152639A1 (zh) | 2013-10-17 |
Family
ID=49327067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/071793 WO2013152639A1 (zh) | 2012-04-11 | 2013-02-22 | A video chat method and system
Country Status (3)
Country | Link |
---|---|
US (1) | US9094571B2 (zh) |
CN (1) | CN103368929B (zh) |
WO (1) | WO2013152639A1 (zh) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103647922A (zh) * | 2013-12-20 | 2014-03-19 | 百度在线网络技术(北京)有限公司 | Virtual video call method and terminal
CN104735389B (zh) * | 2013-12-23 | 2018-08-31 | 联想(北京)有限公司 | Information processing method and information processing device
CN104301654A (zh) * | 2014-10-29 | 2015-01-21 | 四川智诚天逸科技有限公司 | Video communication system
CN105578108A (zh) * | 2014-11-05 | 2016-05-11 | 爱唯秀股份有限公司 | Electronic computing device, video call system and operating method thereof
CN106162042A (zh) * | 2015-04-13 | 2016-11-23 | 中兴通讯股份有限公司 | Video conference method, server and terminal
CN106303690A (zh) * | 2015-05-27 | 2017-01-04 | 腾讯科技(深圳)有限公司 | Video processing method and apparatus
CN105263040A (zh) * | 2015-10-08 | 2016-01-20 | 安徽理工大学 | Method for watching live sports broadcasts while saving mobile data
CN105554429A (zh) * | 2015-11-19 | 2016-05-04 | 掌赢信息科技(上海)有限公司 | Video call display method and video call device
CN105516638B (zh) * | 2015-12-07 | 2018-10-16 | 掌赢信息科技(上海)有限公司 | Video call method, apparatus and system
CN108234276B (zh) * | 2016-12-15 | 2020-01-14 | 腾讯科技(深圳)有限公司 | Method, terminal and system for interaction between virtual avatars
CN108076391A (zh) * | 2016-12-23 | 2018-05-25 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and electronic device for live-streaming scenarios
CN106937154A (zh) * | 2017-03-17 | 2017-07-07 | 北京蜜枝科技有限公司 | Method and apparatus for processing virtual avatars
CN109150690B (zh) * | 2017-06-16 | 2021-05-25 | 腾讯科技(深圳)有限公司 | Interaction data processing method and apparatus, computer device and storage medium
CN107438183A (zh) * | 2017-07-26 | 2017-12-05 | 北京暴风魔镜科技有限公司 | Virtual character live-streaming method, apparatus and system
CN109391792B (zh) * | 2017-08-03 | 2021-10-29 | 腾讯科技(深圳)有限公司 | Video communication method, apparatus, terminal and computer-readable storage medium
WO2019024068A1 (en) * | 2017-08-04 | 2019-02-07 | Xinova, LLC | SYSTEMS AND METHODS FOR DETECTING EMOTION IN VIDEO DATA
CN108377356B (zh) * | 2018-01-18 | 2020-07-28 | 上海掌门科技有限公司 | Virtual-portrait-based video call method, device and computer-readable medium
CN110278140B (zh) * | 2018-03-14 | 2022-05-24 | 阿里巴巴集团控股有限公司 | Communication method and apparatus
CN109101806A (zh) * | 2018-08-17 | 2018-12-28 | 浙江捷尚视觉科技股份有限公司 | Privacy portrait data annotation method based on style transfer
CN109302598B (zh) * | 2018-09-30 | 2021-08-31 | Oppo广东移动通信有限公司 | Data processing method, terminal, server and computer storage medium
US11356640B2 (en) * | 2019-05-09 | 2022-06-07 | Present Communications, Inc. | Method for securing synthetic video conference feeds |
US10958874B2 (en) * | 2019-05-09 | 2021-03-23 | Present Communications, Inc. | Video conferencing method |
US11095901B2 (en) | 2019-09-23 | 2021-08-17 | International Business Machines Corporation | Object manipulation video conference compression |
CN113099150B (zh) * | 2020-01-08 | 2022-12-02 | 华为技术有限公司 | Image processing method, device and system
US20230199147A1 (en) * | 2021-12-21 | 2023-06-22 | Snap Inc. | Avatar call platform |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20020033480A (ko) * | 2000-10-31 | 2002-05-07 | 박태철 | Video and voice chat system with game function
JP2003141563A (ja) * | 2001-10-31 | 2003-05-16 | Nippon Telegr & Teleph Corp <Ntt> | Face 3D computer graphics generation method, program and recording medium
CN101021899A (zh) * | 2007-03-16 | 2007-08-22 | 南京搜拍信息技术有限公司 | Interactive face recognition system and method using face and human-body auxiliary information
CN101930618A (zh) * | 2010-08-20 | 2010-12-29 | 李浩民 | Method for producing personalized two-dimensional animation
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008091485A2 (en) * | 2007-01-23 | 2008-07-31 | Euclid Discoveries, Llc | Systems and methods for providing personal video services |
US9544543B2 (en) * | 2011-02-11 | 2017-01-10 | Tangome, Inc. | Augmenting a video conference |
WO2012145340A2 (en) * | 2011-04-21 | 2012-10-26 | Shah Talukder | Flow-control based switched group video chat and real-time interactive broadcast |
- 2012-04-11: CN application CN201210104867.2A granted as patent CN103368929B (active)
- 2013-02-22: US application US14/239,204 granted as patent US9094571B2 (active)
- 2013-02-22: PCT application PCT/CN2013/071793 filed as WO2013152639A1 (application filing)
Also Published As
Publication number | Publication date |
---|---|
US20140192136A1 (en) | 2014-07-10 |
CN103368929A (zh) | 2013-10-23 |
US9094571B2 (en) | 2015-07-28 |
CN103368929B (zh) | 2016-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013152639A1 (zh) | Video chat method and system | |
WO2018161604A1 (zh) | Playback control method and apparatus for mobile terminal, storage medium and electronic device | |
WO2016165556A1 (zh) | Data processing method, apparatus and system for video streams | |
WO2019128174A1 (zh) | Audio playback method, smart TV and computer-readable storage medium | |
WO2020098462A1 (zh) | AR virtual character drawing method and apparatus, mobile terminal and storage medium | |
WO2013139239A1 (en) | Method for recommending users in social network and the system thereof | |
WO2014187158A1 (zh) | Control method for cloud sharing of terminal data, server and terminal | |
WO2016101698A1 (zh) | Method and system for screen pushing based on DLNA technology | |
WO2019192085A1 (zh) | Bank-enterprise direct-link communication method, apparatus, device and computer-readable storage medium | |
WO2016052814A1 (en) | Mobile terminal and method of controlling the same | |
WO2019147064A1 (ko) | Method and apparatus for transmitting and receiving audio data | |
JP2008067203A (ja) | Video composition apparatus, method and program | |
WO2019010926A1 (zh) | Advertisement push method, apparatus and computer-readable storage medium | |
WO2015170832A1 (ko) | Display apparatus and video call method thereof | |
WO2015057013A1 (ko) | Method and apparatus for a portable device to display information through a wearable device | |
WO2015154639A1 (en) | Method and apparatus for recording and replaying video of terminal | |
CN113419693B (zh) | Synchronous display method and system for multi-user trajectories | |
WO2017206377A1 (zh) | Method and apparatus for synchronously playing programs | |
WO2015139594A1 (en) | Security verification method, apparatus, and system | |
US20170185142A1 (en) | Method, system and smart glove for obtaining immersion in virtual reality system | |
WO2017054488A1 (zh) | Television playback control method, server and television playback control system | |
WO2017020649A1 (zh) | Audio and video playback control method and apparatus | |
WO2022037261A1 (zh) | Audio playback and device management method and apparatus | |
WO2018006581A1 (zh) | Playback method and apparatus for smart TV | |
WO2019210574A1 (zh) | Message processing method, apparatus, device and readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13775126 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14239204 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 18/12/2014) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13775126 Country of ref document: EP Kind code of ref document: A1 |