CN105657325A - Method, apparatus and system for video communication - Google Patents

Method, apparatus and system for video communication

Info

Publication number
CN105657325A
CN105657325A (application CN201610074923.0A)
Authority
CN
China
Prior art keywords
dynamic effect
terminal
target dynamic
video communication
described target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610074923.0A
Other languages
Chinese (zh)
Inventor
李志刚
柯红锋
傅雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201610074923.0A
Publication of CN105657325A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a method, an apparatus and a system for video communication, belonging to the technical field of computers. The method comprises the following steps: during video communication between a first terminal and a second terminal, determining a target dynamic effect to be used; sending a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and displaying the target dynamic effect. With the method, apparatus and system provided by the invention, the flexibility of video communication can be improved.

Description

Method, apparatus and system for video communication
Technical field
The present disclosure relates to the field of computer technology, and in particular to a method, an apparatus and a system for video communication.
Background technology
With the development of mobile terminal technology, mobile terminals are used ever more widely and their functions grow ever more powerful. People communicate wirelessly through mobile terminals in more and more ways: in addition to voice communication, they can also carry out video communication through their mobile terminals.
When a user wishes to hold a video call with another party, the user may tap the option corresponding to video communication to initiate the call. After the other party accepts, the user's mobile terminal captures the user's video data (for example image data and speech data) through its camera component and sends the captured data to the other party's mobile terminal, so that the other party can see the user's video on that terminal; likewise, the user can see the other party's video, thereby realizing video communication.
In the course of realizing the present disclosure, the inventors found at least the following problem:
During video communication the user can only see the other party's video; the video function is rather limited, so the flexibility of video communication is low.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus and a system for video communication. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a method for video communication, the method including:
in a process in which a first terminal performs video communication with a second terminal, determining a target dynamic effect to be used;
sending a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and
displaying the target dynamic effect.
Optionally, determining, in the process in which the first terminal performs video communication with the second terminal, the target dynamic effect to be used includes:
in the process in which the first terminal performs video communication with the second terminal, detecting, based on a first video communication image, action information or expression information of a user of the first terminal; and
determining, according to a pre-stored correspondence between action information or expression information and dynamic effects, the target dynamic effect corresponding to the detected action information or expression information.
In this way, the user can make the first terminal send a dynamic effect to the second terminal through certain expressions and actions, which improves the flexibility of video communication.
Optionally, determining, in the process in which the first terminal performs video communication with the second terminal, the target dynamic effect to be used includes:
in the process in which the first terminal performs video communication with the second terminal, determining a target dynamic effect selected from the dynamic effects provided.
In this way, the user can select the desired dynamic effect from the multiple dynamic effects provided, which improves the flexibility of video communication.
Optionally, determining, in the process in which the first terminal performs video communication with the second terminal, the target dynamic effect to be used includes:
in the process in which the first terminal performs video communication with the second terminal, acquiring voice information input by the user;
recognizing text information corresponding to the voice information; and
determining, according to a pre-stored correspondence between keywords and dynamic effects, the target dynamic effect corresponding to a first keyword contained in the text information.
In this way, the user can make the first terminal send a dynamic effect to the second terminal by saying certain specified words, which improves the flexibility of video communication.
Optionally, displaying the target dynamic effect includes:
displaying the target dynamic effect in full screen.
Optionally, displaying the target dynamic effect includes:
displaying the target dynamic effect in the first video communication image.
Optionally, displaying the target dynamic effect in the first video communication image includes:
determining, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
displaying the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In this way, different dynamic effects can be displayed in different regions, which makes video communication more engaging.
Optionally, the method further includes:
displaying the target dynamic effect in a second video communication image.
In this way, the target dynamic effect can be displayed in the first video communication image and the second video communication image at the same time, which improves the flexibility of video communication.
Optionally, displaying the target dynamic effect in the second video communication image includes:
determining, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect; and
displaying the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In this way, different dynamic effects can be displayed in different regions, which makes video communication more engaging.
According to a second aspect of the embodiments of the present disclosure, there is provided a method for video communication, the method including:
in a process in which a second terminal performs video communication with a first terminal, receiving a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect.
Optionally, acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect includes:
acquiring the target dynamic effect according to the identifier of the target dynamic effect, and displaying the target dynamic effect in full screen.
Optionally, acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect includes:
acquiring the target dynamic effect according to the identifier of the target dynamic effect, and displaying the target dynamic effect in a second video communication image.
Optionally, displaying the target dynamic effect in the second video communication image includes:
determining, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
displaying the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
Optionally, the method further includes:
displaying the target dynamic effect in a first video communication image.
Optionally, displaying the target dynamic effect in the first video communication image includes:
determining, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect; and
displaying the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
According to a third aspect of the embodiments of the present disclosure, there is provided a first terminal, the first terminal including:
a determining module, configured to determine, in a process in which the first terminal performs video communication with a second terminal, a target dynamic effect to be used;
a sending module, configured to send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and
a first display module, configured to display the target dynamic effect.
Optionally, the determining module includes:
a detection submodule, configured to detect, in the process in which the first terminal performs video communication with the second terminal and based on a first video communication image, action information or expression information of a user of the first terminal; and
a first determining submodule, configured to determine, according to a pre-stored correspondence between action information or expression information and dynamic effects, the target dynamic effect corresponding to the detected action information or expression information.
Optionally, the determining module is configured to:
determine, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect selected from the dynamic effects provided.
Optionally, the determining module includes:
an acquiring submodule, configured to acquire, in the process in which the first terminal performs video communication with the second terminal, voice information input by the user;
a recognition submodule, configured to recognize text information corresponding to the voice information; and
a second determining submodule, configured to determine, according to a pre-stored correspondence between keywords and dynamic effects, the target dynamic effect corresponding to a first keyword contained in the text information.
Optionally, the first display module is configured to:
display the target dynamic effect in full screen.
Optionally, the first display module is configured to:
display the target dynamic effect in the first video communication image.
Optionally, the first display module includes:
a third determining submodule, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a first display submodule, configured to display the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
Optionally, the first terminal further includes:
a second display module, configured to display the target dynamic effect in a second video communication image.
Optionally, the second display module includes:
a fourth determining submodule, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a second display submodule, configured to display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a second terminal, the second terminal including:
a receiving module, configured to receive, in a process in which the second terminal performs video communication with a first terminal, a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
a first display module, configured to acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
Optionally, the first display module is configured to:
acquire the target dynamic effect according to the identifier of the target dynamic effect, and display the target dynamic effect in full screen.
Optionally, the first display module is configured to:
acquire the target dynamic effect according to the identifier of the target dynamic effect, and display the target dynamic effect in a second video communication image.
Optionally, the first display module includes:
a first determining submodule, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a first display submodule, configured to display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
Optionally, the second terminal further includes:
a second display module, configured to display the target dynamic effect in a first video communication image.
Optionally, the second display module includes:
a second determining submodule, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a second display submodule, configured to display the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a first terminal, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
determine, in a process in which the first terminal performs video communication with a second terminal, a target dynamic effect to be used;
send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and
display the target dynamic effect.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a second terminal, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
receive, in a process in which the second terminal performs video communication with a first terminal, a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
According to a seventh aspect of the embodiments of the present disclosure, there is provided a system for video communication, the system including a first terminal and a second terminal, wherein:
the first terminal is configured to determine, in a process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used, send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect, and display the target dynamic effect; and
the second terminal is configured to receive, in the process in which the second terminal performs video communication with the first terminal, the dynamic effect notification sent by the first terminal, and acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects:
In the embodiments of the present disclosure, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying the identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, which improves the flexibility of video communication.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure. In the drawings:
Fig. 1 is a flowchart of a method for video communication according to an exemplary embodiment;
Fig. 2 is a schematic diagram of an interface display according to an exemplary embodiment;
Fig. 3 is a schematic diagram of an interface display according to an exemplary embodiment;
Fig. 4 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Fig. 5 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Fig. 6 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Fig. 7 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Fig. 8 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Fig. 9 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Figure 10 is a schematic structural diagram of a second terminal according to an exemplary embodiment;
Figure 11 is a schematic structural diagram of a second terminal according to an exemplary embodiment;
Figure 12 is a schematic structural diagram of a second terminal according to an exemplary embodiment;
Figure 13 is a schematic structural diagram of a second terminal according to an exemplary embodiment;
Figure 14 is a schematic structural diagram of a first terminal according to an exemplary embodiment;
Figure 15 is a schematic structural diagram of a second terminal according to an exemplary embodiment.
The above drawings show specific embodiments of the present disclosure, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concept in any way, but rather to illustrate the concept of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed description of the invention
Exemplary embodiments are described in detail herein, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; on the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
An exemplary embodiment of the present disclosure provides a method for video communication. The method may be applied in a terminal, where the terminal may be a mobile terminal such as a mobile phone or a tablet computer. The terminal may include a communication component for establishing a video communication connection with another terminal; a camera component for capturing the user's video information during video communication, where the video information may include the user's image information and voice information; a processor for determining a target dynamic effect to be used; a transceiver for sending a dynamic effect notification to a second terminal; a display unit for displaying the target dynamic effect; and a memory for storing multiple dynamic effects and the data produced in the above processing. In addition, the terminal may further include components such as Bluetooth and a power supply. In this embodiment, the two terminals performing video communication are referred to as the first terminal and the second terminal respectively. During video communication either terminal can send a dynamic effect to the other party; this embodiment is described with the first terminal sending a dynamic effect to the second terminal, and the case in which the second terminal sends a dynamic effect to the first terminal is similar and is not repeated.
As shown in Fig. 1, the processing flow of the method may include the following steps.
In step 101, in a process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used is determined.
In implementation, the user may install an application with a video communication function in a terminal (referred to as the first terminal). When the user wants to carry out video communication, the user may open this application on the first terminal, select in the friend list of the application the friend account with which to hold the video call, and tap the option corresponding to the video communication function. The first terminal then sends a first video communication request to the server of the application; this first video communication request may carry the terminal identifier of the first terminal and the account identifier of the friend account selected by the user. After receiving this request, the server may send a second video communication request to the terminal that is logged in to this friend account (referred to as the second terminal), on which the application may also be installed. After the second terminal receives the second video communication request, it may display a video communication prompt message; after the user of the second terminal selects the option accepting video communication, the first terminal establishes a video communication connection with the second terminal, thereby realizing video communication.
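For illustration only, a minimal sketch (in Kotlin) of the two request payloads exchanged in this setup flow is shown below. The disclosure does not specify the message structure or transport; all field names here are assumptions.

```kotlin
// Hypothetical signaling payloads for the call-setup flow described above.
// Field names and structure are assumptions, not part of the disclosure.
data class FirstVideoCallRequest(
    val callerTerminalId: String,   // terminal identifier of the first terminal
    val calleeAccountId: String     // friend account selected by the user
)

data class SecondVideoCallRequest(
    val callerTerminalId: String,   // forwarded by the server to the callee's terminal
    val callerAccountId: String
)

fun main() {
    // The server would forward a SecondVideoCallRequest to whichever
    // terminal is currently logged in to calleeAccountId.
    val request = FirstVideoCallRequest(callerTerminalId = "terminal-001", calleeAccountId = "friend-42")
    println("first terminal -> server: $request")
}
```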
Multiple dynamic effects may be pre-stored in the above application. In the process in which the first terminal performs video communication with the second terminal, the user can send a dynamic effect to the other party through the first terminal to make the call more engaging. The first terminal may determine the target dynamic effect to be used in a variety of ways; this embodiment provides several feasible ways.
Mode one: in the process in which the first terminal performs video communication with the second terminal, detect, based on a first video communication image, action information or expression information of the user of the first terminal; then determine, according to a pre-stored correspondence between action information or expression information and dynamic effects, the target dynamic effect corresponding to the detected action information or expression information.
Here, the first video communication image may be the image captured by the first terminal during its video communication with the second terminal.
In implementation, the first terminal may pre-store code for image detection. During video communication with the second terminal, the first terminal may run image detection on the user's video communication image; any existing image detection algorithm may be used.
An action information set and an expression information set may be pre-stored in the first terminal. For example, the action information set may include hug action information and kiss action information, and the expression information set may include smile expression information, angry expression information, sobbing expression information, and so on. The first terminal may detect the action information or expression information of its user in real time and determine whether the detected action or expression is contained in the pre-stored sets. If it is, the first terminal may determine, according to the pre-stored correspondence between action information or expression information and dynamic effects, the target dynamic effect corresponding to the detected action information or expression information. For example, if the user of the first terminal makes a pouting (kissing) action, the first terminal detects the pouting action information and determines the corresponding target dynamic effect, which may be a dynamic effect of hearts flying from far to near.
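For illustration only, the following minimal sketch (in Kotlin) shows how a detected action or expression cue might be mapped to a dynamic effect identifier under mode one. The detector itself is out of scope, and the enum values and effect identifiers are assumptions rather than part of the disclosure.

```kotlin
// Cues assumed to be produced by the image-detection step on the first video communication image.
enum class DetectedCue { POUT_KISS, HUG, SMILE, ANGRY, SOB }

// Pre-stored correspondence between action/expression information and dynamic effects.
val cueToEffectId: Map<DetectedCue, String> = mapOf(
    DetectedCue.POUT_KISS to "effect_heart_fly_in",   // hearts flying from far to near
    DetectedCue.HUG to "effect_hug",
    DetectedCue.SMILE to "effect_sparkle"
)

// Returns the target dynamic effect identifier, or null when the cue has no mapped effect.
fun resolveTargetEffect(detected: DetectedCue?): String? =
    detected?.let { cueToEffectId[it] }
```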
Mode two: in the process in which the first terminal performs video communication with the second terminal, determine a target dynamic effect selected from the dynamic effects provided.
In implementation, during video communication with the second terminal, the first terminal may display an option corresponding to using dynamic effects. When the user taps this option, the first terminal displays a dynamic effect list showing each dynamic effect provided by the application. When the user taps the desired target dynamic effect in this list, the terminal receives the selection instruction corresponding to that dynamic effect and thereby determines the target dynamic effect selected by the user.
Mode three: in the process in which the first terminal performs video communication with the second terminal, acquire voice information input by the user; recognize text information corresponding to the voice information; and determine, according to a pre-stored correspondence between keywords and dynamic effects, the target dynamic effect corresponding to a first keyword contained in the text information.
In implementation, the first terminal may pre-store code for speech recognition. During video communication with the second terminal, the first terminal may acquire the voice information input by the user and recognize the corresponding text information; any existing speech recognition algorithm may be used. The first terminal may recognize in real time the text information corresponding to the user's voice input, then segment the text and determine the words it contains. The first terminal may also pre-store, for each dynamic effect provided by the application, a corresponding keyword set. For example, the keyword set corresponding to the flying-heart dynamic effect may include "love you", "give me a kiss" and so on.
The first terminal may determine whether the text information contains any keyword in the pre-stored keyword sets. If it does, the first terminal may determine, according to the pre-stored correspondence between keywords and dynamic effects, the target dynamic effect corresponding to the keyword contained in the text information (i.e. the first keyword). For example, if the voice information input by the user is "love you", the first terminal determines that the text contains the first keyword "love you" and determines the corresponding target dynamic effect, which may be hearts flying from far to near. Or, if the voice information input by the user is "Christmas is drawing near", the first terminal recognizes the text "Christmas is drawing near", determines that it contains the first keyword "Christmas", and determines the corresponding target dynamic effect, which may be a Christmas tree rising from the bottom of the screen.
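For illustration only, a minimal sketch (in Kotlin) of the keyword-matching step in mode three follows. The speech-to-text step and the keyword lists are assumed; only the lookup from recognized text to a dynamic effect identifier is shown.

```kotlin
// Pre-stored correspondence between keywords and dynamic effect identifiers (assumed values).
val keywordToEffectId: Map<String, String> = mapOf(
    "love you" to "effect_heart_fly_in",
    "give me a kiss" to "effect_heart_fly_in",
    "Christmas" to "effect_christmas_tree_rise"
)

// Returns the effect for the first keyword found in the recognized text, or null if none matches.
fun resolveEffectFromText(recognizedText: String): String? =
    keywordToEffectId.entries
        .firstOrNull { (keyword, _) -> recognizedText.contains(keyword, ignoreCase = true) }
        ?.value

fun main() {
    println(resolveEffectFromText("Christmas is drawing near"))  // effect_christmas_tree_rise
}
```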
In step 102, the first terminal sends a dynamic effect notification to the second terminal.
In implementation, after determining the target dynamic effect to be used, the first terminal may send a dynamic effect notification to the second terminal; this notification may carry the identifier of the target dynamic effect. In addition, the notification may also carry the terminal identifier of the first terminal and the terminal identifier of the second terminal.
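For illustration only, a minimal sketch (in Kotlin) of the dynamic effect notification follows. The disclosure states only that it carries the identifier of the target dynamic effect and, optionally, the two terminal identifiers; the concrete structure and any serialization format are assumptions.

```kotlin
// Hypothetical payload of the dynamic effect notification sent from the first terminal
// to the second terminal; field names are assumptions.
data class DynamicEffectNotification(
    val effectId: String,            // identifier of the target dynamic effect
    val senderTerminalId: String?,   // terminal identifier of the first terminal (optional)
    val receiverTerminalId: String?  // terminal identifier of the second terminal (optional)
)
```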
In step 103, the target dynamic effect is displayed.
In implementation, after determining the target dynamic effect to be used, the first terminal may acquire the target dynamic effect and then display it.
Optionally, the dynamic effect may be displayed in full screen; accordingly, the processing of step 103 may be: display the target dynamic effect in full screen.
In implementation, during video communication the first terminal may display the first video communication image and a second video communication image, where the second video communication image may be the image captured by the second terminal during its video communication with the first terminal. The display window of the second video communication image may be a large window, for example displayed in full screen, and the display window of the first video communication image may be a small window, for example a small window at the lower-right corner of the screen. The first terminal may display the target dynamic effect over the whole screen display area, for example hearts flying from far to near across the whole screen, as shown in Fig. 2.
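For illustration only, the following minimal sketch (in Kotlin) lays out the windows described above: the peer's image fills the screen, the local image sits in a small window at the lower-right corner, and a full-screen effect is drawn over the whole screen area. The specific window sizes are assumptions.

```kotlin
data class Rect(val left: Int, val top: Int, val width: Int, val height: Int)

// Returns (remote big window, local small window, full-screen effect area) for a given screen size.
fun callLayout(screenW: Int, screenH: Int): Triple<Rect, Rect, Rect> {
    val remoteWindow = Rect(0, 0, screenW, screenH)                        // second video communication image, full screen
    val localWindow = Rect(screenW - screenW / 4, screenH - screenH / 4,
                           screenW / 4, screenH / 4)                       // first video communication image, lower-right corner
    val fullScreenEffectArea = Rect(0, 0, screenW, screenH)                // target dynamic effect drawn over everything
    return Triple(remoteWindow, localWindow, fullScreenEffectArea)
}
```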
Optionally, the dynamic effect may be displayed in the first video communication image; accordingly, the processing of step 103 may be: display the target dynamic effect in the first video communication image.
In implementation, the first terminal may determine the range corresponding to the user's own video communication image (the small window in the screen) and display the target dynamic effect within that range, as shown in Fig. 3. In this way, the first terminal can display the target dynamic effect in the first video communication image and in the second video communication image respectively, which makes video communication more engaging.
Optionally, different dynamic effects may be displayed at different positions; accordingly, the processing of step 103 may be: determine, according to a pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect; and display the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In implementation, the first terminal may pre-store code for face recognition. The first terminal may perform face recognition on the captured video communication image, detect the body parts contained in the image and determine the position of each recognized body part, for example the positions of the eyes, the mouth and the top of the head in the video communication image. Any existing face recognition algorithm may be used.
After the first terminal determines the target dynamic effect to be used, it may determine, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect, determine the position of that body part based on the face recognition above, and then display the target dynamic effect using the display area of the target body part as the display area of the effect. For example, if the target dynamic effect is hearts flying from far to near, the hearts may be displayed at the mouth position in the first video communication image. In addition, if the first terminal does not detect the target body part in the first video communication image, it may display the target dynamic effect at a preset position, which may be the middle of the first video communication image.
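For illustration only, a minimal sketch (in Kotlin) of this placement logic follows: the pre-stored effect-to-body-part correspondence selects a target part, the face-recognition result supplies that part's region, and the effect falls back to the centre of the image when the part is not detected. All names are assumptions, and the face-recognition step itself is represented here only by a map lookup.

```kotlin
enum class BodyPart { MOUTH, EYES, HEAD_TOP }

// Pre-stored correspondence between dynamic effects and body parts (assumed values).
val effectToBodyPart: Map<String, BodyPart> = mapOf(
    "effect_heart_fly_in" to BodyPart.MOUTH
)

data class Region(val x: Int, val y: Int, val w: Int, val h: Int)

// Chooses the display area of the effect within a video communication image.
fun effectDisplayRegion(
    effectId: String,
    detectedParts: Map<BodyPart, Region>,  // positions produced by face recognition on the image
    imageCenter: Region                    // preset fallback position (middle of the image)
): Region {
    val targetPart = effectToBodyPart[effectId] ?: return imageCenter
    return detectedParts[targetPart] ?: imageCenter
}
```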
Optionally, the first terminal may also display the target dynamic effect in the second video communication image; the corresponding processing may be: display the target dynamic effect in the second video communication image.
Here, the second video communication image may be the image captured by the second terminal during its video communication with the first terminal.
In implementation, the first terminal may determine the display range of the second video communication image and then display the target dynamic effect within that range.
Optionally, in the second video communication image different dynamic effects may be displayed at different positions; the corresponding processing may be: determine, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect; and display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In implementation, after the first terminal determines the target dynamic effect to be used, it may determine, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect, determine the position of that body part based on the face recognition above, and then display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the effect. For example, if the target dynamic effect is hearts flying from far to near, the hearts may be displayed at the mouth position in the second video communication image. In addition, if the first terminal does not detect the target body part in the second video communication image, it may display the target dynamic effect at a preset position, which may be the middle of the second video communication image.
In step 104, in the process in which the second terminal performs video communication with the first terminal, the second terminal receives the dynamic effect notification sent by the first terminal.
In implementation, after the first terminal determines the target dynamic effect to be used, it may send a dynamic effect notification carrying the identifier of the target dynamic effect to the second terminal. The second terminal may receive this notification, parse it and obtain the identifier of the target dynamic effect for subsequent processing.
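For illustration only, the following minimal sketch (in Kotlin) shows steps 104 and 105 on the second terminal: the effect identifier obtained from the notification is looked up in the locally stored effect set and handed to the display logic. The storage and rendering interfaces are assumptions.

```kotlin
// Hypothetical local store of pre-installed dynamic effects, keyed by identifier.
class EffectStore(private val effects: Map<String, ByteArray>) {
    fun load(effectId: String): ByteArray? = effects[effectId]
}

// Called when a dynamic effect notification is received and parsed.
fun onDynamicEffectNotification(
    effectId: String,                // identifier carried by the notification
    store: EffectStore,
    render: (ByteArray) -> Unit      // display in full screen or inside a video communication image
) {
    store.load(effectId)?.let(render)
}
```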
In step 105, the target dynamic effect is acquired and displayed according to its identifier.
In implementation, after obtaining the identifier of the target dynamic effect, the second terminal may acquire the target dynamic effect according to the identifier and then display it; the target dynamic effect displayed by the second terminal is the same as that displayed by the first terminal.
Optionally, the dynamic effect may be displayed in full screen; accordingly, the processing of step 105 may be: acquire the target dynamic effect according to its identifier and display the target dynamic effect in full screen.
In implementation, during video communication the second terminal may display the first video communication image and the second video communication image. On the second terminal, the display window of the first video communication image may be a large window, for example displayed in full screen, and the display window of the second video communication image may be a small window, for example a small window at the lower-right corner of the screen. The second terminal may display the target dynamic effect over the whole screen display area, for example hearts flying from far to near across the whole screen, as shown in Fig. 2. In addition, the initial display position of the target object contained in the target dynamic effect may be within the first video communication image, and the object may then be displayed over the whole screen while gradually growing larger; for example, the target dynamic effect may be hearts flying from far to near, with the hearts flying out of the first video communication image and finally displayed across the full screen.
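For illustration only, a minimal sketch (in Kotlin) of the "grow from the first video communication image to full screen" presentation mentioned above follows: the effect's bounding box is interpolated from the small window to the whole screen. The linear interpolation and the frame-by-frame driving of it are assumptions; a production implementation would more likely use the platform's animation framework.

```kotlin
data class Box(val x: Float, val y: Float, val w: Float, val h: Float)

// Interpolates the effect's bounding box from the small window (t = 0) to full screen (t = 1).
fun interpolateBox(from: Box, to: Box, t: Float): Box {
    val clamped = t.coerceIn(0f, 1f)
    fun lerp(a: Float, b: Float) = a + (b - a) * clamped
    return Box(lerp(from.x, to.x), lerp(from.y, to.y), lerp(from.w, to.w), lerp(from.h, to.h))
}
```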
Optionally, the dynamic effect may be displayed in the second video communication image; accordingly, the processing of step 105 may be: acquire the target dynamic effect according to its identifier and display the target dynamic effect in the second video communication image.
In implementation, the second terminal may determine the display range of the user's own video communication image (the small window in the screen) and display the target dynamic effect within that range, as shown in Fig. 3. In this way, the second terminal can display the target dynamic effect in the first video communication image and in the second video communication image respectively, which makes video communication more engaging.
Optionally, different dynamic effects may be displayed at different positions; accordingly, the processing of step 105 may be: determine, according to a pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect; and display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In implementation, the second terminal may pre-store code for face recognition. The second terminal may perform face recognition on the captured video communication image, detect each body part contained in the image and determine the position of each body part, for example the positions of the eyes, the mouth and the top of the head in the video communication image. Any existing face recognition algorithm may be used.
After the second terminal obtains the identifier of the target dynamic effect, it may determine, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect, determine the position of that body part based on the face recognition above, and then display the target dynamic effect using the display area of the target body part as the display area of the effect. For example, if the target dynamic effect is hearts flying from far to near, the hearts may be displayed at the mouth position in the second video communication image. In addition, if the second terminal does not detect the target body part in the second video communication image, it may display the target dynamic effect at a preset position, which may be the middle of the second video communication image.
Optionally, the second terminal may also display the target dynamic effect in the first video communication image; the corresponding processing may be: display the target dynamic effect in the first video communication image.
In implementation, the second terminal may determine the range of the first video communication image and then display the target dynamic effect within it.
Optionally, in the first video communication image different dynamic effects may be displayed at different positions; the corresponding processing may be: determine, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect; and display the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In implementation, after the second terminal determines the target dynamic effect, it may determine, according to the pre-stored correspondence between dynamic effects and body parts, the target body part corresponding to the target dynamic effect, determine the position of that body part based on the face recognition above, and then display the target dynamic effect using the display area of the target body part as the display area of the effect. For example, if the target dynamic effect is hearts flying from far to near, the hearts may be displayed at the mouth position in the first video communication image. In addition, if the second terminal does not detect the target body part in the first video communication image, it may display the target dynamic effect at a preset position, which may be the middle of the first video communication image.
It should be noted that step 103 and steps 104 to 105 may be performed in no particular order.
In the embodiments of the present disclosure, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying the identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, which improves the flexibility of video communication.
An exemplary embodiment of the present disclosure provides a first terminal. As shown in Fig. 4, the first terminal includes a determining module 410, a sending module 420 and a first display module 430.
The determining module 410 is configured to determine, in a process in which the first terminal performs video communication with a second terminal, a target dynamic effect to be used.
The sending module 420 is configured to send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect.
The first display module 430 is configured to display the target dynamic effect.
Optionally, as shown in Fig. 5, the determining module 410 includes:
a detection submodule 411, configured to detect, in the process in which the first terminal performs video communication with the second terminal and based on a first video communication image, action information or expression information of a user of the first terminal; and
a first determining submodule 412, configured to determine, according to a pre-stored correspondence between action information or expression information and dynamic effects, the target dynamic effect corresponding to the detected action information or expression information.
Optionally, the determining module 410 is configured to:
determine, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect selected from the dynamic effects provided.
Optionally, as shown in Fig. 6, the determining module 410 includes:
an acquiring submodule 413, configured to acquire, in the process in which the first terminal performs video communication with the second terminal, voice information input by the user;
a recognition submodule 414, configured to recognize text information corresponding to the voice information; and
a second determining submodule 415, configured to determine, according to a pre-stored correspondence between keywords and dynamic effects, the target dynamic effect corresponding to a first keyword contained in the text information.
Optionally, the first display module 430 is configured to:
display the target dynamic effect in full screen.
Optionally, the first display module 430 is configured to:
display the target dynamic effect in the first video communication image.
Optionally, as shown in Fig. 7, the first display module 430 includes:
a third determining submodule 431, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a first display submodule 432, configured to display the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
Optionally, as shown in Fig. 8, the first terminal further includes:
a second display module 440, configured to display the target dynamic effect in a second video communication image.
Optionally, as shown in Fig. 9, the second display module 440 includes:
a fourth determining submodule 441, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a second display submodule 442, configured to display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In the embodiments of the present disclosure, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying the identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, which improves the flexibility of video communication.
An exemplary embodiment of the present disclosure provides a second terminal. As shown in Fig. 10, the second terminal includes:
a receiving module 1010, configured to receive, in a process in which the second terminal performs video communication with a first terminal, a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
a first display module 1020, configured to acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
Optionally, the first display module 1020 is configured to:
acquire the target dynamic effect according to the identifier of the target dynamic effect, and display the target dynamic effect in full screen.
Optionally, the first display module 1020 is configured to:
acquire the target dynamic effect according to the identifier of the target dynamic effect, and display the target dynamic effect in a second video communication image.
Optionally, as shown in Fig. 11, the first display module 1020 includes:
a first determining submodule 1021, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a first display submodule 1022, configured to display the target dynamic effect in the second video communication image, using the display area of the target body part as the display area of the target dynamic effect.
Optionally, as shown in Fig. 12, the second terminal further includes:
a second display module 1030, configured to display the target dynamic effect in a first video communication image.
Optionally, as shown in Fig. 13, the second display module 1030 includes:
a second determining submodule 1031, configured to determine, according to a pre-stored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a second display submodule 1032, configured to display the target dynamic effect in the first video communication image, using the display area of the target body part as the display area of the target dynamic effect.
In the embodiments of the present disclosure, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying the identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, which improves the flexibility of video communication.
An exemplary embodiment of the present disclosure provides a system for video communication. The system includes a first terminal and a second terminal, wherein:
the first terminal is configured to determine, in a process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used, send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect, and display the target dynamic effect; and
the second terminal is configured to receive, in the process in which the second terminal performs video communication with the first terminal, the dynamic effect notification sent by the first terminal, and acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
In the embodiments of the present disclosure, in the process in which the first terminal performs video communication with the second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying the identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, which improves the flexibility of video communication.
An embodiment of the present disclosure also shows a schematic structural diagram of a first terminal, where the first terminal may be a mobile terminal such as a mobile phone or a tablet computer.
Referring to Figure 14, the first terminal 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 and a communication component 816.
The processing component 802 generally controls the overall operation of the first terminal 800, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and the other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the first terminal 800. Examples of such data include instructions for any application or method operated on the first terminal 800, contact data, phonebook data, messages, pictures, videos and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power component 806 provides power to the various components of the first terminal 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the first terminal 800.
The multimedia component 808 includes a screen providing an output interface between the first terminal 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the first terminal 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front and rear cameras may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the first terminal 800 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the first terminal 800. For example, the sensor component 814 may detect the open/closed state of the first terminal 800 and the relative positioning of components, for example the display and keypad of the first terminal 800; the sensor component 814 may also detect a change in position of the first terminal 800 or of a component of the first terminal 800, the presence or absence of user contact with the first terminal 800, the orientation or acceleration/deceleration of the first terminal 800 and a change in the temperature of the first terminal 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the first terminal 800 and other devices. The first terminal 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the first terminal 800 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, which are executable by the processor 820 of the first terminal 800 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device and so on.
Provided is a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor of a first terminal, the first terminal is enabled to perform the above method, and the method includes:
during video communication between the first terminal and a second terminal, determining a target dynamic effect to be used;
sending a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect;
displaying the target dynamic effect. An illustrative sketch of this flow is given below.
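The following plain Kotlin sketch is illustrative only and is not part of the disclosure: the names `DynamicEffectNotification`, `callId`, `effectId`, `sendToPeer`, and `render` are assumptions introduced here, and the notification only needs to carry the identifier of the target dynamic effect, as described above.

```kotlin
import java.io.Serializable

// Hypothetical wire format for the dynamic effect notification: it carries the
// identifier of the target dynamic effect plus an identifier of the ongoing call.
data class DynamicEffectNotification(
    val callId: String,   // identifies the ongoing video communication session (assumed field)
    val effectId: String  // identifier of the target dynamic effect
) : Serializable

// Minimal first-terminal flow: determine the effect, notify the peer, display locally.
// `sendToPeer` and `render` stand in for the terminal's signalling channel and UI layer.
class FirstTerminal(
    private val sendToPeer: (DynamicEffectNotification) -> Unit,
    private val render: (effectId: String) -> Unit
) {
    fun useDynamicEffect(callId: String, effectId: String) {
        sendToPeer(DynamicEffectNotification(callId, effectId)) // send the dynamic effect notification
        render(effectId)                                        // display the target dynamic effect locally
    }
}

fun main() {
    val terminal = FirstTerminal(
        sendToPeer = { println("sending notification: $it") },
        render = { println("displaying effect $it locally") }
    )
    terminal.useDynamicEffect(callId = "call-42", effectId = "effect_hearts")
}
```

In this sketch the display step runs immediately after the notification is sent, so both terminals can show the effect at roughly the same time; the disclosure does not prescribe a particular ordering.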
Optionally, determining the target dynamic effect to be used during video communication between the first terminal and the second terminal includes:
during video communication between the first terminal and the second terminal, detecting action information or expression information of a user of the first terminal based on the first video communication image;
determining, according to a prestored correspondence between action information or expression information and dynamic effects, a target dynamic effect corresponding to the detected action information or expression information. An illustrative sketch of this correspondence lookup follows.
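A minimal sketch of the prestored correspondence lookup, in plain Kotlin. The expression labels and the map contents are assumptions for illustration; the disclosure does not fix how action or expression information is represented.

```kotlin
// Hypothetical labels that a face/gesture detector might emit for the first terminal's user.
enum class Expression { SMILE, WINK, THUMBS_UP, NONE }

// Prestored correspondence between expression/action information and dynamic effects (illustrative contents).
val expressionToEffect: Map<Expression, String> = mapOf(
    Expression.SMILE to "effect_hearts",
    Expression.WINK to "effect_stars",
    Expression.THUMBS_UP to "effect_fireworks"
)

// Returns the identifier of the target dynamic effect corresponding to the detected
// expression, or null if no correspondence is prestored for it.
fun determineTargetEffect(detected: Expression): String? = expressionToEffect[detected]

fun main() {
    println(determineTargetEffect(Expression.SMILE)) // effect_hearts
    println(determineTargetEffect(Expression.NONE))  // null: no effect is triggered
}
```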
Optionally, determining the target dynamic effect to be used during video communication between the first terminal and the second terminal includes:
during video communication between the first terminal and the second terminal, determining a target dynamic effect selected by the user from the provided dynamic effects.
Optionally, determining the target dynamic effect to be used during video communication between the first terminal and the second terminal includes:
during video communication between the first terminal and the second terminal, acquiring voice information input by the user;
recognizing text information corresponding to the voice information;
determining, according to a prestored correspondence between keywords and dynamic effects, a target dynamic effect corresponding to a first keyword contained in the text information. An illustrative sketch of this keyword matching follows.
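A minimal sketch of matching the recognized text against a prestored keyword-to-effect correspondence, in plain Kotlin. The keyword table and effect identifiers are assumptions; speech recognition itself is assumed to have already produced the text.

```kotlin
// Prestored correspondence between keywords and dynamic effects (illustrative contents).
val keywordToEffect: Map<String, String> = mapOf(
    "happy birthday" to "effect_cake",
    "congratulations" to "effect_confetti",
    "miss you" to "effect_hearts"
)

// Returns the effect mapped to the keyword that appears earliest in the text recognized
// from the user's voice input, or null if no prestored keyword occurs in it.
fun matchEffectByKeyword(recognizedText: String): String? {
    val text = recognizedText.lowercase()
    return keywordToEffect.entries
        .mapNotNull { (keyword, effect) ->
            val pos = text.indexOf(keyword)
            if (pos >= 0) pos to effect else null
        }
        .minByOrNull { it.first }  // pick the keyword occurring first in the text
        ?.second
}

fun main() {
    println(matchEffectByKeyword("Hey, happy birthday to you!")) // effect_cake
    println(matchEffectByKeyword("See you tomorrow"))            // null
}
```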
Optionally, displaying the target dynamic effect includes:
displaying the target dynamic effect in full screen.
Optionally, displaying the target dynamic effect includes:
displaying the target dynamic effect in the first video communication image.
Optionally, displaying the target dynamic effect in the first video communication image includes:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect;
in the first video communication image, displaying the target dynamic effect with the display region of the target body part as the display region of the target dynamic effect. An illustrative sketch of this anchoring follows.
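A minimal sketch of anchoring the effect to a body part's display region, in plain Kotlin. The `Rect` type, the body-part labels, and the detector that produces body-part regions are assumptions; the disclosure only requires that the effect be drawn in the region of the corresponding target body part.

```kotlin
// Simple rectangle in image coordinates; stands in for whatever region type the
// terminal's face/body detector produces (assumed representation).
data class Rect(val left: Int, val top: Int, val width: Int, val height: Int)

// Prestored correspondence between dynamic effects and body parts (illustrative contents).
val effectToBodyPart: Map<String, String> = mapOf(
    "effect_hearts" to "face",
    "effect_halo" to "head",
    "effect_fireworks" to "hand"
)

// Given the regions of body parts detected in the video communication image, returns the
// region in which the target dynamic effect should be displayed, or null if the
// corresponding target body part was not detected.
fun effectDisplayRegion(effectId: String, detectedBodyParts: Map<String, Rect>): Rect? {
    val targetBodyPart = effectToBodyPart[effectId] ?: return null
    return detectedBodyParts[targetBodyPart]
}

fun main() {
    val detected = mapOf("face" to Rect(120, 80, 200, 200))
    println(effectDisplayRegion("effect_hearts", detected)) // Rect(left=120, top=80, width=200, height=200)
    println(effectDisplayRegion("effect_halo", detected))   // null: head not detected in this frame
}
```

The same lookup can be reused for the second video communication image; only the detected body-part regions differ.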
Optionally, the method further includes:
displaying the target dynamic effect in the second video communication image.
Optionally, displaying the target dynamic effect in the second video communication image includes:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect;
in the second video communication image, displaying the target dynamic effect with the display region of the target body part as the display region of the target dynamic effect.
In the embodiments of the present disclosure, during video communication between a first terminal and a second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying an identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, thereby improving the flexibility of video communication.
An embodiment of the present disclosure further provides a schematic structural diagram of a second terminal. The second terminal may be a mobile phone or the like.
Referring to Figure 15, the second terminal 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls the overall operation of the second terminal 900, such as operations associated with display, telephone calls, data communication, camera operation, and recording operation. The processing component 902 may include one or more processors 920 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 902 may include one or more modules to facilitate interaction between the processing component 902 and other components. For example, the processing component 902 may include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the second terminal 900. Examples of such data include instructions of any application program or method operated on the second terminal 900, contact data, phonebook data, messages, pictures, videos, and the like. The memory 904 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The power component 906 provides power to the various components of the second terminal 900. The power component 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the second terminal 900.
The multimedia component 908 includes a screen that provides an output interface between the second terminal 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera. When the second terminal 900 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or may have focus and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a microphone (MIC). When the second terminal 900 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 904 or sent via the communication component 916.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the second terminal 900. For example, the sensor component 914 may detect the on/off state of the second terminal 900 and the relative positioning of components, for example, the display and the keypad of the second terminal 900. The sensor component 914 may also detect a change in position of the second terminal 900 or of a component of the second terminal 900, the presence or absence of user contact with the second terminal 900, the orientation or acceleration/deceleration of the second terminal 900, and a change in temperature of the second terminal 900. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the second terminal 900 and other devices. The second terminal 900 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the second terminal 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example, the memory 904 including instructions, where the instructions are executable by the processor 920 of the second terminal 900 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Provided is a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor of a second terminal, the second terminal is enabled to perform the above method, and the method includes:
during video communication between the second terminal and a first terminal, receiving a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect;
acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect. An illustrative sketch of this receiving-side flow is given below.
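A minimal sketch of the second terminal's handling of a received notification, in plain Kotlin. The local cache, the `fetchFromServer` fallback, and the `render` callback are assumptions; the disclosure only states that the target dynamic effect is acquired according to its identifier and then displayed.

```kotlin
// Second-terminal handling of a received dynamic effect notification: look the effect up
// by its identifier, fetch it if it is not stored locally, then display it.
class SecondTerminal(
    private val localEffects: MutableMap<String, ByteArray> = mutableMapOf(),
    private val fetchFromServer: (effectId: String) -> ByteArray, // assumed fallback source
    private val render: (effectData: ByteArray) -> Unit           // stands in for the UI layer
) {
    fun onEffectNotification(effectId: String) {
        // acquire the target dynamic effect according to its identifier
        val effectData = localEffects.getOrPut(effectId) { fetchFromServer(effectId) }
        // display the target dynamic effect
        render(effectData)
    }
}

fun main() {
    val terminal = SecondTerminal(
        fetchFromServer = { id -> println("downloading $id"); ByteArray(0) },
        render = { println("displaying effect (${it.size} bytes)") }
    )
    terminal.onEffectNotification("effect_hearts")
    terminal.onEffectNotification("effect_hearts") // second time served from the local cache
}
```

Sending only the identifier keeps the notification small; the effect resource itself can be preinstalled or fetched once and cached, as sketched above.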
Optionally, acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect includes:
acquiring the target dynamic effect according to the identifier of the target dynamic effect, and displaying the target dynamic effect in full screen.
Optionally, acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect includes:
acquiring the target dynamic effect according to the identifier of the target dynamic effect, and displaying the target dynamic effect in the second video communication image.
Optionally, displaying the target dynamic effect in the second video communication image includes:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect;
in the second video communication image, displaying the target dynamic effect with the display region of the target body part as the display region of the target dynamic effect.
Optionally, the method further includes:
displaying the target dynamic effect in the first video communication image.
Optionally, displaying the target dynamic effect in the first video communication image includes:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect;
in the first video communication image, displaying the target dynamic effect with the display region of the target body part as the display region of the target dynamic effect.
In the embodiments of the present disclosure, during video communication between a first terminal and a second terminal, a target dynamic effect to be used is determined, a dynamic effect notification carrying an identifier of the target dynamic effect is sent to the second terminal, and the target dynamic effect is displayed. In this way, a user can send a dynamic effect to the other party during video communication, thereby improving the flexibility of video communication.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the disclosure disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structure described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (33)

1. A method for video communication, characterized in that the method comprises:
during video communication between a first terminal and a second terminal, determining a target dynamic effect to be used;
sending a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and
displaying the target dynamic effect.
2. The method according to claim 1, characterized in that determining the target dynamic effect to be used during video communication between the first terminal and the second terminal comprises:
during video communication between the first terminal and the second terminal, detecting action information or expression information of a user of the first terminal based on a first video communication image; and
determining, according to a prestored correspondence between action information or expression information and dynamic effects, a target dynamic effect corresponding to the detected action information or expression information.
3. The method according to claim 1, characterized in that determining the target dynamic effect to be used during video communication between the first terminal and the second terminal comprises:
during video communication between the first terminal and the second terminal, determining a target dynamic effect selected from provided dynamic effects.
4. The method according to claim 1, characterized in that determining the target dynamic effect to be used during video communication between the first terminal and the second terminal comprises:
during video communication between the first terminal and the second terminal, acquiring voice information input by the user;
recognizing text information corresponding to the voice information; and
determining, according to a prestored correspondence between keywords and dynamic effects, a target dynamic effect corresponding to a first keyword contained in the text information.
5. The method according to claim 1, characterized in that displaying the target dynamic effect comprises:
displaying the target dynamic effect in full screen.
6. The method according to claim 1, characterized in that displaying the target dynamic effect comprises:
displaying the target dynamic effect in a first video communication image.
7. The method according to claim 6, characterized in that displaying the target dynamic effect in the first video communication image comprises:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
in the first video communication image, displaying the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
8. The method according to claim 6, characterized in that the method further comprises:
displaying the target dynamic effect in a second video communication image.
9. The method according to claim 8, characterized in that displaying the target dynamic effect in the second video communication image comprises:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
in the second video communication image, displaying the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
10. A method for video communication, characterized in that the method comprises:
during video communication between a second terminal and a first terminal, receiving a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect.
11. The method according to claim 10, characterized in that acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect comprises:
acquiring the target dynamic effect according to the identifier of the target dynamic effect, and displaying the target dynamic effect in full screen.
12. The method according to claim 10, characterized in that acquiring and displaying the target dynamic effect according to the identifier of the target dynamic effect comprises:
acquiring the target dynamic effect according to the identifier of the target dynamic effect, and displaying the target dynamic effect in a second video communication image.
13. The method according to claim 12, characterized in that displaying the target dynamic effect in the second video communication image comprises:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
in the second video communication image, displaying the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
14. The method according to claim 12, characterized in that the method further comprises:
displaying the target dynamic effect in a first video communication image.
15. The method according to claim 14, characterized in that displaying the target dynamic effect in the first video communication image comprises:
determining, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
in the first video communication image, displaying the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
16. A first terminal, characterized in that the first terminal comprises:
a determining module, configured to determine, during video communication between the first terminal and a second terminal, a target dynamic effect to be used;
a sending module, configured to send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and
a first display module, configured to display the target dynamic effect.
17. The first terminal according to claim 16, characterized in that the determining module comprises:
a detection submodule, configured to detect, during video communication between the first terminal and the second terminal, action information or expression information of a user of the first terminal based on a first video communication image; and
a first determining submodule, configured to determine, according to a prestored correspondence between action information or expression information and dynamic effects, a target dynamic effect corresponding to the detected action information or expression information.
18. The first terminal according to claim 16, characterized in that the determining module is configured to:
determine, during video communication between the first terminal and the second terminal, a target dynamic effect selected from provided dynamic effects.
19. The first terminal according to claim 16, characterized in that the determining module comprises:
an acquiring submodule, configured to acquire, during video communication between the first terminal and the second terminal, voice information input by the user;
a recognition submodule, configured to recognize text information corresponding to the voice information; and
a second determining submodule, configured to determine, according to a prestored correspondence between keywords and dynamic effects, a target dynamic effect corresponding to a first keyword contained in the text information.
20. The first terminal according to claim 16, characterized in that the first display module is configured to:
display the target dynamic effect in full screen.
21. The first terminal according to claim 16, characterized in that the first display module is configured to:
display the target dynamic effect in a first video communication image.
22. The first terminal according to claim 21, characterized in that the first display module comprises:
a third determining submodule, configured to determine, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a first display submodule, configured to display, in the first video communication image, the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
23. The first terminal according to claim 21, characterized in that the first terminal further comprises:
a second display module, configured to display the target dynamic effect in a second video communication image.
24. The first terminal according to claim 23, characterized in that the second display module comprises:
a fourth determining submodule, configured to determine, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a second display submodule, configured to display, in the second video communication image, the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
25. A second terminal, characterized in that the second terminal comprises:
a receiving module, configured to receive, during video communication between the second terminal and a first terminal, a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
a first display module, configured to acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
26. The second terminal according to claim 25, characterized in that the first display module is configured to:
acquire the target dynamic effect according to the identifier of the target dynamic effect, and display the target dynamic effect in full screen.
27. The second terminal according to claim 25, characterized in that the first display module is configured to:
acquire the target dynamic effect according to the identifier of the target dynamic effect, and display the target dynamic effect in a second video communication image.
28. The second terminal according to claim 27, characterized in that the first display module comprises:
a first determining submodule, configured to determine, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a first display submodule, configured to display, in the second video communication image, the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
29. The second terminal according to claim 27, characterized in that the second terminal further comprises:
a second display module, configured to display the target dynamic effect in a first video communication image.
30. The second terminal according to claim 29, characterized in that the second display module comprises:
a second determining submodule, configured to determine, according to a prestored correspondence between dynamic effects and body parts, a target body part corresponding to the target dynamic effect; and
a second display submodule, configured to display, in the first video communication image, the target dynamic effect with a display region of the target body part as a display region of the target dynamic effect.
31. A first terminal, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
during video communication between the first terminal and a second terminal, determine a target dynamic effect to be used;
send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect; and
display the target dynamic effect.
32. A second terminal, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
during video communication between the second terminal and a first terminal, receive a dynamic effect notification sent by the first terminal, the dynamic effect notification carrying an identifier of a target dynamic effect; and
acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
33. A system for video communication, characterized in that the system comprises a first terminal and a second terminal, wherein:
the first terminal is configured to, during video communication between the first terminal and the second terminal, determine a target dynamic effect to be used, send a dynamic effect notification to the second terminal, the dynamic effect notification carrying an identifier of the target dynamic effect, and display the target dynamic effect; and
the second terminal is configured to, during video communication between the second terminal and the first terminal, receive the dynamic effect notification sent by the first terminal, and acquire and display the target dynamic effect according to the identifier of the target dynamic effect.
CN201610074923.0A 2016-02-02 2016-02-02 Method, apparatus and system for video communication Pending CN105657325A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610074923.0A CN105657325A (en) 2016-02-02 2016-02-02 Method, apparatus and system for video communication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610074923.0A CN105657325A (en) 2016-02-02 2016-02-02 Method, apparatus and system for video communication

Publications (1)

Publication Number Publication Date
CN105657325A true CN105657325A (en) 2016-06-08

Family

ID=56488273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610074923.0A Pending CN105657325A (en) 2016-02-02 2016-02-02 Method, apparatus and system for video communication

Country Status (1)

Country Link
CN (1) CN105657325A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100079576A1 (en) * 2005-06-02 2010-04-01 Lau Chan Yuen Display system and method
CN101287093A (en) * 2008-05-30 2008-10-15 北京中星微电子有限公司 Method for adding special effect in video communication and video customer terminal
CN103947190A (en) * 2011-12-01 2014-07-23 坦戈迈公司 Video messaging
CN103297742A (en) * 2012-02-27 2013-09-11 联想(北京)有限公司 Data processing method, microprocessor, communication terminal and server
CN105262676A (en) * 2015-10-28 2016-01-20 广东欧珀移动通信有限公司 Method and apparatus for transmitting message in instant messaging

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878654A (en) * 2017-03-10 2017-06-20 北京小米移动软件有限公司 The method and device of video communication
CN106878654B (en) * 2017-03-10 2022-04-01 北京小米移动软件有限公司 Video communication method and device
CN108551562A (en) * 2018-04-16 2018-09-18 维沃移动通信有限公司 A kind of method and mobile terminal of video communication
CN108366221A (en) * 2018-05-16 2018-08-03 维沃移动通信有限公司 A kind of video call method and terminal
CN110536075A (en) * 2019-09-20 2019-12-03 上海掌门科技有限公司 Video generation method and device
CN110536075B (en) * 2019-09-20 2023-02-21 上海掌门科技有限公司 Video generation method and device

Similar Documents

Publication Publication Date Title
CN104159218B (en) Internetwork connection establishing method and device
CN104010222A (en) Method, device and system for displaying comment information
CN106231378A (en) The display packing of direct broadcasting room, Apparatus and system
CN106331761A (en) Live broadcast list display method and apparatuses
CN105491048A (en) Account management method and apparatus
EP3147802B1 (en) Method and apparatus for processing information
CN105162693A (en) Message display method and device
CN105468767A (en) Method and device for acquiring calling card information
CN105722064A (en) Method and device for acquiring terminal information
CN104717554A (en) Smart television control method and device and electronic equipment
CN104486451A (en) Application program recommendation method and device
CN103973900B (en) The method of transmission information and device
CN105578113A (en) Video communication method, device and system
CN105515831A (en) Network state information display method and device
CN105162889A (en) Device finding method and apparatus
CN104185304A (en) Method and device for accessing WI-FI network
CN105530165A (en) Instant chat method and device
CN104767857A (en) Telephone calling method and device based on cloud name cards
CN105786507A (en) Display interface switching method and device
CN105739834A (en) Menu displaying method and device
CN105872573A (en) Video playing method and apparatus
CN106331328B (en) Information prompting method and device
CN106101773A (en) Content is with shielding method, device and display device
CN105677023A (en) Information presenting method and device
CN105657325A (en) Method, apparatus and system for video communication

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160608