CN106817349A - Method and device for producing animation effects on a communication interface during communication - Google Patents
Publication number: CN106817349A
Application number: CN201510859600.8A
Authority: CN (China)
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H04M1/27 (H04 Electric communication technique; H04M Telephonic communication; H04M1/00 Substation equipment, e.g. for use by subscribers; H04M1/26 Devices for calling a subscriber; H04M1/27 Devices whereby a plurality of signals may be stored simultaneously)
- H04L65/40 (H04L Transmission of digital information, e.g. telegraphic communication; H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication; H04L65/40 Support for services or applications)
- H04M3/42136 (H04M3/00 Automatic or semi-automatic exchanges; H04M3/42 Systems providing special services or facilities to subscribers; H04M3/42136 Administration or customisation of services)
Abstract
The embodiments of the present application disclose a method and device for producing animation effects on a communication interface during communication. The method includes: receiving content that a user generates, based on the communication, during the communication; recognizing the content and generating a corresponding state change instruction from it; and modifying, according to the state change instruction, the state of user information on the communication interface so that the interface produces an animation effect. With the embodiments of the present application, a user's communication interface can produce animation effects that correspond to the content of the communication between the users.
Description
Technical field
The present application relates to the field of communications, and in particular to a method and device for producing animation effects on a communication interface during communication.
Background art
With the rapid development of communication technology and communication equipment, the ways in which people transmit information have become increasingly diverse. Initially, people could only make simple voice calls through communication terminals; later, new modes such as text messaging and video calling emerged. In some instant messaging applications, people can even freely choose, according to their own needs, the communication mode best suited to the situation at hand. These communication modes have made information transfer more convenient and have greatly enriched users' communication lives.
As communication modes have diversified, the user information presented on communication interfaces has also become more vivid and interesting. For example, in a traditional voice call, the communication interface always shows only the telephone number of one or both parties (or the user name that a user has set for the number). During communication through instant messaging software, however, the interface is no longer limited to a static number and can instead display a user image, which may come from a real photo stored on the communication device or from a virtual image created by the user. In the case of video communication, the interface can also display the live picture captured by the device's camera.
Presenting user images on the communication interface in this way improves the visual quality of communication and enhances the user experience. However, in these approaches the user image, or the information associated with it, remains static during the communication. For example, from the start to the end of a voice or text conversation, the user image presented on the interface does not change; likewise, during a video call the interface shows only the real picture captured by the device.
Summary of the invention
In view of the above problems, the embodiments of the present application provide a method and device for producing animation effects on a communication interface during communication, so that the interface can produce animation effects corresponding to the content of the communication between the users.
The embodiments of the present application adopt the following technical solution. A method for producing animation effects on a communication interface during communication includes: receiving content that a user generates, based on the communication, during the communication; recognizing the content and generating a corresponding state change instruction from it; and modifying, according to the state change instruction, the state of user information on the communication interface so that the interface produces an animation effect.
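The claimed three-step flow can be sketched in Python, assuming content arrives as plain strings and a state change instruction is a small dictionary. All keyword rules and field names here are invented stand-ins for real recognition, not part of the application:

```python
# Illustrative stand-in for the claimed method; the keyword rules and
# dictionary fields are invented, not taken from the application.

def recognize(content: str) -> dict:
    """Recognize communication content and generate a state change instruction."""
    if "happy" in content:
        return {"target": "avatar", "expression": "laugh", "action": "dance"}
    if "cold" in content:
        return {"target": "scene", "weather": "snow"}
    return {}

def apply_instruction(interface_state: dict, instruction: dict) -> dict:
    """Modify the state of user information shown on the communication interface."""
    new_state = dict(interface_state)
    new_state.update({k: v for k, v in instruction.items() if k != "target"})
    return new_state

def on_content_received(interface_state: dict, content: str) -> dict:
    # Step 1 (receive) is the call itself; steps 2 and 3 follow.
    instruction = recognize(content)
    return apply_instruction(interface_state, instruction)
```

Under these invented rules, receiving "I am so happy" would change the avatar's expression to a laugh and its action to a dance, while unrecognized content leaves the interface state unchanged.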
Preferably, the user information includes animated elements on the user's communication interface, and modifying the state of the user information on the communication interface according to the state change instruction includes: modifying the animated elements on the communication interface according to the state change instruction so that the interface produces an animation effect.
Preferably, the user information includes a user virtual image and/or a virtual scene presented on the user's communication interface, and modifying the state of the user information on the communication interface according to the state change instruction includes: modifying the state of the user virtual image and/or the virtual scene on the communication interface according to the state change instruction so that the interface produces an animation effect.
Preferably, the user virtual image and/or virtual scene on the communication interface includes the virtual images and/or virtual scenes of both communicating parties, the communication content is the communication content of a first party, and modifying the state of the user virtual image and/or virtual scene on the communication interface according to the state change instruction includes:
modifying the state of the first party's virtual image and/or virtual scene on the communication interface according to the state change instruction, and modifying the state of the second party's virtual image and/or virtual scene on the communication interface according to the state change instruction, where the state of the second party's virtual image and/or virtual scene responds to the state of the first party's virtual image and/or virtual scene; or
modifying the state of the second party's virtual image and/or virtual scene on the communication interface according to the state change instruction, and modifying the state of the first party's virtual image and/or virtual scene on the communication interface according to the state change instruction, where the state of the first party's virtual image and/or virtual scene responds to the state of the second party's virtual image and/or virtual scene.
Preferably, modifying the state of the user virtual image on the communication interface according to the state change instruction specifically includes: modifying the state of the mouth shape, action, outline, dressing, build, sound and/or expression of the user virtual image according to the state change instruction.
Preferably, the user virtual image and/or virtual scene is: a virtual image and/or virtual scene selected from multiple virtual images and virtual scenes provided by the terminal device or a remote server; and/or a virtual image and/or virtual scene of the user determined, from the multiple virtual images and/or virtual scenes provided by the terminal device or a remote server, according to information about the external environment; and/or a virtual image and/or virtual scene of the user determined, from the multiple virtual images and/or virtual scenes provided by the terminal device or a remote server, according to information stored in advance on the terminal device or the remote server; and/or
a virtual image and/or virtual scene created from a real image of the user and/or the real scene where the user is located.
Preferably, the information stored on the terminal device or the remote server includes at least one of the following: personal information; mood information; physical condition information; information related to the user virtual image and/or virtual scene; and information about another user with whom the user has established a communication relationship.
Preferably, the content generated based on the communication includes: voice, text, and/or video content captured by a camera on the terminal device that the user produces during the communication; or operation behavior information of the user selecting an animation in an animation selection interface presented during the communication; or operation behavior information of the user touching the communication interface during the communication.
Preferably, the method further includes: after modifying the state of the user information on the communication interface according to the state change instruction so that the interface produces an animation effect, storing the animation effect produced by the communication interface on the terminal device.
A device for producing animation effects on a communication interface during communication, located in the terminal device of either of two parties that have established a communication relationship, includes a receiving unit, a recognition unit, and a modification unit, where:
the receiving unit is configured to receive content that a user generates, based on the communication, during the communication;
the recognition unit is configured to recognize the content and generate a corresponding state change instruction from it;
the modification unit is configured to modify, according to the state change instruction, the state of user information on the communication interface so that the interface produces an animation effect.
Preferably, the recognition unit includes a recognition subunit and a generation subunit, where:
the recognition subunit is configured to recognize voice, text, and/or video content captured by a camera on the terminal device that the user produces during the communication; or to recognize operation behavior information of the user selecting an animation in an animation selection interface presented during the communication; or to recognize operation behavior information of the user touching the communication interface during the communication;
the generation subunit is configured to generate a corresponding state change instruction according to the voice, text, and/or camera-captured video content that the user produces during the communication; or according to the operation behavior information of the user selecting an animation in the animation selection interface presented during the communication; or according to the operation behavior information of the user touching the communication interface during the communication.
Preferably, the user information includes animated elements on the user's communication interface, and the modification unit is configured to: modify the animated elements on the communication interface according to the state change instruction so that the interface produces an animation effect.
Preferably, the user information includes a user virtual image and/or a virtual scene presented on the user's communication interface, and the modification unit is configured to: modify the state of the user virtual image and/or the virtual scene on the communication interface according to the state change instruction so that the interface produces an animation effect.
Preferably, the device further includes a storage unit configured to: store the animation effect produced by the communication interface on the terminal device.
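Assuming each unit can be modeled as a plain class, the device described in the claims above (a receiving unit, a recognition unit with its two subunits, a modification unit, and the optional storage unit) might be sketched as follows. All class and method names, and the keyword rule, are invented for illustration:

```python
class RecognitionUnit:
    def recognize(self, content: str) -> str:
        # Recognition subunit: a stand-in for speech/text/video recognition.
        return content.lower()

    def generate(self, recognized: str) -> dict:
        # Generation subunit: map recognized content to a state change instruction.
        return {"action": "dance"} if "dance" in recognized else {}

class AnimationDevice:
    def __init__(self):
        self.recognition_unit = RecognitionUnit()
        self.interface_state = {"action": "idle"}
        self.stored_effects = []  # optional storage unit

    def receive(self, content: str) -> None:
        """Receiving unit: accept content produced during the communication."""
        recognized = self.recognition_unit.recognize(content)
        instruction = self.recognition_unit.generate(recognized)
        self.modify(instruction)

    def modify(self, instruction: dict) -> None:
        """Modification unit: apply the instruction and store the resulting effect."""
        if instruction:
            self.interface_state.update(instruction)
            self.stored_effects.append(dict(self.interface_state))
```

With this sketch, `AnimationDevice().receive("Let's DANCE!")` would set the interface action to "dance" and record the produced effect, while content that yields no instruction leaves the state untouched.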
During communication, the embodiments of the present application generate a corresponding state change instruction according to the users' communication content, and then make the user's communication interface produce a corresponding animation effect according to that instruction. As a result, the communication interface during a voice or text conversation is no longer static, and during a video call the interface is not limited to the real picture captured by the camera; instead, the interface can produce animation effects corresponding to the communication content, enriching the ways in which users communicate.
Brief description of the drawings
The accompanying drawings described here are provided for further understanding of the present application and constitute a part of it. The schematic embodiments of the present application and their descriptions are used to explain the application and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a schematic flowchart of a method, provided in Embodiment 1 of the present application, for producing animation effects on a communication interface during communication;
Fig. 2 is a schematic diagram of a device, provided in Embodiment 2 of the present application, for producing animation effects on a communication interface during communication;
Fig. 3-1 is a schematic flowchart of a method, provided in Embodiment 3 of the present application under a concrete scenario, for producing animation effects on a user's call interface during communication;
Fig. 3-2 is a screenshot, provided in Embodiment 3 of the present application, of the virtual images of two parties that have established a communication relationship displayed on the same call interface;
Fig. 3-3 is a screenshot, provided in Embodiment 3 of the present application, of the call interface producing a corresponding animation effect according to the conversation content;
Fig. 4-1 is a schematic flowchart of a method, provided in Embodiment 4 of the present application under a concrete scenario, for producing animation effects on a user's call interface during communication;
Fig. 4-2 is a screenshot, provided in Embodiment 4 of the present application, of the virtual images of two parties that have established a communication relationship displayed on the same call interface;
Fig. 4-3 is a screenshot, provided in Embodiment 4 of the present application, of the call interface producing a corresponding animation effect according to an animation the user selected in the animation selection interface;
Fig. 5-1 is a screenshot, provided in Embodiment 5 of the present application, of the real images of two parties that have established a communication relationship displayed on the same video call interface;
Fig. 5-2 is a screenshot, provided in Embodiment 5 of the present application, of the video call interface producing a corresponding animation effect according to the communication content;
Fig. 5-3 is a screenshot, provided in Embodiment 5 of the present application, of a user selecting an animation on the video call interface.
Specific embodiment
To make the purpose, technical solution, and advantages of the present application clearer, the technical solution of the present application is described below clearly and completely with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the scope of protection of the present application.
Embodiment 1
As mentioned above, when a user conducts voice or text communication through an instant messaging application, the communication interface can display a user image, which may come from a real photo stored on the communication device or from a virtual image created by the user. That user image is static, however, and does not produce animations corresponding to the user's communication content. In the case of video communication, although the displayed picture is usually dynamic, it is only the real picture captured by the device's camera, and the interface likewise does not show animations corresponding to the communication content.
To address the above problem, the embodiments of the present application propose a method for producing animation effects on a communication interface during communication. With this method, the communication interface can produce animation effects corresponding to the users' communication content during the communication. Fig. 1 is a schematic flowchart of the method, whose specific steps are as follows:
Step 11: receive content that a user generates, based on the communication, during the communication.
In this step, a communication relationship is first established between the users. The relationship may be established by dialing a number to make a telephone call, or by conducting voice, text, or video communication through an instant messaging application; no particular limitation is imposed here. During the communication, the content that the user generates based on the communication may be communication content such as voice, text, or video produced during the communication, or it may be operation behavior information produced when the user performs related operations during the communication, for example, operation behavior information produced when the user selects an animation in the animation selection interface, or operation behavior information produced when the user touches an animation on the communication interface.
Step 12: recognize the content generated based on the communication, and generate a corresponding state change instruction according to it.
In this step, a state change instruction is an instruction that causes the communication interface to produce a corresponding animation effect. The terminal device recognizes the communication content that the user produces during the communication and generates the corresponding state change instruction. The ways of generating the instruction may include, but are not limited to, the following:
If the user communicates by voice call, a speech recognition unit on the device recognizes the voice content that the user produces during the call, parses its semantics, and then generates the corresponding state change instruction according to those semantics; the voice content the user produces during the communication may also be converted into visible text and displayed on the communication interface. If the user communicates by text, after the user inputs a text message, a text recognition unit on the terminal device recognizes the input text, parses its semantics, and then generates the corresponding state change instruction according to the semantics. If the user communicates by video call, a video recognition unit on the device generates the corresponding state change instruction according to the real picture captured by the device, or a speech recognition unit on the device generates the corresponding state change instruction according to the voice content the user produces during the video call.
The above is only an exemplary explanation of generating state change instructions from communication content. In practice, besides generating instructions from the voice, text, or video the user produces during the communication, the user may also select an animation in the animation selection interface to produce a corresponding state change instruction, or produce one by touching an animation on the communication interface. For example, if the user selects "dance" in the animation selection interface on the communication interface, a corresponding state change instruction is generated according to that selection, that is, an instruction that makes the corresponding animation on the interface perform the "dance" action. Alternatively, the user may generate a state change instruction by touching an animation shown on the communication interface. For example, the interface shows the user's virtual image and the user wants that image to perform a "run" action; the user can touch the legs of the virtual image to make it perform the "run" action on the interface. Here, the user's touch on the legs of the virtual image generates the state change instruction, namely the instruction that makes the user's virtual image perform the "run" action.
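The two non-media sources of state change instructions described above, selecting an animation and touching the virtual image, might be sketched as follows. The touch regions and the actions they trigger are invented for illustration:

```python
# Illustrative touch mapping: which region of the avatar triggers which action.
TOUCH_REGION_ACTIONS = {"legs": "run", "arms": "wave"}

def instruction_from_selection(choice: str) -> dict:
    """The user picked an animation (e.g. "dance") in the animation selection interface."""
    return {"action": choice}

def instruction_from_touch(region: str) -> dict:
    """The user touched a region of the virtual image on the communication interface."""
    action = TOUCH_REGION_ACTIONS.get(region)
    return {"action": action} if action else {}
```

Picking "dance" in the selection interface thus yields an instruction for the "dance" action, and touching the legs yields one for the "run" action, matching the two examples above.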
Step 13: modify the state of the user information on the communication interface according to the state change instruction so that the interface produces an animation effect.
In this step, the user information may include, but is not limited to, animated elements on the user's communication interface, or the user's virtual image and/or virtual scene. The animated elements here are independent of the user's virtual image and/or virtual scene, and the animated elements, the user's virtual image, and the virtual scene may be 3D or 2D images. There are various situations for modifying the user information on the communication interface according to the state change instruction; three are introduced below.
First situation:
After the communication relationship is established, animated elements or the users' virtual images and/or virtual scene are loaded on the communication interface according to the communication content between the users. For example, when users are in a voice call, no user virtual image and/or virtual scene is displayed on the communication interface at first even though the call relationship has been established. When a user says "it's snowing, it's so cold" during the call, the system starts loading the user's virtual image and virtual scene on the call interface according to this communication content. Here the virtual image may be a figure wearing a down jacket and a hat, and the corresponding virtual scene may be a snowy scene; alternatively, the interface may show only the snowy scene or only the user's virtual image.
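The first situation, loading an avatar and scene only once the conversation content calls for them, can be sketched as follows. The trigger keywords and asset names are invented for illustration:

```python
# Illustrative rules: an utterance keyword selects the avatar/scene to load.
SCENE_RULES = {
    "snow": {"avatar": "down_jacket_and_hat", "scene": "snowy_field"},
    "rain": {"avatar": "raincoat", "scene": "rainy_street"},
}

def load_for_utterance(interface: dict, utterance: str) -> dict:
    """Load avatar and scene on the call interface based on what was said."""
    for keyword, assets in SCENE_RULES.items():
        if keyword in utterance.lower():
            return {**interface, **assets}
    return interface  # nothing matched: the interface stays as it was
```

Starting from an empty interface (call established, nothing shown yet), the utterance "It's snowing, it's so cold!" would load the down-jacket avatar and the snowy scene under these invented rules.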
Second situation:
Before the communication relationship is established, the animated elements or the user's virtual image and/or virtual scene to be shown on the call interface are bound to the user's telephone number or instant messaging account. Before or after the communication relationship is established, the terminal device looks up, via the user's telephone number or instant messaging account, the animated elements or user virtual image and/or virtual scene bound to that number or account, and displays them on the communication interface. Then, according to the state change instructions produced during the communication, the states of the animated elements or the user's virtual image and/or virtual scene on the interface are changed.
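The binding described in this second situation can be modeled as a simple lookup keyed by telephone number or instant messaging account. The identifiers and assets below are invented for illustration:

```python
# Illustrative bindings from a phone number or IM account to avatar/scene assets.
BINDINGS = {
    "13800000000": {"avatar": "knight", "scene": "castle"},
    "im_account_42": {"avatar": "panda", "scene": "bamboo_forest"},
}

def load_bound_assets(identifier: str) -> dict:
    """Find the avatar/scene bound to the given phone number or IM account."""
    return BINDINGS.get(identifier, {"avatar": "default", "scene": "plain"})
```

At call setup the device would call `load_bound_assets` with each party's identifier and display the returned assets, which the state change instructions then animate.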
The virtual image and/or virtual scene above may be provided by the terminal device or by a remote server. Several exemplary ways of obtaining them are listed here:
One way: the user selects a virtual image and/or virtual scene from multiple virtual images and virtual scenes provided by the terminal device or a remote server. For example, when the user logs in to the instant messaging application, the terminal device or remote server offers many virtual images and virtual scenes, from which the user can select one or more and bind them to his or her login account. The next time the user logs in to the application, the terminal device automatically loads the user's virtual image and virtual scene. The virtual images and virtual scenes offered may have been downloaded to and stored on the terminal device or remote server in advance, or the terminal device or remote server may start downloading the user's chosen virtual image and virtual scene at the moment the user selects them.
Another way: the user's virtual image and/or virtual scene is determined, from multiple virtual images and/or virtual scenes provided by the terminal device or a remote server, according to information about the external environment. There are many kinds of external environment information; one example is the weather. The weather may be obtained by the device's own system, from other weather forecast software, or from the internet or GPS. After obtaining the weather, the terminal device or remote server automatically searches, according to the current weather, for a virtual image and/or virtual scene on the terminal device or remote server that matches it. For example, if it is raining when the users establish the communication relationship, the terminal device or remote server automatically loads the users' virtual images and virtual scene on the communication interface before or after the relationship is established according to the current weather: the user's virtual image may wear a thick coat and hold an umbrella, and the virtual scene where the image stands may be a rainy scene.
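The weather-driven choice above might be sketched as a table lookup. How the weather string is obtained (the device's own system, a forecast app, the internet, or GPS) is out of scope here, and all asset names are assumptions:

```python
# Illustrative weather-to-assets table; a default covers unknown conditions.
WEATHER_ASSETS = {
    "rain": {"avatar": "thick_coat_with_umbrella", "scene": "rainy"},
    "snow": {"avatar": "down_jacket", "scene": "snowy"},
    "sunny": {"avatar": "t_shirt", "scene": "park"},
}

def assets_for_weather(weather: str) -> dict:
    """Pick the avatar/scene matching the current weather; fall back to sunny."""
    return WEATHER_ASSETS.get(weather, WEATHER_ASSETS["sunny"])
```

With weather "rain" this yields the coat-and-umbrella avatar in a rainy scene, mirroring the example in the paragraph above.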
Yet another way: the user's virtual image and/or virtual scene is determined, from multiple virtual images and/or virtual scenes provided by the terminal device or a remote server, according to information stored in advance on the terminal device or remote server. The stored information may be the user's personal information; for example, if the gender stored in the personal information on the terminal is female, the system can automatically load a female virtual image on the communication interface. The stored information may also be the user's mood or physical condition; for example, if the mood stored on the terminal device is a good mood, the expression of the virtual image loaded on the communication interface according to that mood information can be a hearty laugh, and the image's body movements might be dancing for joy; if the physical condition stored on the terminal device is a cold, the virtual image loaded on the communication interface according to that condition information can be sneezing continuously, and the virtual scene can be a hospital. The stored information may also be related to the user's virtual image and/or virtual scene; for example, if the information stored on the terminal device about the user's virtual image is "tall", the terminal device will load a tall virtual image for the user. The stored information may also be information about another user with whom the user has established a communication relationship, such as the name under which the other user is stored, remarks, or the category the other user belongs to. For example, if one party has stored the other party on the terminal device under the name "husband", the user images displayed on the communication interface can be a couple's images, for example wearing matching couple outfits, and the virtual scene can be a home scene; or, if one party has placed the other party in the "family" category on the device, the virtual scene displayed on the interface can automatically be a home scene.
Still another way: the virtual image and/or virtual scene is created from a real image of the user and/or the real scene where the user is located. For example, the terminal device or remote server processes the real user image and real scene in a photo that the user has stored on the terminal device or remote server, and generates the user's corresponding virtual image and virtual scene.
There are many ways of modifying the state of the user information on the communication interface according to the state change instruction so that the interface produces an animation effect; three exemplary ways are listed here:
The first way is: the state change instruction causes the mouth shape, action, outline, attire, build, sound and/or expression of the virtual image on the communication interface to change. For example, if a user says "I'm so happy" during a voice call, the expression of that user's virtual image on the interface changes into hearty laughter according to this utterance, and its action changes to dancing for joy. As another example, if the user says "I suddenly feel so cold, don't you?" during the voice call, the attire of the user's virtual image on the interface changes correspondingly: a virtual image originally wearing summer clothes now changes into thick winter clothes, and the virtual image's action may change to shivering. The change in the sound of the user's virtual image here can be the virtual image "speaking" the user's call content aloud, or the user's virtual image producing a sound effect according to the concrete application scene; for example, if the virtual scene changes into a snow scene according to the state change instruction, the user's virtual image automatically utters the voice content "It's so cold, isn't it". In addition, the state change instruction can also cause the interaction between the virtual images of the two parties to the communication to change; for example, when one party says "hug" during the call, that user's virtual image makes a hugging action, while the other user's virtual image can also make a hugging action in response. Furthermore, it is also possible that only one party's virtual image changes according to the state change instruction while the other party's virtual image does not change, and the change in the interaction between user virtual images can be a change in the action, sound, expression, etc. of a user's virtual image.
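The first way above — recognized speech driving changes in a virtual image's expression, action, attire and interaction — can be sketched minimally as follows. This is an illustrative Python sketch, not the patent's implementation; the keyword table and the state fields (`expression`, `action`, `dress`, `scene`) are hypothetical:

```python
# Hypothetical keyword -> state-change mapping; a real system would use
# speech recognition plus semantic analysis rather than substring matching.
KEYWORD_RULES = {
    "happy": {"expression": "laugh", "action": "dance"},
    "cold":  {"dress": "winter_coat", "action": "shiver", "scene": "snow"},
    "hug":   {"action": "hug", "peer_action": "hug_back"},
}

def generate_state_change(recognized_text: str) -> dict:
    """Return a state-change instruction for the first matching keyword."""
    for keyword, changes in KEYWORD_RULES.items():
        if keyword in recognized_text.lower():
            return {"instruction": "state_change", "changes": changes}
    return {"instruction": "none", "changes": {}}

def apply_to_avatar(avatar: dict, instruction: dict) -> dict:
    """Modify only the virtual-image fields named in the instruction."""
    updated = dict(avatar)
    updated.update(instruction["changes"])
    return updated

avatar = {"expression": "neutral", "action": "stand", "dress": "summer"}
instr = generate_state_change("It's so cold, isn't it")
avatar = apply_to_avatar(avatar, instr)
print(avatar["dress"], avatar["action"])  # winter_coat shiver
```

The same instruction dictionary could carry a `peer_action` field, as in the "hug" rule, to drive the responding animation of the other party's virtual image.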
The second way is: the virtual scene on the communication interface changes according to the state change instruction. The change can be a change in an animated element within the virtual scene — for example, the virtual scene on the communication interface produces a music effect according to the state change instruction — or a change in an element that makes up the virtual scene, for example, the sky in the virtual scene suddenly starts drizzling under the state change instruction. In addition, in this scene change mode the state change instruction may change only the virtual scene, or change the user's virtual image and the virtual scene simultaneously or asynchronously.
The third way is: the animated element on the communication interface is modified according to the state change instruction, and this animated element changes independently of the user's virtual image and/or virtual scene on the interface. For example, even when no user virtual image and/or virtual scene appears on the communication interface, the communication interface can still produce a sound effect or produce other animated elements.
In the above three ways of producing animation effects, the same state change instruction can cause a single virtual image and/or virtual scene to change, or cause multiple virtual images and/or virtual scenes to change. Where multiple animation effects are produced, the order in which the animations are shown on the communication interface can follow the order of the state instructions received by the device, or the animations can be displayed simultaneously. For example, when one party says "hug" during the call, the virtual images of both parties shown on the interface make a hugging action: the two virtual images can make the hugging action at the same time, or the virtual image of the party who spoke first can make the hugging action first and the other party's virtual image then makes a hugging action in response.
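The ordering behavior just described — multiple animation effects shown either in the order the state instructions are received or simultaneously — can be sketched as follows. A hypothetical Python scheduler; the instruction strings are illustrative only:

```python
from collections import deque

def schedule(instructions, simultaneous=False):
    """Return display steps: each step is the list of animations to play."""
    if simultaneous:
        return [list(instructions)]      # all animations in one step
    queue = deque(instructions)          # order the device received them
    return [[queue.popleft()] for _ in range(len(queue))]

instrs = ["A:hug", "B:hug_back"]
print(schedule(instrs))                      # [['A:hug'], ['B:hug_back']]
print(schedule(instrs, simultaneous=True))   # [['A:hug', 'B:hug_back']]
```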
The third situation:
This situation combines the above two situations: before communication is established, the user binds the animated element, or the virtual image and/or virtual scene, to be shown on the call interface to the user's telephone number or instant messaging account. Before or after the user establishes communication, the terminal device uses the user's telephone number or instant messaging account to find the animated element, or the user virtual image and/or virtual scene, bound to that number or account, and displays it on the communication interface. During communication, a corresponding state change instruction is generated according to the communication content the user produces, and finally, according to the state change instruction, the animated element or the user's virtual image and/or virtual scene is reloaded on the communication interface, rather than locally varied on the basis of the previous animated element or virtual image and/or virtual scene. For example, one party to the communication says during the call: "I will go jogging". Before saying this, the user's virtual image may be a motionless standing figure; after the user finishes saying it, a new user virtual image is reloaded on the interface, perhaps one wearing a sports outfit, and the surrounding virtual scene may change into a track-and-field stadium.
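The binding and lookup described in this third situation — associating an animated element or virtual image/scene with a telephone number or instant messaging account before the call, then loading it when communication is established — can be sketched as follows. A hypothetical Python sketch; the identifiers and asset names are illustrative, not from the patent:

```python
# Hypothetical binding store: phone number / IM account -> display assets.
bindings = {}

def bind(identifier: str, avatar: str, scene: str) -> None:
    """Bind a virtual image and virtual scene to a number or account."""
    bindings[identifier] = {"avatar": avatar, "scene": scene}

def load_for_call(identifier: str) -> dict:
    """Look up the bound assets; fall back to defaults when unbound."""
    return bindings.get(identifier, {"avatar": "default", "scene": "plain"})

bind("13800000000", avatar="jogger", scene="track_field")
print(load_for_call("13800000000")["scene"])  # track_field
print(load_for_call("unknown")["avatar"])     # default
```

Reloading after a state change instruction would simply call `bind` with new assets and `load_for_call` again, replacing the whole image rather than mutating it locally.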
In the three situations described above, the recognition of communication content can be performed by each terminal that establishes communication: each terminal recognizes the communication content produced by its own user, generates the corresponding state change instruction, and causes the communication interface to produce an animation effect according to that instruction. Alternatively, one terminal can recognize the communication content, generate the corresponding state change instruction, and send it to the other terminal, which executes the instruction to produce the animation effect. Alternatively, a server of the telecommunication network can recognize the communication content, generate the state change instruction, and send it to each terminal, which executes the instruction to produce the animation effect. For example, while user A and user B are on a call, the call interface of user A may display only user A's virtual image A and the corresponding virtual scene a. After user A speaks, user A's terminal device recognizes the call content and generates the corresponding state change instruction, or the server of the telecommunication network recognizes the call content, generates the corresponding state change instruction and sends it to user A's terminal device; finally, user A's terminal device, according to the instruction, makes user A's virtual image A and the corresponding virtual scene a on the communication interface produce an animation effect. The call interface of user A can also display only user B's virtual image B and the corresponding virtual scene b. After user B speaks, user B's terminal device recognizes the call content and generates the corresponding state change instruction, or the server of the telecommunication network recognizes the call content, generates the corresponding state change instruction and sends it to user B's terminal device; user B's terminal device then forwards the instruction to user A's terminal device, and finally user A's terminal device, according to the received instruction, makes user B's virtual image B and the corresponding virtual scene b shown on the communication interface produce an animation effect. The transmission means of the state change instruction here can include Bluetooth transmission, NFC transmission, TCP/IP protocol transmission, UDP transmission, RTP protocol transmission, SRTP protocol transmission, HTTP/HTTPS protocol transmission and other transmission means.
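Among the transmission means listed, a UDP transfer of a state change instruction between two terminals could look like the following. An illustrative Python sketch using a loopback socket pair; the JSON instruction format is an assumption, not something defined by the patent:

```python
import json
import socket

def send_instruction(sock, addr, instruction: dict) -> None:
    """Serialize a state-change instruction and send it over UDP."""
    sock.sendto(json.dumps(instruction).encode("utf-8"), addr)

def receive_instruction(sock) -> dict:
    """Receive one datagram and decode it back into an instruction."""
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))

# Loopback demonstration: the receiver socket stands in for the peer terminal.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

instr = {"instruction": "state_change", "changes": {"action": "hug"}}
send_instruction(sender, receiver.getsockname(), instr)
print(receive_instruction(receiver)["changes"]["action"])  # hug
```

A production system would more likely push the instruction over the existing call signaling channel (RTP/SRTP or HTTPS, as the list above suggests) rather than a raw datagram socket.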
After the communication interface produces the animation effect according to the state change instruction, the animation effect can be stored in the terminal device so that the user can review previous communication records at any time; the communication records are presented to the user in animated form, which adds interest to the communication process.
With the method provided by the embodiment of the present application, during user communication a corresponding state change instruction is generated according to the user's communication content, and the user's communication interface then produces a corresponding animation effect according to that instruction, so that the communication interface during communication is no longer static and can produce corresponding animations according to the user's communication content, enriching the user's communication modes.
Embodiment 2
Based on the method for making a communication interface produce animation effects during communication provided in Embodiment 1, this embodiment correspondingly provides a device for making a communication interface produce animation effects during communication, so that during communication the user's communication interface can produce corresponding animation effects according to the user's communication content. A schematic diagram of the concrete structure of the device is shown in Fig. 2; the device includes:
a receiving unit 21, a recognition unit 22 and a modification unit 23;
the receiving unit 21 can be used to receive content that the user produces based on communication during the communication process;
the recognition unit 22 can be used to recognize the content produced based on communication, and to generate a corresponding state change instruction according to the content produced based on communication;
the modification unit 23 can be used to modify, according to the state change instruction, the state of the user information on the communication interface so that the communication interface produces an animation effect.
The workflow of the above device embodiment is as follows: first the receiving unit 21 receives the content the user produces based on communication; then the recognition unit 22 recognizes this content and generates the corresponding state change instruction according to it; finally the modification unit 23 modifies the state of the user information on the communication interface according to the state change instruction, making the communication interface produce an animation effect.
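The receiving → recognition → modification workflow of the device can be sketched as follows. A hypothetical Python sketch whose class names mirror the three units; the keyword-based recognition merely stands in for real speech/text recognition:

```python
class ReceivingUnit:
    def receive(self, content: str) -> str:
        # Content the user produces based on communication.
        return content

class RecognitionUnit:
    def recognize(self, content: str) -> dict:
        # Generate a state-change instruction from the content
        # (substring matching stands in for speech recognition).
        if "happy" in content.lower():
            return {"expression": "laugh"}
        return {}

class ModificationUnit:
    def modify(self, interface_state: dict, instruction: dict) -> dict:
        # Modify the user-information state so the interface animates.
        interface_state.update(instruction)
        return interface_state

def pipeline(content: str, state: dict) -> dict:
    content = ReceivingUnit().receive(content)
    instruction = RecognitionUnit().recognize(content)
    return ModificationUnit().modify(state, instruction)

print(pipeline("I'm so happy", {"expression": "neutral"}))
# {'expression': 'laugh'}
```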
In the above device embodiment, there are many implementations by which the communication interface produces animation effects during the user's communication. In one embodiment, the recognition unit 22 includes a recognition subelement and a generation subelement, wherein:
the recognition subelement can be used to recognize the voice and text the user produces during communication and/or the video content captured by the terminal device's camera; or to recognize the operation behavior information of the user selecting an animation in an animation selection interface; or to recognize the operation behavior information of the user touching an animation on the communication interface;
the generation subelement can be used to generate the corresponding state change instruction according to the voice and text the user produces during communication and/or the video content captured by the terminal device's camera; or according to the operation behavior information of the user selecting an animation in the animation selection interface; or according to the operation behavior information of the user touching an animation on the communication interface.
In another embodiment, the user information includes an animated element on the user's communication interface, and the modification unit 23 can be used to modify the animated element on the communication interface according to the state change instruction.
In another implementation, the user information includes a user virtual image and/or a virtual scene presented on the user's communication interface, and the modification unit 23 can be used to modify the state of the user virtual image and/or virtual scene on the communication interface according to the state change instruction so that the communication interface produces an animation effect.
In another embodiment, the device further includes a storage unit, and the storage unit is used to store the animation effect produced by the communication interface in the terminal device.
The beneficial effects obtained with the device provided by the embodiment of the present application are the same as or similar to those obtained with the method embodiment provided in Embodiment 1; to avoid repetition, they are not repeated here.
Embodiment 3
In order to illustrate the technical solutions and technical features of the present application more clearly, an example of related display on the call interface when a user makes a voice call with a mobile phone in a specific scene is described below (thereby constituting Embodiment 3). This embodiment enables the user's communication interface during communication to produce corresponding animation effects according to the user's communication content. The specific scene of the example is: while user A and user B are on a voice call, user A says to user B: "You've been so annoying lately, I really want to beat you up", and the mobile phone interface then performs a related display according to this voice content. Fig. 3-1 is a flowchart of the method by which the mobile phone interface in this scene performs a related display according to the voice content of the communication between users; the method includes:
Step 31: querying, by the telephone numbers of the parties establishing the call relationship, the virtual images and associated scenes corresponding to those numbers, and loading the virtual images and associated scenes on the call interface;
The specific scene in this step is: user A and user B make a voice call by dialing on their mobile phones, and both parties talk using earphones or hands-free during the call. Suppose user A dials user B first; after user B answers, both parties' mobile phone interfaces simultaneously load the virtual images and associated scenes corresponding to user A and user B. Here, the virtual images and associated scenes of user A and user B are ones that user A and user B have chosen in advance from the virtual images and virtual scenes provided by the mobile phone system and bound to their own phone numbers. When user B answers user A's call, both parties' mobile phone systems query the virtual images and virtual scenes corresponding to both parties' phone numbers and display them on both parties' mobile phone interfaces; here, as set, the virtual images of both user A and user B are displayed on both parties' mobile phone interfaces (as in Fig. 3-2). In addition, the virtual scenes shown on the mobile phone interfaces of user A and user B can be displayed as identical or different virtual scenes according to the two users' selections.
Step 32: after the system recognizes the user's communication content, the related animation is retrieved according to the call content;
As known from step 31, after user B answers the call, both parties' mobile phone interfaces display the virtual images and associated scenes of user A and user B. Suppose user A says to user B during the call: "You've been so annoying lately, I really want to beat you up". User A's mobile phone system receives this call content, converts the voice content into visible text content through the phone's speech recognition module, and generates the corresponding state change instruction; according to this instruction, the system changes the state of the user's corresponding virtual image and/or virtual scene on the communication interface. Fig. 3-3 is a simulated screenshot of user A's mobile phone interface: under the state change instruction, user A's virtual image makes the action of hitting user B's virtual image, while the mouth shape of user A's virtual image performs the animation of saying "You've been so annoying lately, I really want to beat you up", and a dialog box for user A is displayed near user A's virtual image, showing exactly the words user A said. Correspondingly, after user A's virtual image makes the action of hitting user B's virtual image, the virtual scene displayed on user A's mobile phone interface changes from the scene user A selected before into the scene of a sparring arena.
Likewise, user B's mobile phone system also recognizes user A's voice content. After the virtual image corresponding to user A makes the action of hitting user B's virtual image, user B's mobile phone system generates a corresponding state change instruction that acts on user B's virtual image. As shown in Fig. 3-3, under this instruction user B's virtual image makes the action of being knocked to the ground, and the system automatically shows the words "That hurts, I won't dare again" in the dialog box beside user B's virtual image as a response to user A's virtual image knocking user B's virtual image to the ground. In addition, the virtual scene of user B's mobile phone interface can change from the scene user B selected before into the same sparring arena scene as user A's, or it can retain the virtual scene user B selected before without change, as defined by the user according to personal needs.
With the method provided by the embodiment of the present application, during the user's call a corresponding state change instruction is generated according to the user's call content, and the user's call interface then produces a corresponding animation effect according to that instruction, so that the call interface during the call is no longer static and can produce corresponding animations according to the user's call content, enriching the user's communication modes.
Embodiment 4
Embodiment 4 is another method, provided on the basis of Embodiment 3, for making a communication interface produce animation effects during communication. The scene used in this embodiment is consistent with Embodiment 3: while user A and user B are on a voice call, user A says to user B: "You've been so annoying lately, I really want to beat you up", and the mobile phone interface then performs a related display according to this voice content. Fig. 4-1 is a flowchart of the method by which the mobile phone interface in this scene performs a related display according to the voice content of the communication between users; the method includes:
Step 41: the two users who establish the call relationship select, during the call, a virtual image and virtual scene provided by the system to be displayed on the mobile phone interface;
The specific scene in this step is: user A and user B make a voice call by dialing on their mobile phones, and both parties talk using earphones or hands-free during the call. Suppose user A dials user B first; after user B answers, the call interfaces of user A and user B display virtual images and virtual scenes provided by the system for the users to select. After user A and user B make their choices, both parties' mobile phone interfaces display the virtual images and virtual scenes selected by user A and user B.
Step 42: during the call the user selects an animation provided by the system to be displayed on the call interface;
As known from step 41, after user B answers the call, the call interface loads the virtual images and virtual scenes selected by user A and user B (as in Fig. 4-2). Suppose user A says to user B during the call: "You've been so annoying lately, I really want to beat you up". User A's mobile phone system receives this call content and converts the voice content into visible text content through the phone's speech recognition module. In addition, the user selects, in the animation selection interface provided by the system, the animation to be displayed, and a corresponding state change instruction is generated according to the user's operation of selecting the animation; the system then changes the user's virtual image and virtual scene according to the instruction. For example, the mouth shape of user A's virtual image shows the animation of saying "You've been so annoying lately, I really want to beat you up", and this communication content is displayed as text in a dialog box near user A's virtual image. Meanwhile, user A presses the "animation selection" button on the call interface, and the animation selection interface provides animations for user A to select. Fig. 4-3 shows user A's mobile phone interface: the lower half of the interface is the set of animations provided by the mobile phone interface after user A presses the animation selection button. User A selects the "hit the other party" animation; at this time, a corresponding state change instruction is generated according to the "hit the other party" animation selected by the user, and according to this state change instruction the mobile phone interface displays user A's virtual image completing the action of hitting user B's virtual image (as in Fig. 4-3). In addition, according to the "hit the other party" animation selected by user A, the system automatically loads the virtual scene corresponding to that action, such as the scene of a sparring arena (as in Fig. 4-3).
User B's mobile phone likewise receives user A's call content. After the virtual image corresponding to user A makes the action of hitting user B's virtual image, user B's mobile phone system generates a corresponding state change instruction that acts on user B's virtual image. As shown in Fig. 4-3, under this instruction the virtual image corresponding to user B makes the action of being knocked to the ground, and the system shows the words "That hurts, I won't dare again" in the dialog box beside user B's virtual image as a response. Alternatively, user B can likewise press the animation selection button on the mobile phone interface and select a corresponding animation to interact with user A's virtual image; for example, user B can also select the "hit the other party" animation in the animation selection interface to fight back against user A's virtual image. In addition, the virtual scene of user B's mobile phone interface can change from the virtual scene user B selected before into the same sparring arena scene as user A's, or it can retain the virtual scene user B selected before without change.
The beneficial effects obtained with the embodiment of the present application are the same as or similar to those obtained with Embodiment 3; to avoid repetition, they are not repeated here.
Embodiment 5
Embodiments 3 and 4 are directed to methods of performing a related display on the user's call interface when dialing to make a voice call. To illustrate the technical solutions of the present application more completely, an example of related display on the video call interface when a user makes a video call with a mobile phone in a specific scene is described below (thereby constituting Embodiment 5). The specific scene of the example is: while user A and user B are on a video call, user A says to user B: "The weather is particularly sunny today", and the mobile phone interface then performs a related display according to this voice content. The method specifically includes: while user A and user B are in video communication, suppose user A logs into certain instant messaging software and has a video chat with user B. After user B accepts the video chat, the mobile phone interface displays the real picture captured by the camera. Fig. 5-1 is a simulated screenshot of user B's mobile phone interface during the video communication between user A and user B: the large picture in the figure is assumed to be the video picture of user A received by user B's mobile phone, and the small picture in the small frame in the upper right corner is the video picture of user B. Suppose user A says to user B during the video call: "The weather is particularly sunny today". User B's mobile phone system receives this voice content, converts it into visible text content through the phone's speech recognition module, and generates the corresponding state change instruction; the system retrieves the related animation according to the instruction. Fig. 5-2 shows how user B's mobile phone system retrieves the related animation according to user A's voice content "The weather is particularly sunny today" and displays it on the video call interface: as shown, virtual animations of the sun and clouds appear on user A's video picture in the interface, and a text box is added beside user A's image, displaying the voice content of user A received by user B.
By analogy with the method for making a communication interface produce animation effects during communication provided in Embodiment 4, the method of displaying animations on the video call interface provided by this embodiment can also be: user A says to user B: "The weather is particularly sunny today", then user A selects the animation selection button on the communication interface, an animation selection interface is displayed, and user A can select the "sunny day" animation button on the animation interface (as in Fig. 5-3). The mobile phone system generates the corresponding state change instruction according to this operation, user A's mobile phone system transmits the instruction to user B's mobile phone system, and after user B's mobile phone system receives the state change instruction, the related animation is displayed on user B's mobile phone interface, such as the sunny-day picture shown in Fig. 5-2.
With the method provided by the embodiment of the present application, during the user's video call a corresponding state change instruction is generated according to the user's call content, and a corresponding animation effect is then produced on the user's video call interface according to that instruction, so that the video call interface during the video call no longer shows merely the real picture captured by the camera but can also produce corresponding animations according to the user's communication content, enriching the user's communication modes.
Claims (14)
1. A method for making a communication interface produce animation effects during communication, characterized in that the method includes:
receiving content that a user produces based on communication during the communication process;
recognizing the content produced based on communication, and generating a corresponding state change instruction according to the content produced based on communication;
modifying a state of user information on the communication interface according to the state change instruction so that the communication interface produces an animation effect.
2. The method according to claim 1, characterized in that the user information includes an animated element on the user's communication interface, and modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface produces an animation effect includes:
modifying the animated element on the communication interface according to the state change instruction so that the communication interface produces an animation effect.
3. The method according to claim 1, characterized in that the user information includes a user virtual image and/or a virtual scene presented on the user's communication interface, and modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface produces an animation effect includes:
modifying the state of the user virtual image and/or virtual scene on the communication interface according to the state change instruction so that the communication interface produces an animation effect.
4. The method according to claim 3, characterized in that the user virtual image and/or virtual scene on the communication interface includes the virtual images and/or virtual scenes of both parties to the communication, the communication content is the communication content of a first party of the two parties, and modifying the state of the user virtual image and/or virtual scene on the communication interface according to the state change instruction so that the communication interface produces an animation effect includes:
modifying the state of the virtual image and/or virtual scene of the first party on the communication interface according to the state change instruction, and modifying the state of the virtual image and/or virtual scene of a second party on the communication interface according to the state modification instruction, wherein the state of the virtual image and/or virtual scene of the second party responds to the state of the virtual image and/or virtual scene of the first party; or,
modifying the state of the virtual image and/or virtual scene of the second party on the communication interface according to the state change instruction, and modifying the state of the virtual image and/or virtual scene of the first party on the communication interface according to the state modification instruction, wherein the state of the virtual image and/or virtual scene of the first party responds to the state of the virtual image and/or virtual scene of the second party.
5. The method according to claim 3, characterized in that modifying the state of the user virtual image on the communication interface according to the state change instruction specifically includes:
modifying the state of the mouth shape, action, outline, attire, build, sound and/or expression of the user virtual image according to the state change instruction.
6. The method according to claim 3, characterized in that
the user virtual image and/or virtual scene is a virtual image and/or virtual scene selected from multiple virtual images and virtual scenes provided by a terminal device or remote server; and/or is the user's virtual image and/or virtual scene determined, from multiple virtual images and/or virtual scenes provided by the terminal device or remote server, according to information about the external environment; and/or is the user's virtual image and/or virtual scene determined, from multiple virtual images and/or virtual scenes provided by the terminal device or remote server, according to information stored in advance in the terminal device or remote server; and/or,
the user virtual image and/or virtual scene is a virtual image and/or virtual scene created based on the user's real image and/or the real scene where the user is located.
7. The method according to claim 6, characterized in that the information stored in the terminal device or remote server includes at least one of the following:
personal information; mood information; status information; information related to the user virtual image and/or virtual scene; and information related to another user with whom the user establishes communication.
8. The method according to claim 1, wherein the content produced on the basis of the communication comprises:
voice and/or text produced by the user during the communication and/or video content captured by a camera of the terminal device; or, information about the user's operation behavior of selecting an animation in an animation selection interface presented during the communication; or, information about the user's operation behavior of touching the communication interface during the communication.
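Purely for illustration, the three kinds of communication-produced content enumerated in claim 8 could be tagged so that a downstream recognition step can dispatch on them. The event shape and the `source` field are assumptions of this sketch, not part of the claim.

```python
from enum import Enum, auto

class ContentKind(Enum):
    MEDIA = auto()                # voice, text, or camera-captured video
    ANIMATION_SELECTION = auto()  # picked in an animation selection interface
    TOUCH = auto()                # touch on the communication interface

def classify(event: dict) -> ContentKind:
    """Map a communication event to one of the three content kinds
    of claim 8. 'source' is a hypothetical field name for this sketch."""
    source = event.get("source")
    if source in ("voice", "text", "camera"):
        return ContentKind.MEDIA
    if source == "animation_picker":
        return ContentKind.ANIMATION_SELECTION
    if source == "touch":
        return ContentKind.TOUCH
    raise ValueError(f"unknown content source: {source}")
```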
9. The method according to claim 1, wherein the method further comprises:
after modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface produces the animation effect, storing the animation effect produced by the communication interface in the terminal device.
10. An apparatus for causing a communication interface to produce an animation effect during communication, wherein the apparatus is located in the terminal devices of the two parties that have established a communication connection, and the apparatus comprises a receiving unit, a recognition unit and a modification unit, wherein:
the receiving unit is configured to receive content produced by a user on the basis of the communication during the communication;
the recognition unit is configured to recognize the content produced on the basis of the communication, and to generate a corresponding state change instruction according to the content produced on the basis of the communication; and
the modification unit is configured to modify the state of the user information on the communication interface according to the state change instruction so that the communication interface produces the animation effect.
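The three-unit apparatus of claim 10 can be sketched as a simple pipeline. This is a minimal illustration under stated assumptions, not the patented implementation: the keyword table, the dictionary interface state, and all names below are hypothetical.

```python
class ReceivingUnit:
    """Receives content produced during the communication.
    A real terminal would read voice, text, video or touch events here."""
    def receive(self, raw_content: str) -> str:
        return raw_content

class RecognitionUnit:
    """Recognizes the content and generates a state change instruction."""
    # Hypothetical mapping from recognized content to an instruction.
    KEYWORDS = {
        "haha": {"expression": "laughing"},
        "bye": {"action": "waving"},
    }

    def recognize(self, content: str) -> dict:
        for word, instruction in self.KEYWORDS.items():
            if word in content:
                return instruction
        return {}  # no change recognized

class ModificationUnit:
    """Modifies the interface state per the state change instruction."""
    def modify(self, interface_state: dict, instruction: dict) -> dict:
        interface_state.update(instruction)
        return interface_state

def process(content: str, interface_state: dict) -> dict:
    content = ReceivingUnit().receive(content)
    instruction = RecognitionUnit().recognize(content)
    return ModificationUnit().modify(interface_state, instruction)

state = {"expression": "neutral", "action": "idle"}
state = process("haha that is funny", state)
```

Chaining the units this way mirrors the claim: receive, then recognize and generate the instruction, then modify the interface state so that the animation effect is produced.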
11. The apparatus according to claim 10, wherein the recognition unit comprises a recognition subunit and a generation subunit, wherein:
the recognition subunit is configured to recognize voice and/or text produced by the user during the communication and/or video content captured by a camera of the terminal device; or,
to recognize information about the user's operation behavior of selecting an animation in an animation selection interface presented during the communication; or,
to recognize information about the user's operation behavior of touching the communication interface during the communication; and
the generation subunit is configured to generate a corresponding state change instruction according to the voice and/or text produced by the user during the communication and/or the video content captured by the camera of the terminal device; or,
according to the information about the user's operation behavior of selecting an animation in the animation selection interface presented during the communication; or,
according to the information about the user's operation behavior of touching the communication interface during the communication.
12. The apparatus according to claim 10, wherein the user information comprises animated elements on the user's communication interface, and the modification unit is configured to:
modify the animated elements on the communication interface according to the state change instruction so that the communication interface produces the animation effect.
13. The apparatus according to claim 10, wherein the user information comprises the user's virtual image and/or the virtual scene presented on the user's communication interface, and the modification unit is configured to:
modify the state of the user's virtual image and/or virtual scene on the communication interface according to the state change instruction so that the communication interface produces the animation effect.
14. The apparatus according to claim 10, wherein the apparatus further comprises a storage unit configured to:
store the animation effect produced by the communication interface in the terminal device.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510859600.8A CN106817349B (en) | 2015-11-30 | 2015-11-30 | Method and device for enabling communication interface to generate animation effect in communication process |
PCT/CN2016/076590 WO2017092194A1 (en) | 2015-11-30 | 2016-03-17 | Method and apparatus for enabling communication interface to produce animation effect in communication process |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510859600.8A CN106817349B (en) | 2015-11-30 | 2015-11-30 | Method and device for enabling communication interface to generate animation effect in communication process |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106817349A true CN106817349A (en) | 2017-06-09 |
CN106817349B CN106817349B (en) | 2020-04-14 |
Family
ID=58796227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510859600.8A Active CN106817349B (en) | 2015-11-30 | 2015-11-30 | Method and device for enabling communication interface to generate animation effect in communication process |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106817349B (en) |
WO (1) | WO2017092194A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1581903A (en) * | 2003-08-14 | 2005-02-16 | 日本电气株式会社 | A portable telephone including an animation function and a method of controlling the same |
US7991401B2 (en) * | 2006-08-08 | 2011-08-02 | Samsung Electronics Co., Ltd. | Apparatus, a method, and a system for animating a virtual scene |
CN103905644A (en) * | 2014-03-27 | 2014-07-02 | 郑明� | Generating method and equipment of mobile terminal call interface |
CN104468959A (en) * | 2013-09-25 | 2015-03-25 | 中兴通讯股份有限公司 | Method, device and mobile terminal displaying image in communication process of mobile terminal |
CN104902212A (en) * | 2015-04-30 | 2015-09-09 | 努比亚技术有限公司 | Video communication method and apparatus |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010067993A (en) * | 2001-04-13 | 2001-07-13 | 장민근 | Portable communication system capable of abstraction and inserting background image and method thereof |
US20090311993A1 (en) * | 2008-06-16 | 2009-12-17 | Horodezky Samuel Jacob | Method for indicating an active voice call using animation |
CN105100432B (en) * | 2015-06-10 | 2018-02-06 | 小米科技有限责任公司 | Call interface display methods and device |
2015
- 2015-11-30 CN CN201510859600.8A patent CN106817349B (en), status: Active
2016
- 2016-03-17 WO PCT/CN2016/076590 patent WO2017092194A1 (en), status: Application Filing
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107911643A (en) * | 2017-11-30 | 2018-04-13 | 维沃移动通信有限公司 | Show the method and apparatus of scene special effect in a kind of video communication |
US11983807B2 (en) | 2018-07-10 | 2024-05-14 | Microsoft Technology Licensing, Llc | Automatically generating motions of an avatar |
CN111316203B (en) * | 2018-07-10 | 2022-05-31 | 微软技术许可有限责任公司 | Actions for automatically generating a character |
CN111316203A (en) * | 2018-07-10 | 2020-06-19 | 微软技术许可有限责任公司 | Actions for automatically generating a character |
CN110971747A (en) * | 2018-09-30 | 2020-04-07 | 华为技术有限公司 | Control method for media display and related product |
CN113965542A (en) * | 2018-09-30 | 2022-01-21 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for displaying sound message in application program |
CN113965542B (en) * | 2018-09-30 | 2022-10-04 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for displaying sound message in application program |
CN109684544A (en) * | 2018-12-14 | 2019-04-26 | 维沃移动通信有限公司 | Dressing recommendation method and terminal device |
CN110062116A (en) * | 2019-04-29 | 2019-07-26 | 上海掌门科技有限公司 | Method and apparatus for handling information |
CN111949118A (en) * | 2019-05-17 | 2020-11-17 | 深圳欧博思智能科技有限公司 | Equipment state switching method and device, storage medium and sound box |
CN111949117A (en) * | 2019-05-17 | 2020-11-17 | 深圳欧博思智能科技有限公司 | Equipment state switching method and device, storage medium and sound box |
CN110418010A (en) * | 2019-08-15 | 2019-11-05 | 咪咕文化科技有限公司 | A kind of control method of virtual objects, equipment and computer storage medium |
CN113395597A (en) * | 2020-10-26 | 2021-09-14 | 腾讯科技(深圳)有限公司 | Video communication processing method, device and readable storage medium |
WO2022089224A1 (en) * | 2020-10-26 | 2022-05-05 | 腾讯科技(深圳)有限公司 | Video communication method and apparatus, electronic device, computer readable storage medium, and computer program product |
CN114979789A (en) * | 2021-02-24 | 2022-08-30 | 腾讯科技(深圳)有限公司 | Video display method and device and readable storage medium |
CN113325983A (en) * | 2021-06-30 | 2021-08-31 | 广州酷狗计算机科技有限公司 | Virtual image processing method, device, terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2017092194A1 (en) | 2017-06-08 |
CN106817349B (en) | 2020-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106817349A (en) | A kind of method and device for making communication interface produce animation effect in communication process | |
CN107566728A (en) | A kind of image pickup method, mobile terminal and computer-readable recording medium | |
US20110143728A1 (en) | Method and apparatus for recognizing acquired media for matching against a target expression | |
CN105930073A (en) | Method And Apparatus For Supporting Communication In Electronic Device | |
CN103368816A (en) | Instant communication method based on virtual character and system | |
US20230089566A1 (en) | Video generation method and related apparatus | |
CN106803921A (en) | Instant audio/video communication means and device based on AR technologies | |
CN102857605A (en) | Grouping method and apparatus of contacts | |
CN109788204A (en) | Shoot processing method and terminal device | |
JP2013009073A (en) | Information processing apparatus, information processing method, program, and server | |
CN109408168A (en) | A kind of remote interaction method and terminal device | |
CN107818787B (en) | Voice information processing method, terminal and computer readable storage medium | |
CN108986026A (en) | A kind of picture joining method, terminal and computer readable storage medium | |
CN110166439A (en) | Collaborative share method, terminal, router and server | |
CN109639569A (en) | A kind of social communication method and terminal | |
CN107272887A (en) | A kind of method that client scene interactivity is realized based on augmented reality | |
CN110209332A (en) | A kind of information processing method and terminal device | |
CN107623830A (en) | A kind of video call method and electronic equipment | |
JP2005064939A (en) | Portable telephone terminal having animation function and its control method | |
CN102364965A (en) | Refined display method of mobile phone communication information | |
CN107959755A (en) | A kind of photographic method and mobile terminal | |
CN102263789A (en) | Call patterning assisting system | |
CN110532412A (en) | A kind of document handling method and mobile terminal | |
CN108551562A (en) | A kind of method and mobile terminal of video communication | |
CN107948400A (en) | A kind of information edit method, terminal and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20190326 Address after: 361012 3F-A193, Innovation Building C, Software Park, Xiamen Torch High-tech Zone, Xiamen City, Fujian Province Applicant after: Xiamen Black Mirror Technology Co., Ltd. Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000 Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD. |
|
TA01 | Transfer of patent application right | ||
GR01 | Patent grant | ||
GR01 | Patent grant |