CN111464430A - Dynamic expression display method, dynamic expression creation method and device


Info

Publication number
CN111464430A
Authority
CN
China
Prior art keywords
animation
dynamic
area
dynamic expression
expression
Prior art date
Legal status
Granted
Application number
CN202010273094.5A
Other languages
Chinese (zh)
Other versions
CN111464430B
Inventor
汪倩怡 (Wang Qianyi)
Current Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010273094.5A
Publication of CN111464430A
Application granted
Publication of CN111464430B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a dynamic expression display method, a dynamic expression creation method, and corresponding apparatus, belonging to the field of computer technology. The method comprises the following steps: in response to a selection operation, triggered on a session interface, for selecting a dynamic expression, displaying the dynamic body diagram of the selected dynamic expression in the session interface as a session message; and playing the animation element associated with the dynamic body diagram in the session interface, where the play area of the animation element at least includes a first area outside the dynamic body diagram. This breaks through the fixed display-size limitation of conventional dynamic expressions, expands their display range, and provides a new play mechanism, so that a dynamic expression can express richer content; it also improves the flexibility and fun of dynamic expression display and enhances the display effect.

Description

Dynamic expression display method, dynamic expression creation method and device
Technical Field
The application relates to the field of computer technology, and in particular to a dynamic expression display method, a dynamic expression creation method, and corresponding apparatus.
Background
With the rapid development of the internet, a variety of online social applications have emerged, such as instant messaging applications. When using these social applications, users often send dynamic expressions as conversation messages in order to express themselves more vividly, and conversing through dynamic expressions greatly adds to the fun of communication between users.
In the related art, a dynamic expression has a fixed display size: when displayed as a session message, it is played within an area of that fixed size, so the size of its play area is limited.
Disclosure of Invention
The embodiments of the application provide a dynamic expression display method, a dynamic expression creation method, and corresponding apparatus, so as to expand the display range of dynamic expressions and enhance their display effect.
In one aspect, a method for displaying dynamic expressions is provided, and the method includes:
in response to a selection operation, triggered on a session interface, for selecting a dynamic expression, displaying the dynamic body diagram of the selected dynamic expression in the session interface as a session message; and
playing the animation element associated with the dynamic body diagram in the session interface, where the play area of the animation element at least includes a first area outside the dynamic body diagram.
In a possible implementation manner, the play area of the animation element further includes a second area, which is part or all of the display area of the dynamic body diagram.
In one possible implementation, playing the animation element associated with the dynamic body diagram in the session interface includes:
the animation element gradually crosses from one of the first area and the second area into the other for playing; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area. A small model of these three cases is sketched below.
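To make the cases concrete, here is a minimal TypeScript model of the two play areas; Rect, PlayArea and every other name here are illustrative assumptions, not identifiers from the patent.

```typescript
// Minimal model of the two play areas; all names are assumptions.
interface Rect { x: number; y: number; width: number; height: number; }

type PlayAreaMode =
  | "cross"             // gradually crosses from one area into the other
  | "first-only"        // plays only outside the dynamic body diagram
  | "first-and-second"; // plays in both areas

interface PlayArea {
  first: Rect;   // area outside the body diagram's inherent display area
  second?: Rect; // part or all of the body diagram's own display area
  mode: PlayAreaMode;
}

// Example: a body diagram in a fixed 240x240 bubble, with the first area
// approximated as the whole dialog-box area (values invented).
const example: PlayArea = {
  first: { x: 0, y: 0, width: 1080, height: 1620 },
  second: { x: 400, y: 800, width: 240, height: 240 },
  mode: "cross",
};
```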
In one possible implementation manner, displaying the dynamic body diagram of the selected dynamic expression in the session interface as a session message, and playing the animation element associated with the dynamic body diagram in the session interface, includes:
determining a reference position for animation drawing according to the play mode associated with the dynamic expression, and drawing and displaying, frame by frame starting from the reference position, each animation frame corresponding to the dynamic expression.
In a possible implementation manner, determining a reference position for animation drawing according to the play mode associated with the dynamic expression, and drawing and displaying each animation frame frame by frame from that position, includes:
determining the reference position according to the animation type corresponding to the play mode; and
drawing and displaying, frame by frame starting from the reference position, each animation frame corresponding to the dynamic expression according to the animation attribute information corresponding to the play mode, where the animation attribute information includes at least one of the motion trail, size, shape, color and animation special effect of the animation element, as illustrated below.
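As a purely hypothetical reading of "animation attribute information", the sketch below drives an element's per-frame drawing from a trajectory function, a size function and a color; the circle is a stand-in for whatever shape the element actually has.

```typescript
// Hypothetical per-frame drawing driven by attribute information
// (motion trail, size, color); all names are assumptions.
interface ElementAttributes {
  trail: (t: number) => { x: number; y: number }; // motion trajectory over time
  size: (t: number) => number;                    // element size over time
  color: string;
}

function drawElement(
  ctx: CanvasRenderingContext2D,
  attrs: ElementAttributes,
  t: number // normalized playback progress in [0, 1]
): void {
  const { x, y } = attrs.trail(t);
  ctx.fillStyle = attrs.color;
  ctx.beginPath();
  ctx.arc(x, y, attrs.size(t), 0, 2 * Math.PI); // placeholder shape
  ctx.fill();
}

// Example: an element drifting up and to the right while growing.
const heart: ElementAttributes = {
  trail: (t) => ({ x: 100 + 200 * t, y: 300 - 250 * t }),
  size: (t) => 8 + 16 * t,
  color: "red",
};
```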
In a possible implementation manner, determining the reference position according to the animation type corresponding to the play mode includes:
if the animation type of the dynamic expression is a trigger-type animation, determining the position of the animation element's trigger source as the reference position; or
if the animation type of the dynamic expression is an atmosphere-type animation, determining the center of the session interface, or the center of the dialog-box area in the session interface, as the reference position; or
if the animation type of the dynamic expression is a position-type animation, determining the center of the play area of the dynamic body diagram as the reference position. A sketch of this three-way choice follows.
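A minimal sketch of the choice above, assuming axis-aligned rectangles for the interface areas; the function and parameter names are invented for illustration.

```typescript
// Choosing the drawing reference position by animation type; all names
// and types are illustrative assumptions.
type AnimationType = "trigger" | "atmosphere" | "position";

interface Point { x: number; y: number; }
interface Rect { x: number; y: number; width: number; height: number; }

const center = (r: Rect): Point =>
  ({ x: r.x + r.width / 2, y: r.y + r.height / 2 });

function referencePosition(
  type: AnimationType,
  triggerSource: Point, // e.g. where a "finger heart" gesture appears
  dialogArea: Rect,     // the session interface or its dialog-box area
  bodyMapArea: Rect     // the dynamic body diagram's play area
): Point {
  switch (type) {
    case "trigger":    return triggerSource;       // trigger-type animation
    case "atmosphere": return center(dialogArea);  // atmosphere-type animation
    case "position":   return center(bodyMapArea); // position-type animation
  }
}
```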
In one possible implementation manner, before displaying the dynamic body diagram of the selected dynamic expression as a session message in the session interface, the method further includes:
displaying the dynamic expression in the input-box area of the session interface, and triggering sending of the dynamic expression when a confirmation operation confirming the send is detected.
In a possible implementation manner, the transparency of the background region of the dynamic body diagram is a transparent value, or the color of the background region of the dynamic body diagram is the background color of the session interface.
In a possible implementation manner, the dynamic expression is associated with a first identifier, which indicates that the associated dynamic expression is a cross-region dynamic expression. One possible record shape carrying this identifier is sketched below.
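Putting these pieces together, a stored dynamic expression might carry its body diagram, its animation elements, its play mode and the first identifier as a boolean flag. The record below is an assumed shape for illustration, not the patent's actual storage format.

```typescript
// Assumed record shape for a stored dynamic expression (illustrative only).
interface DynamicExpression {
  bodyDiagramUri: string;         // the dynamic body diagram (e.g. a GIF or clip)
  animationElementUris: string[]; // associated animation element resources
  playMode: {
    animationType: "trigger" | "atmosphere" | "position";
    attributes: Record<string, unknown>; // trail, size, shape, color, effects
  };
  crossRegion: boolean; // the "first identifier": true for cross-region expressions
}
```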
In one aspect, a method for creating a dynamic expression is provided, where the method includes:
in response to an expression creation operation, displaying a video recording interface; and
in response to a video recording operation triggered on the video recording interface, obtaining recorded video data, and storing the video data in association with an animation element as a dynamic expression, where the video data serves as the dynamic body diagram of the dynamic expression, and the play area of the animation element at least includes a first area outside the dynamic body diagram.
In one possible implementation manner, the video recording interface includes a video recording area, and storing the video data in association with the animation element as a dynamic expression includes:
determining a reference position for animation drawing according to the play mode associated with the animation element, and compositing the sequence of video frames of the video data with the animation element, using the reference position as the coordinate origin, to obtain the sequence of animation frames corresponding to the dynamic expression; during compositing, the display area of the animation element at least includes the area outside the video recording area, as sketched below.
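A hedged sketch of this synthesis step: each recorded video frame is composited with the animation element on an output canvas larger than the video recording area, with the reference position used as the element's drawing origin. The frame representation, canvas usage and parameter names are assumptions.

```typescript
// Compositing recorded video frames with an animation element; the element
// may land partly or wholly outside the fixed recording area. Illustrative.
function compositeFrames(
  videoFrames: ImageBitmap[],
  elementFrames: ImageBitmap[],            // one element frame per video frame
  recordingArea: { x: number; y: number }, // where the video sits in the output
  origin: { x: number; y: number },        // reference position as element origin
  outWidth: number,
  outHeight: number
): OffscreenCanvas[] {
  return videoFrames.map((vf, i) => {
    const canvas = new OffscreenCanvas(outWidth, outHeight);
    const ctx = canvas.getContext("2d")!;
    // The body video stays inside its fixed recording area...
    ctx.drawImage(vf, recordingArea.x, recordingArea.y);
    // ...while the element is drawn relative to the reference origin.
    ctx.drawImage(elementFrames[i], origin.x, origin.y);
    return canvas;
  });
}
```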
In one possible implementation, the display area of the animation element further includes part or all of the video recording area.
In one possible implementation, determining the reference position for animation drawing according to the play mode associated with the animation element includes:
if the animation type of the animation element is a trigger-type animation, determining the position of the trigger source that triggers the animation element in the video data as the reference position;
if the animation type of the animation element is an atmosphere-type animation, determining the center of the video recording interface as the reference position; and
if the animation type of the animation element is a position-type animation, determining the center of the video recording area as the reference position.
In one possible implementation, the method further includes:
in response to a follow-up shooting operation on a target dynamic expression displayed in the dialog-box area of a session interface, extracting the animation element from the target dynamic expression, where the target dynamic expression is associated with a first identifier indicating that it is a cross-region dynamic expression; or
in response to an operation of selecting an animation material template or an animation icon, extracting the animation element from the selected animation material template, or taking the selected animation icon as the animation element, where both the selected template and the selected icon are associated with a second identifier indicating that the corresponding animation element can be displayed outside the video recording area when synthesizing the dynamic expression; an assumed record shape is sketched below.
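An assumed record shape for the two element sources, mirroring the first-identifier flag above; all names are hypothetical.

```typescript
// Assumed shapes for the two sources of animation elements (illustrative).
interface AnimationMaterialTemplate {
  elements: string[];                  // animation element resources to extract
  allowsOutsideRecordingArea: boolean; // the "second identifier"
}

interface AnimationIcon {
  uri: string;
  allowsOutsideRecordingArea: boolean; // the "second identifier"
}

function pickElements(
  source: AnimationMaterialTemplate | AnimationIcon
): string[] {
  // A template contributes its extracted elements; an icon is itself the element.
  return "elements" in source ? source.elements : [source.uri];
}
```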
In a possible implementation manner, compositing the sequence of video frames of the video data with the animation element to obtain the sequence of animation frames corresponding to the dynamic expression includes:
determining the background area of each video frame in the video data;
adjusting the transparency of the background area of each video frame to a transparent value, or adjusting its color to a predetermined color; and
compositing each adjusted video frame with the animation element to obtain the sequence of animation frames corresponding to the dynamic expression. A sketch of the background-adjustment step follows.
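A minimal sketch of the background-adjustment step, assuming a simple per-pixel predicate stands in for whatever background segmentation is actually used; making background pixels transparent is one of the two options named above (the other is recoloring them).

```typescript
// Make pixels classified as background fully transparent. The near-white
// predicate below is an invented placeholder, not a real segmentation.
function clearBackground(
  frame: ImageData,
  isBackground: (r: number, g: number, b: number) => boolean
): ImageData {
  const d = frame.data;
  for (let i = 0; i < d.length; i += 4) {
    if (isBackground(d[i], d[i + 1], d[i + 2])) {
      d[i + 3] = 0; // alpha 0; a predetermined color could be written instead
    }
  }
  return frame;
}

// Example predicate: treat near-white pixels as background (assumption).
const keyNearWhite = (r: number, g: number, b: number) =>
  r > 240 && g > 240 && b > 240;
```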
In one aspect, a dynamic expression display device is provided, the device comprising:
a response module, configured to respond to a selection operation, triggered on a session interface, for selecting a dynamic expression; and
a display module, configured to display the dynamic body diagram of the selected dynamic expression in the session interface as a session message, and to play the animation element associated with the dynamic body diagram in the session interface, where the play area of the animation element at least includes a first area outside the dynamic body diagram.
In a possible implementation manner, the play area of the animation element further includes a second area, which is part or all of the display area of the dynamic body diagram.
In one possible implementation, the display module is configured to:
the animation element gradually crosses from one of the first area and the second area into the other for playing; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area.
In one possible implementation, the display module is configured to:
determining a reference position for animation drawing according to the play mode associated with the dynamic expression, and drawing and displaying, frame by frame starting from the reference position, each animation frame corresponding to the dynamic expression.
In one possible implementation, the display module is configured to:
determining the reference position according to the animation type corresponding to the play mode; and
drawing and displaying, frame by frame starting from the reference position, each animation frame corresponding to the dynamic expression according to the animation attribute information corresponding to the play mode, where the animation attribute information includes at least one of the motion trail, size, shape, color and animation special effect of the animation element.
In one possible implementation, the display module is configured to:
if the animation type of the dynamic expression is a trigger-type animation, determining the position of the animation element's trigger source as the reference position; or
if the animation type of the dynamic expression is an atmosphere-type animation, determining the center of the session interface, or the center of the dialog-box area in the session interface, as the reference position; or
if the animation type of the dynamic expression is a position-type animation, determining the center of the play area of the dynamic body diagram as the reference position.
In one possible implementation, the apparatus further includes a confirmation module configured to:
before the display module displays the dynamic body diagram of the selected dynamic expression in the session interface as a session message, displaying the dynamic expression in the input-box area of the session interface, and triggering sending of the dynamic expression when a confirmation operation confirming the send is detected.
In a possible implementation manner, the transparency of the background region of the dynamic body diagram is a transparent value, or the color of the background region of the dynamic body diagram is the background color of the session interface.
In a possible implementation manner, the dynamic expression is associated with a first identifier, which indicates that the associated dynamic expression is a cross-region dynamic expression.
In one aspect, a dynamic expression creation apparatus is provided, the apparatus including:
a display module, configured to display a video recording interface in response to an expression creation operation; and
a creation module, configured to obtain recorded video data in response to a video recording operation triggered on the video recording interface, and to store the video data in association with an animation element as a dynamic expression, where the video data serves as the dynamic body diagram of the dynamic expression, and the play area of the animation element at least includes a first area outside the dynamic body diagram.
In one possible implementation manner, the video recording interface includes a video recording area, and the creating module is configured to:
determining a reference position for animation drawing according to the play mode associated with the animation element, and compositing the sequence of video frames of the video data with the animation element, using the reference position as the coordinate origin, to obtain the sequence of animation frames corresponding to the dynamic expression; during compositing, the display area of the animation element at least includes the area outside the video recording area.
In one possible implementation, the display area of the animation element further includes part or all of the video recording area.
In one possible implementation, the creation module is configured to:
if the animation type of the animation element is a trigger-type animation, determining the position of the trigger source that triggers the animation element in the video data as the reference position;
if the animation type of the animation element is an atmosphere-type animation, determining the center of the video recording interface as the reference position; and
if the animation type of the animation element is a position-type animation, determining the center of the video recording area as the reference position.
In one possible implementation manner, the apparatus further includes a determining module configured to:
in response to a follow-up shooting operation on a target dynamic expression displayed in the dialog-box area of a session interface, extracting the animation element from the target dynamic expression, where the target dynamic expression is displayed in association with a first identifier indicating that the animation element of the associated dynamic expression can be displayed outside the display area of its dynamic body diagram; or
in response to an operation of selecting an animation material template or an animation icon, extracting the animation element from the selected animation material template, or taking the selected animation icon as the animation element, where both the selected template and the selected icon are displayed in association with a second identifier indicating that the corresponding animation element can be displayed outside the video recording area when synthesizing the dynamic expression.
In one possible implementation, the creation module is configured to:
determining the background area of each video frame in the video data;
adjusting the transparency of the background area of each video frame to a transparent value, or adjusting its color to a predetermined color; and
compositing each adjusted video frame with the animation element to obtain the sequence of animation frames corresponding to the dynamic expression.
In one aspect, a computing device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the steps of the dynamic expression display method described in the various possible implementations above.
In one aspect, a computing device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the steps of the dynamic expression creation method described in the various possible implementations above.
In one aspect, a storage medium is provided, storing computer-executable instructions configured to cause a computer to perform the steps of the dynamic expression display method described in the various possible implementations above.
In one aspect, a storage medium is provided, storing computer-executable instructions configured to cause a computer to perform the steps of the dynamic expression creation method described in the various possible implementations above.
In one aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to perform the steps of the dynamic expression display method described in the various possible implementations above.
In one aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to perform the steps of the dynamic expression creation method described in the various possible implementations above.
In the embodiments of the present application, after a selection operation for selecting a dynamic expression, triggered on a session interface, is detected, the selected dynamic expression is displayed in the session interface as a session message in response to that operation. Specifically, the play area of the animation element of the dynamic expression at least includes an area outside the dynamic body diagram of the dynamic expression (referred to here as the first area); that is, during display, the animation element can be played at least outside the play area of the dynamic body diagram. The play area of the animation element is therefore not limited to the conventional fixed-size display area set for a dynamic expression; it can cross that fixed-size area and make use of the display space beyond it. This breaks through the display-size limitation of conventional dynamic expressions, expands their display range, and provides a new play mechanism, so that a dynamic expression can express richer content; it also improves the flexibility and fun of dynamic expression display and enhances the display effect.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a schematic diagram of a dynamic expression;
FIG. 2a is a schematic diagram of a session interface in an embodiment of the present application;
FIG. 2b is another schematic diagram of a session interface in an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario in which the present application is applied;
FIG. 4 is a flowchart of a method for displaying dynamic expressions in an embodiment of the present application;
FIG. 5a is a schematic diagram of triggering selection of a dynamic expression in an embodiment of the present application;
FIG. 5b is another schematic diagram of triggering selection of a dynamic expression in an embodiment of the present application;
FIG. 6 is a diagram illustrating a comparison between a conventional dynamic expression and a cross-region dynamic expression in an embodiment of the present application;
FIG. 7 is a schematic coordinate diagram illustrating drawing of a trigger-type animation according to an embodiment of the present application;
FIG. 8 is a schematic coordinate diagram of creating an atmosphere animation according to an embodiment of the present application;
FIG. 9 is a schematic diagram showing coordinates of rendering a positional animation according to an embodiment of the present application;
FIG. 10 is a flowchart of a dynamic expression creation method in an embodiment of the present application;
FIG. 11a is a schematic diagram of performing an expression creation operation in an embodiment of the present application;
FIG. 11b is another schematic diagram of performing an expression creation operation in an embodiment of the present application;
FIG. 12 is a schematic diagram of a reverse video recording operation in an embodiment of the present application;
FIG. 13a is a schematic diagram of animation drawing during dynamic expression creation in an embodiment of the present application;
FIG. 13b is another schematic diagram of animation drawing during dynamic expression creation in an embodiment of the present application;
FIG. 13c is another schematic diagram of animation drawing during dynamic expression creation in an embodiment of the present application;
FIG. 14a is a block diagram of a dynamic expression display apparatus according to an embodiment of the present application;
FIG. 14b is another block diagram of the dynamic expression display device in the embodiment of the present application;
FIG. 15a is a structural block diagram of a dynamic expression creation apparatus in an embodiment of the present application;
FIG. 15b is another structural block diagram of the dynamic expression creation apparatus in an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a computing device in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments given herein without creative effort fall within the protection scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other as long as there is no conflict. Also, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof, which are intended to cover non-exclusive protection. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The "plurality" in the present application may mean at least two, for example, two, three or more, and the embodiments of the present application are not limited.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B both exist, or B exists alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it, unless otherwise specified.
Some technical terms referred to herein are explained below to facilitate understanding by those skilled in the art.
1. Instant messaging application: an application that provides an instant messaging function. Instant messaging refers to a service for sending and receiving internet messages instantly, allowing two or more people to transmit text messages, documents, voice and video in real time over a network; it has evolved into a comprehensive information platform integrating communication, information, entertainment, search, e-commerce, office collaboration, enterprise customer service and the like.
2. Dynamic expression: an expression with an animation effect. An expression is an image with a meaning-expressing function that reflects the inner activity, emotion or specific semantics of the user sending it; expressions include static expressions and dynamic expressions. Generally, a static expression is a single static picture, often in the PNG (Portable Network Graphics) file format, while a dynamic expression is an animation composed of multiple picture frames, often in the GIF file format.
A dynamic expression can comprise two parts: a dynamic body diagram and an animation element. The dynamic body diagram is the main part of the dynamic expression, such as the head portrait of the user who shot the dynamic expression or some cartoon image. The animation element is the element that embodies the animation special effect of the dynamic expression; it serves as an auxiliary element that helps the dynamic expression express itself better, and is a dynamic image with a special effect, such as a heart, a balloon, a water drop, a five-pointed star or text, in various sizes and colors. For example, in the dynamic expression shown in fig. 1, a boy makes a "finger heart" gesture, and as the gesture completes, a heart-shaped graphic pops up, gradually changing size and position. The image of the boy is thus the dynamic body diagram of the dynamic expression, corresponding to its dynamic body part, while the heart-shaped graphic is its animation element.
It should be noted that, since a dynamic expression is an animation comprising multiple image frames, the body part corresponding to the dynamic body diagram generally changes over time, for example in motion, posture or facial expression.
3. Session interface: also called a chat interface, for example an interface for presenting session messages in an instant messaging application; it includes private chat session interfaces between two users and group chat session interfaces among more than two users. A session interface generally includes a dialog-box area for presenting session messages that have been successfully sent and received, which may include text messages, voice messages and expression messages, and an input-box area for receiving session messages entered by the user. One possible session interface is shown in fig. 2a, and another in fig. 2b.
As mentioned above, the display of dynamic expressions in the related art is limited to a specified size. Take the first dynamic expression from top to bottom in fig. 2a: the size of its play area is always fixed when it plays in the session interface, so the dynamic body diagram of the girl and the associated animation element (the cheering characters raised in her right hand) can only be displayed within the designated inherent area. Such a display manner is rigid and monotonous, has poor flexibility, and may fail to convey the animation special effect the dynamic expression was meant to express.
Through analysis, the inventor of the present application found that the main reason the play mode of dynamic expressions is so limited in the related art is that the size of the play area is restricted, which solidifies the play mode. The inventor therefore considered breaking through this size limitation and displaying the dynamic expression outside the play area of the inherent size used in the related art. For example, the dynamic body diagram can still be displayed in the existing manner, i.e. within the area of the specified fixed size, while the animation element of the dynamic expression is displayed at least outside that fixed-size area. The animation element can then be played at least outside the play area of the dynamic body diagram, crossing the fixed-size area and making use of the display space beyond it. This breaks through the display-size limitation of conventional dynamic expressions, provides a brand-new play mechanism, expands the display range of dynamic expressions, allows richer content to be expressed, and improves the flexibility and fun of dynamic expression display.
To better understand the technical solution provided by the embodiments of the present application, some brief descriptions of applicable application scenarios are given below. It should be noted that the application scenarios described below are only used to illustrate the embodiments of the present application and are not limiting. In specific implementation, the technical solution provided by the embodiments can be applied flexibly according to actual needs.
Referring to fig. 3, fig. 3 shows an application scenario applicable to the embodiments of the present application. The scenario includes terminal device 301, terminal device 302 and server 303; both terminal devices can communicate with server 303, a client of an instant messaging application is installed on each terminal, and server 303 is the backend service device of that application. User 1 can use terminal device 301 for instant messaging with user 2 on terminal device 302, including text, voice and video communication, and the two users can also send each other expression information such as dynamic and static expressions. In particular, each user can send dynamic expressions using the dynamic expression display method provided by this application, and create dynamic expressions using the dynamic expression creation method provided by this application, improving interaction quality through the exchange of dynamic expressions. For example, user 1 creates a dynamic expression on terminal device 301 and chooses to send it to user 2; upon detecting the send operation, terminal device 301 uploads the relevant information of the dynamic expression to server 303, which forwards it to terminal device 302; terminal device 302 then displays the dynamic expression according to the received information, so that user 2 can view it and interact with user 1.
Server 303 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. Terminal devices 301 and 302 may be, but are not limited to, smartphones, tablet computers, notebook computers, desktop computers, smart speakers, smart watches and the like. A terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments provide the method operation steps shown in the following embodiments or drawings, more or fewer steps may be included in the method based on conventional or non-inventive labor. For steps with no necessary logical causal relationship, the execution order is not limited to that provided by the embodiments. When executed in an actual processing procedure or by an apparatus, the method can be executed sequentially or in parallel according to the order shown in the embodiments or drawings.
The embodiments of the present application provide a method for displaying a dynamic expression, which may be performed by any device capable of playing dynamic expressions, for example terminal device 301 or terminal device 302 in fig. 3, or server 303 in fig. 3. The method is shown in fig. 4, and the flowchart of fig. 4 is described below.
Step 401: detect a selection operation, triggered on the session interface, for selecting a dynamic expression.
During message interaction through the instant messaging application, when a user wants to send a dynamic expression to other users, the user can trigger a selection operation on the session interface to select the dynamic expression, and the device detects this selection operation.
In a possible implementation manner, the user performs the selection operation in an expression input panel. The expression input panel is a container storing expression thumbnails for expressions (both dynamic and static); the user can add an expression displayed in the dialog-box area to the panel, create a new expression in it, or delete expressions stored in it. Referring to fig. 5a, the user selects a dynamic expression by clicking its thumbnail in the expression input panel, and the click triggers display of the corresponding dynamic expression in the session interface, specifically in the dialog-box area. The user can thus choose a suitable dynamic expression from the panel's expression container as needed and send it to other users, with a large selection space.
In another possible implementation manner, the user directly performs a selection operation on a dynamic expression already displayed in the dialog-box area of the session interface. For example, referring to fig. 5b, the user performs a designated operation, such as a knuckle tap or a double-click, on a dynamic expression already shown in the dialog-box area, thereby selecting it and triggering its sending. That is, the user can directly select displayed dynamic expressions in the dialog-box area, which speeds up selection and makes it more efficient; moreover, since the dynamic effects of these expressions are already being presented, the user can quickly pick an expression of interest, making the selection targeted and effective.
Step 402: in response to the selection operation, display the dynamic body diagram of the selected dynamic expression in the session interface as a session message, and play the animation element associated with the dynamic body diagram in the session interface, where the play area of the animation element at least includes a first area outside the dynamic body diagram.
After detecting the user's selection operation, the device can trigger display of the selected dynamic expression in the dialog-box area of the session interface as a session message, i.e. send it to other users. Specifically, the device sends the information corresponding to the selected dynamic expression to the backend server, which forwards it to the other devices, thereby sending the expression message.
Alternatively, the device need not respond to the selection operation by immediately displaying the dynamic expression in the session interface as a session message. Instead, after the selection operation is detected, the dynamic expression can first be displayed in the input-box area of the session interface; if the user really wants to send it, the user performs a confirmation operation, and only after detecting that confirmation does the device trigger the send. In other words, after selecting a dynamic expression, the user gets a chance to reconsider while it stays in the input-box area. The related art has no such staging mechanism: a dynamic expression is sent directly once selected, which easily leads to erroneous operations and affects the accuracy and effectiveness of message sending. The staging mechanism in the input-box area in this embodiment improves that accuracy and effectiveness, and can also reduce the amount of data transmitted. The flow is sketched below.
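The staging mechanism amounts to a small two-step flow: hold the selected expression in the input box, send it only on confirmation. A sketch, with all names assumed:

```typescript
// Stage-then-confirm flow for sending a dynamic expression (names assumed).
interface Expression { id: string; }

let staged: Expression | null = null;

function onSelectExpression(expression: Expression): void {
  // Hold the expression in the input-box area instead of sending immediately.
  staged = expression;
}

function onConfirmSend(send: (e: Expression) => void): void {
  if (staged !== null) {
    send(staged); // only now is the session message actually sent
    staged = null;
  }
}
```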
In the embodiments of the present application, when a dynamic expression plays, its dynamic body diagram occupies a certain display area, and the play area of its animation element at least includes an area outside that display area; the area outside the dynamic body diagram occupied by the animation element is referred to as the first area. In other words, the animation element can be displayed at least across the boundary of the body diagram's display area. This play mode of the animation element is called "cross-region playing" here, and a dynamic expression with this effect is called a "cross-region dynamic expression". The animation element can thus make use of the display space outside the body diagram's display area, breaking through the display-size limitation of conventional dynamic expressions and expanding their display range. This brand-new play mechanism allows richer content to be expressed, adds entertainment value, enhances the display effect, encourages users to send expression messages, and improves the user experience.
Referring to fig. 6, the left diagram shows the play effect of a conventional dynamic expression: the dynamic body diagram of the "finger heart" expression (the boy making the gesture) and all its animation elements (the solid heart, the reticular heart and the irregular five-pointed star triggered by the gesture) are displayed inside a fixed-size display area (called the inherent display area), shown as a dashed rectangle. The display of the whole dynamic expression is clearly constrained, making it hard to show the animation special effects fully. The right diagram in fig. 6 illustrates the "cross-region playing" effect of this embodiment: compared with the conventional play mode, animation elements such as the solid heart, reticular heart and irregular five-pointed star are displayed partly inside and partly outside the inherent display area. As the animation plays on, the animation elements can gradually cross from the inherent display area to the outside, spreading from inside to outside until they cover most or all of the dialog-box area, achieving a near full-screen play effect; in the right diagram, the reticular heart elements have spread onto the bubble of the previous message. This cross-region dynamic display provides a brand-new display mechanism for dynamic expressions, enriching their display effect and making them more flexible and entertaining.
In this embodiment of the application, "cross-region dynamic expression" is associated with a first identifier, where the first identifier is used to indicate that the associated dynamic expression is a "cross-region dynamic expression," that is, an animation element that the first identifier can be used to indicate the associated dynamic expression can be displayed outside a display region of a dynamic body diagram of the dynamic expression, that is, a special attribute of "cross-region playing" of the dynamic expression can be represented by the first identifier, so that when the dynamic expression associated with the first identifier is displayed, a playing effect of "cross-region playing" can be achieved.
In a specific implementation, the first identifier may be displayed in association with the dynamic expression. Whether it is an expression thumbnail in the expression input panel or a dynamic expression already displayed in the dialog-box area, if the dynamic expression has the cross-region display characteristic, i.e. it is a "cross-region dynamic expression", a specific first identifier can be marked on the corresponding thumbnail or on the expression itself. For example, in fig. 5a the first identifier is a black triangle: the upper-left corners of the second and third dynamic expressions in the expression input panel, and of the two dynamic expressions in the dialog-box area, all carry the black triangle mark. An explicitly displayed first identifier marks out the special dynamic expressions whose animation elements can be played across regions, distinguishing them from conventional dynamic expressions, strengthening the prompt and helping the user choose. The triangle is only one possible explicit form of the first identifier; other forms are possible, and the embodiments of the present application are not limited in this respect.
In another possible implementation, the first identifier is not displayed and can be considered to exist implicitly, implicitly reflecting the "cross-region playing" effect. For example, a "cross-region dynamic expression" may show no associated identifier, yet the cross-region display effect becomes visible when it is played (for example, after a click); alternatively, the first identifier is displayed only when the cursor points at such an expression, i.e. some trigger turns the implicit identifier into an explicit one. With an implicitly identified dynamic expression, the sudden cross-region effect can surprise the user and enhance the user experience.
In this embodiment of the application, the transparency of the background region of the dynamic body diagram may be a transparent value, or the color of the background region may be the background color of the session interface. This can be achieved by a background-removal operation on the dynamic body diagram, performed either when the dynamic expression is created or on each animation frame during real-time drawing. As a result, when the dynamic expression is displayed in the session interface as a session message, it blends more naturally into the whole interface, improving the display effect.
In a specific implementation, call the play area of the dynamic body diagram the inherent display area, and call the part of the animation element's play area outside the body diagram the first area. The first area can be any area other than the inherent display area: all of it, or a part of it, such as the region to the left of, below or above the inherent display area; the size and shape of the first area are not limited in the embodiments of the present application. The play area of the animation element at least includes the first area, i.e. the element is displayed at least outside the inherent display area of the body diagram. Since the motion trail, shape, color and special effect of an animation element generally change dynamically, the play area of the animation element can specifically fall into the following cases.
In case 1, the animation element is played only in the first area. Throughout the playing of the dynamic expression, the animation element never intersects the display area of the dynamic body diagram; the two are independent, e.g. the element appears in the first area from the very beginning. The body diagram is displayed in the inherent display area while the animation element is displayed outside it, so the two complement each other and enhance the display effect of the dynamic expression.
In case 2, the playing area of the animation element includes a second area in addition to the first area, where the second area is part or all of the display area of the dynamic body diagram (i.e., the inherent display area); that is, the animation element may be displayed jointly through the first area and the second area.
In case 2, one implementation is that the animation element gradually crosses from one of the first area and the second area to the other during playing, for example gradually crossing from the first area to the second area, or from the second area to the first area, realizing a wandering, crossing animation effect between the two areas and thereby enhancing the display effect of the dynamic expression.
In case 2, another implementation is that the animation elements are played in the first area and the second area respectively. In practice, the animation elements may include multiple types or multiple animation icons at the same time; different animation elements may be displayed independently in the first area and the second area, or the same animation element may be displayed in the two areas at different times, for example played first in the second area and then in the first area.
According to the difference of the areas occupied by the animation elements during playing, different display schemes can be adopted to effectively display the animation elements, and the display effect of the dynamic expressions is enhanced.
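The cases above can be summarized, purely as an illustrative model with hypothetical names, by an enumeration of play-region modes attached to a dynamic expression; the Frame type is the one from the earlier sketch.

```kotlin
// Where the animation elements are allowed to play relative to the
// inherent display area of the dynamic body diagram.
enum class PlayRegionMode {
    FIRST_ONLY,        // case 1: elements never enter the inherent display area
    CROSSING,          // case 2: an element gradually crosses between the areas
    BOTH_INDEPENDENT   // case 2: elements play in both areas independently
}

data class DynamicExpression(
    val bodyDiagramFrames: List<Frame>,   // the dynamic body diagram
    val animationElements: List<String>,  // e.g. "solid-heart", "mesh-heart"
    val regionMode: PlayRegionMode
)
```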
The essence of playing a dynamic emoticon in the related art is to play a prerecorded video animation, which can be understood as purely static playing: the traditional dynamic emoticon is displayed in a display area of fixed size, and even if its position in the conversation interface changes, for example when the sent emoticon is pushed toward the top of the screen by messages sent later, the static playing mode cannot change. In the embodiment of the present application, because the animation elements of the dynamic expression are displayed at least in the area outside the dynamic body diagram, the animation elements are drawn while they are played. Specifically, a reference position for animation drawing is determined according to the playing mode associated with the dynamic expression, and each animation frame corresponding to the dynamic expression is drawn and displayed frame by frame starting from that reference position; that is, the animation frames included in the dynamic expression are drawn in real time to obtain a sequence of animation frames, and each frame of that sequence is displayed in real time in turn. The animation effect can thus be seen clearly spreading outward from the inherent display area of the dynamic body diagram.
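A minimal Kotlin sketch of this "draw while playing" loop is given below; all names are hypothetical. The reference position is re-resolved before every frame, so the drawing follows the expression when later messages move it, as in fig. 7.

```kotlin
// Abstract drawing surface; a real implementation would wrap the
// platform's canvas or GL surface.
interface Canvas2D {
    fun drawFrame(frame: Frame, originX: Int, originY: Int)
}

// Draws and displays the sequence animation frames one by one in real time.
fun playCrossRegion(
    frames: List<Frame>,
    canvas: Canvas2D,
    resolveReference: () -> Pair<Int, Int>, // chosen by the play mode, see below
    frameIntervalMs: Long = 33L             // roughly 30 frames per second
) {
    for (frame in frames) {
        val (x, y) = resolveReference()     // may differ between frames (fig. 7)
        canvas.drawFrame(frame, x, y)       // drawn, then displayed immediately
        Thread.sleep(frameIntervalMs)
    }
}
```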
Different dynamic expressions have different animation effects and, correspondingly, different play modes. The play mode of a dynamic expression describes its overall animation effect and may include two reference factors: the animation type and the animation attribute information.
The animation type refers to the type of a dynamic expression and, according to animation effects, can be divided into, for example, trigger animations, atmosphere animations, and position animations. A trigger animation means that the animation element is triggered by some action of the dynamic body part, such as a heart-shaped element triggered by a "finger heart" gesture, a balloon element triggered by a pouting action, or a series of love-heart elements triggered by a blink. An atmosphere animation adds an overall atmosphere effect to the whole dynamic expression, such as red love-heart elements diffusing over the whole dialog box area, or raindrop elements spreading to the middle of the dialog box area. A position animation refers to an animation element having a positional relationship with the dynamic body diagram, such as a purple heart circumscribing the dashed frame of the display area of the dynamic body diagram, or a circumscribed circle concentric with that display area.
The animation attribute information describes the animation effect of the animation element and may include one or more of the motion trajectory, size, shape, color, and animation special effect, and may further include other description information. The animation attribute information is configured when the dynamic expression is created; when the dynamic expression is played later, a series of animation frames is dynamically drawn and played according to this preset attribute information.
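As an illustrative data model only, with hypothetical names, the two reference factors of a play mode could be represented as follows.

```kotlin
// The three animation types named in this embodiment.
enum class AnimationType { TRIGGER, ATMOSPHERE, POSITION }

// Preset description of the animation effect of the animation elements.
data class AnimationAttributes(
    val motionTrail: List<Pair<Int, Int>> = emptyList(),
    val sizePx: Int = 0,
    val shape: String = "",        // e.g. "heart", "five-pointed-star"
    val colorArgb: Int = 0,
    val specialEffect: String = "" // e.g. "diffuse", "blink"
)

// A play mode combines the animation type with the attribute information.
data class PlayMode(val type: AnimationType, val attributes: AnimationAttributes)
```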
Because different dynamic expressions have different play modes, the coordinate systems used during real-time drawing may also differ. Specifically, the reference position may be determined according to the animation type corresponding to the play mode of the dynamic expression and used as the coordinate origin of the drawing coordinate system; in other words, the reference coordinate system and its coordinate origin are determined by the animation type, after which each animation frame is drawn and displayed frame by frame from the determined reference position according to the animation attribute information corresponding to the play mode. The position at which each animation frame of each dynamic expression is drawn is thus associated with the animation type and the animation attribute information of that dynamic expression, so that dynamic expressions of various animation types can be drawn reasonably in real time, the differences between them are reflected, and the playing effect is enhanced.
When the animation type of the dynamic expression is a trigger animation, for example a love-heart animation element triggered by the user's "finger heart" gesture, the trigger source position of the animation element can be determined as the reference position. The trigger source position is the original position at which the animation element is triggered: for a heart triggered by a "finger heart" gesture it is the position of the gesture, and for a balloon triggered by a pouting action it is the position of the mouth. Referring to fig. 7, at time T1 the dynamic emoticon is displayed in the middle of the conversation interface; at time T2 after T1, a new conversation message has pushed it up, so the display position of the dynamic expression is shifted up a little. When drawing is performed at T1 and at T2, both drawings take the trigger source position as the coordinate origin, and the reference positions for drawing at T1 and T2 are (x1, y1) and (x1', y1') respectively. Because the two coordinate positions differ, the drawing coordinates are transformed dynamically to follow the changing display position of the dynamic expression, achieving a real-time playing effect: from T1 to T2, the solid heart triggered by the gesture gradually rises and grows larger, giving a cartoon-like animation effect of gradual rising and enlargement.
When the animation type of the dynamic expression is an atmosphere animation, such as the full-screen animation of mesh hearts and irregular five-pointed stars diffused over a large part of the screen in fig. 8, the center position of the whole conversation interface (including the dialog box area and the input box area) can be determined as the reference position for drawing, or the center position of the dialog box area within the conversation interface can be used. Because an atmosphere animation generally expresses the overall atmosphere and is delivered over a large area, taking the whole conversation interface as the reference coordinate system allows accurate image drawing. As shown in fig. 8, the size and position of the mesh hearts change between time T1 and time T2 after it, but the reference position of the drawing coordinates does not change: at both times it is (x2, y2).
When the animation type of the dynamic expression is a position animation, the animation element has a certain positional relationship with the dynamic body diagram, such as the dashed heart circumscribing the dynamic body diagram shown in fig. 9, so the center position of the playing area of the dynamic body diagram may be determined as the reference position for frame drawing. A position animation may be displayed at a fixed size at a certain position, displayed intermittently in a blinking manner, or displayed after being hidden for a period of time. In fig. 9, the dashed heart blinks from time T1 to time T2 after it, and as the display position of the dynamic expression rises, the coordinates of the reference position change from (x3, y3) to (x3', y3'); that is, the coordinates used to draw the image frames change dynamically. In addition, when drawing a position animation, the four vertex coordinates of the display area of the dynamic body diagram can be considered at the same time; changes in the display position of the dynamic body diagram are reflected in changes of these four vertex coordinates, so the animation frames can be drawn more accurately.
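Summarizing figs. 7 to 9, a hypothetical resolver for the drawing origin per animation type might look like the following; a closure over it is what the playing loop sketched earlier would call before each frame. All geometry inputs are assumptions for illustration.

```kotlin
// Axis-aligned rectangle with an integer center point.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    val center: Pair<Int, Int>
        get() = Pair((left + right) / 2, (top + bottom) / 2)
}

// Picks the coordinate origin for drawing according to the animation type.
fun resolveReference(
    type: AnimationType,
    triggerSource: Pair<Int, Int>, // current gesture/mouth position (fig. 7)
    sessionInterface: Rect,        // whole conversation interface (fig. 8)
    bodyDiagramArea: Rect          // inherent display area (fig. 9)
): Pair<Int, Int> = when (type) {
    AnimationType.TRIGGER -> triggerSource              // moves with the gesture
    AnimationType.ATMOSPHERE -> sessionInterface.center // fixed, e.g. (x2, y2)
    AnimationType.POSITION -> bodyDiagramArea.center    // tracks the body diagram
}
```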
The above describes the process of displaying a "cross-region dynamic expression" in the embodiment of the present application. Before such a "cross-region dynamic expression" can be used, it must first be created. Based on the same inventive concept, the embodiment of the present application therefore further provides a dynamic expression creating method, through which the process of creating a "cross-region dynamic expression" is explained; the method is shown in fig. 10, and the flow shown in fig. 10 is described as follows.
Step 1001: in response to an expression creating operation, display a video recording interface.
When a user wants to create a dynamic expression, the user performs an expression creating operation; triggered by this operation, the device displays a video recording interface, in which video data is then collected and the dynamic expression is synthesized.
For example, as shown in the left diagram of fig. 11a, the user performs an emoticon creation operation in the emoticon input panel by clicking the "+" mark therein, where the "+" mark is an identifier indicating creation of a dynamic emoticon, and the device presents the video recording interface shown in the right diagram of fig. 11a in response to the emoticon creation operation.
As another example, as shown in the left diagram of fig. 11b, the user may perform an expression creating operation on a dynamic expression displayed in the dialog box area of the session interface, for example a long-press operation after clicking; triggered by this operation, the device displays the video recording interface shown in the right diagram of fig. 11b. In the manner of fig. 11b, the user directly performs a follow-up shooting operation on a dynamic expression already shown in the dialog box area, so that in the process of creating the new dynamic expression, the animation elements of the dynamic expression targeted by the follow-up operation can be extracted directly; the followed expression may be one sent by the user or one sent by another user participating in the conversation. The user can thus quickly pick a dynamic expression with a favorite animation effect from the dialog box area and imitate it, which enhances interest.
In addition, the video recording interface displayed in response to the expression creating operation, as shown in fig. 11a or fig. 11b, may include a video recording area, animation material templates, and a shooting button, where the video recording area is the video viewfinder in which the device captures video data.
Step 1002: in response to a video recording operation triggered on the video recording interface, obtain the recorded video data, and store the video data and animation elements in association as a dynamic expression, where the video data is stored as the dynamic body diagram of the dynamic expression, and the playing area of the animation elements at least includes a first area outside the dynamic body diagram.
In the displayed video recording interface, the user may perform a video recording operation, as shown in the left diagram of fig. 12, for example clicking the shooting button or pressing it for a long time. Video data is then collected in the video recording area, and the collected video data and the predetermined animation elements are combined into a dynamic expression. In the combining process, the collected video data serves as the data corresponding to the dynamic body diagram of the dynamic expression, and the playing area of the animation elements at least includes a first area outside the dynamic body diagram. That is, when the dynamic expression created in the embodiment of the present application is displayed, the "cross-region playing" effect shown in fig. 6 can be achieved, which expands the display range of the dynamic expression, specifically the display range of the animation elements, providing a brand-new dynamic display scheme and enhancing the display effect of dynamic expressions.
As shown in the right diagram of fig. 12, the synthesized animation elements are a solid heart, multiple mesh hearts, and irregular five-pointed stars. The video data and the animation elements can be dynamically synthesized in such a manner that the animation elements to be synthesized are displayed at least outside the video recording area; that is, in the process of synthesizing the video data and the animation elements, the display area of the animation elements at least includes the area outside the video recording area. In one possible implementation, the animation element is displayed only in the area outside the video recording area, i.e., the display area of the animation element and the video recording area are two mutually exclusive areas. In another possible implementation, the display area of the animation element further includes part or all of the video recording area, that is, the animation element may be displayed both inside and outside the video recording area, for example gradually crossing from inside the video recording area to outside it, or the reverse; here the crossing may mean a constant number of animation elements that merely move in position, or a gradually increasing number of animation elements that achieve a diffusion-like animation effect.
Conventionally, when a dynamic expression is synthesized, the animation elements to be synthesized can only be displayed within the video recording area, whereas the animation elements of the embodiment of the present application can be displayed at least outside the video recording area, in order to support the cross-region playing animation effect of the dynamic expression. Therefore, in the process of synthesizing the dynamic expression, a reference position for animation drawing can be determined according to the play mode associated with the animation elements to be synthesized, and the sequence video frames of the video data are synthesized with the animation elements taking the determined reference position as the coordinate origin, obtaining the sequence animation frames corresponding to the dynamic expression; the dynamic expression can then be stored by saving the drawn sequence animation frames. That is to say, in the process of synthesizing the "cross-region dynamic expression", because the animation elements are to be displayed at least outside the video recording area, the drawing coordinate system and the corresponding coordinate origin can be selected dynamically, according to the play mode of the animation elements to be synthesized and the differences in display position and display effect, to draw the animation frames in real time and meet the animation effect requirements of the animation elements.
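The synthesis step can be sketched as follows; names are hypothetical, and PlayMode, Rect, Frame, and resolveReference reuse the earlier sketches. The sketch assumes every frame buffer is allocated at the full canvas size, so the overlaid elements can extend past the recording area.

```kotlin
// Overlay compositing for two same-size frames: the overlay pixel wins
// wherever it is not fully transparent.
fun compose(base: Frame, overlay: Frame): Frame {
    val out = base.argb.copyOf()
    for (i in out.indices) {
        val p = overlay.argb[i]
        if ((p ushr 24) != 0) out[i] = p
    }
    return Frame(base.width, base.height, out)
}

// Synthesizes the recorded video frames with the animation elements around
// one coordinate origin chosen from the elements' play mode, yielding the
// sequence animation frames of the dynamic expression.
fun synthesize(
    videoFrames: List<Frame>,  // recorded in the video recording area
    elementFrameAt: (frameIndex: Int, origin: Pair<Int, Int>) -> Frame,
    playMode: PlayMode,
    triggerSource: Pair<Int, Int>,
    recordingInterface: Rect,  // whole video recording interface
    recordingArea: Rect        // the viewfinder itself
): List<Frame> {
    val origin = resolveReference(
        playMode.type, triggerSource, recordingInterface, recordingArea
    )
    return videoFrames.mapIndexed { i, video ->
        compose(video, elementFrameAt(i, origin))
    }
}
```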
As mentioned above, dynamic expressions may have different animation types, that is, the animation effects of their animation elements differ. In the embodiment of the present application, the animation types of the animation elements mainly include three categories: trigger animations, atmosphere animations, and position animations. Coordinate drawing for each of the three types of animation elements is described below.
For a trigger-type animation element, such as the solid heart shown in fig. 13a, the trigger source position of the element within the video data in the video recording area can be determined as the reference position, i.e., the coordinate origin, for animation drawing. Because the motion of a trigger animation is performed relative to the trigger source position, drawing the motion position of the trigger-type animation element in real time with the trigger source as the origin preserves that motion accurately, for example rising gradually from the trigger source position to outside the video recording area, thereby ensuring the animation effect of the animation element.
For atmosphere-type animation elements, such as the mesh heart and the irregular five-pointed star shown in fig. 13b, which are generally displayed over a large area such as the full screen, the center position of the whole video recording interface can be determined as the reference position, i.e., the coordinate origin, for animation drawing. Because the motion of an atmosphere animation is relative to the whole interface, drawing the animation elements in real time around the center position of the video recording interface represents the changes in motion position accurately and ensures the animation effect.
For position-type animation elements, such as the dashed heart surrounding the outer frame of the video recording area shown in fig. 13c, which move relative to the video recording area, the center position of the video recording area can be determined as the reference position, i.e., the coordinate origin, and the coordinates of the four vertices of the video recording area can also be considered, so that the motion position of the position animation is drawn accurately and the animation effect is ensured.
After the sequence animation frames are obtained by drawing, they can be stored in the GIF picture format, which yields the dynamic expression. In the process of synthesizing and storing the related data, the reference position (i.e., the coordinate origin used during animation drawing) also needs to be stored, so that this coordinate position information can be extracted to display the motion position of the animation elements accurately when the dynamic expression is played later. For example, for a trigger-type dynamic expression, the trigger-type sequence animation frames and the coordinate position of the trigger source need to be saved; for an atmosphere-type dynamic expression, only the atmosphere-type sequence animation frames need to be saved (because the drawing coordinate position of an atmosphere animation is fixed, its coordinate origin information need not be saved); for a position-type dynamic expression, the position-type sequence animation frames, the center position of the dynamic body diagram (which may be understood as the center of the video recording area), and the four corner vertex coordinates of the dynamic body diagram (which may be understood as the four vertex coordinates of the video recording area) need to be saved.
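What must be saved per animation type, as listed above, could be modeled by a record like the following; the names are hypothetical, with a GIF path standing in for the stored sequence animation frames.

```kotlin
// Per-type persistence record for a created dynamic expression.
sealed class SavedExpression(val gifPath: String) {
    // Trigger type: frames plus the trigger source coordinates.
    class Trigger(gifPath: String, val triggerSource: Pair<Int, Int>) :
        SavedExpression(gifPath)

    // Atmosphere type: the drawing origin is fixed, so frames alone suffice.
    class Atmosphere(gifPath: String) : SavedExpression(gifPath)

    // Position type: frames plus the body diagram's center and four vertices.
    class Position(
        gifPath: String,
        val bodyCenter: Pair<Int, Int>,
        val vertices: List<Pair<Int, Int>> // four corner vertex coordinates
    ) : SavedExpression(gifPath)
}
```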
In a specific implementation process, one or more types of animation elements may be selected for the same dynamic expression according to use requirements; for example, only a trigger-type animation element may be selected, or a trigger-type animation element and an atmosphere-type animation may be selected together, and so on. That is to say, the "cross-region dynamic expression" can freely mix and combine various animation elements with high flexibility, realizing diversified dynamic effects and meeting the differentiated requirements of users.
Further, the animation elements to be synthesized may be selected in advance, before synthesizing the dynamic expression. One way to select animation elements is to select an animation material template directly: the device responds to the operation of selecting a template and extracts the animation elements from the selected template. Obtaining animation elements through animation material templates is efficient, and the combined effect is generally good. Another way is to select one or more animation icons directly and combine them into animation elements: the device responds to the operation of selecting one or more animation icons and determines the selected icons as the animation elements; in this way, the size and position of each animation element are adjusted manually by the user, so that users can flexibly match various types of animation elements to their own use requirements. Both the animation material templates and the selectable animation icons are associated with a second identifier, for example displayed in association with it, and the second identifier is used to indicate that the corresponding animation elements can be displayed outside the video recording area to synthesize a dynamic expression.
Yet another way to select animation elements is to extract the animation elements to be synthesized from a target dynamic expression, in response to a follow-up shooting operation on that target dynamic expression displayed in the dialog box area of the session interface; that is, the animation elements in the target dynamic expression can be used directly as the animation elements to be synthesized. For example, as shown in fig. 11b, after the user performs an expression creating operation on a target dynamic expression, which can at the same time be understood as a follow-up shooting operation, the animation elements can be extracted from the target dynamic expression and used in the newly created dynamic expression. Animation elements are thus obtained quickly, and the user can directly imitate favorite animation elements from the conversation messages, meeting the user's use requirements. The target dynamic expression being followed is associated with a first identifier, for example displayed directly in association with it; the first identifier is, for example, the black triangle mark shown in fig. 11b, or may be another mark, which is not limited in the embodiment of the present application.
In addition, in the process of synthesizing the dynamic expression, the human body contour information in each video frame of the video data may be determined, the background area of each video frame determined according to that contour information, the transparency of the background area of each video frame adjusted to a transparent value, or the color of the background area adjusted to a predetermined color, for example the background color of the current session interface, and finally each adjusted video frame synthesized with the animation elements to obtain the dynamic expression. That is to say, the background area of the dynamic body diagram can be removed during synthesis, so that the dynamic expression blends into the session interface when it is displayed later, eliminating any abrupt feeling and enhancing the display effect.
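A minimal sketch of this creation-time background removal follows; the names are hypothetical, detectBodyMask stands in for whatever human-contour detector is used, and removeBackground is the function from the earlier sketch.

```kotlin
// Clears (or recolors) the background of every video frame before it is
// composed with the animation elements, using a human-contour mask.
fun clearBackgrounds(
    videoFrames: List<Frame>,
    detectBodyMask: (Frame) -> Array<BooleanArray>, // true = inside the contour
    predeterminedColor: Int? = null                 // e.g. session background
): List<Frame> = videoFrames.map { frame ->
    val mask = detectBodyMask(frame)
    removeBackground(
        frame,
        isBackground = { x, y -> !mask[y][x] },     // outside the human contour
        sessionBgColor = predeterminedColor
    )
}
```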
In the embodiment of the application, when a dynamic expression is played, its animation elements can cross out into the display area outside the dynamic body diagram and make use of the display space beyond the body diagram's inherent size, breaking through the display-size limitation of traditional dynamic expressions and providing a brand-new dynamic expression playing mechanism. The display range of the dynamic expression is enlarged, such dynamic expressions can express richer content, and at the same time the flexibility and interest of dynamic expression display are improved and the display effect is enhanced. In addition, in the process of playing dynamic expressions, an adaptive reference coordinate system is selected for each dynamic expression to draw the animation frames in real time, so the display of animation elements is not limited by position and the cross-region animation display effect is realized.
Based on the same inventive concept, the embodiment of the application provides a dynamic expression display device, which may be a hardware structure, a software module, or a combination of a hardware structure and a software module. The dynamic expression display device is, for example, the terminal device 301 or the terminal device 302 in fig. 3, or may be a functional device provided in the terminal device 301 or the terminal device 302. Referring to fig. 14a, the dynamic expression display apparatus in the embodiment of the present application includes a response module 1401 and a display module 1402, where:
a response module 1401, configured to respond to a selection operation for selecting a dynamic expression triggered through a session interface;
the display module 1402 is configured to display the dynamic body diagram of the selected dynamic emoticon in the conversation interface as a conversation message, and play an animation element associated with the dynamic body diagram in the conversation interface, where the play area of the animation element at least includes a first area outside the dynamic body diagram.
In a possible implementation manner, the playing area of the animation element further comprises a second area, and the second area is a part or all of the display area of the dynamic body diagram.
In one possible implementation, the display module 1402 is configured to:
gradually playing the animation element from one of the first area and the second area across to the other area; or
playing the animation element in the first area; or
the animation element is played in the first area and the second area.
In one possible implementation, the display module 1402 is configured to:
determining a reference position for animation drawing according to the playing mode associated with the dynamic expression, and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame starting from the reference position.
In one possible implementation, the display module 1402 is configured to:
determining a reference position according to the animation type corresponding to the play mode; and
drawing and displaying each animation frame corresponding to the dynamic expression frame by frame starting from the reference position according to the animation attribute information corresponding to the play mode, where the animation attribute information includes at least one of the motion trail, the size, the shape, the color, and the animation special effect of the animation element.
In one possible implementation, the display module 1402 is configured to:
if the animation type of the dynamic expression is a trigger animation, determining the trigger source position of the animation element as the reference position; or
if the animation type of the dynamic expression is an atmosphere animation, determining the central position of the conversation interface, or the central position of the dialog box area in the conversation interface, as the reference position; or
if the animation type of the dynamic expression is a position animation, determining the central position of the playing area of the dynamic body diagram as the reference position.
In a possible implementation manner, please refer to fig. 14b, the dynamic expression display apparatus in the embodiment of the present application further includes a confirmation module 1403, configured to:
before the display module 1402 displays the dynamic body diagram of the selected dynamic emoticon as a session message in the session interface, display the dynamic emoticon in the input box area of the session interface, and trigger sending of the dynamic emoticon when a confirmation operation for determining to send the dynamic emoticon is detected.
In a possible implementation, the transparency of the background region of the dynamic body map is a transparent value, or the color of the background region of the dynamic body map is the background color of the conversation interface.
In one possible implementation manner, the dynamic expression is associated with a first identifier, and the first identifier is used for indicating that the associated dynamic expression is a cross-region dynamic expression.
All relevant contents of each step involved in the embodiment of the dynamic expression display method can be cited to the functional description of the functional module corresponding to the dynamic expression display device in the embodiment of the present application, and are not described herein again.
Based on the same inventive concept, the embodiment of the present application provides a dynamic expression creating device, which may be a hardware structure, a software module, or a hardware structure plus a software module. The dynamic expression creating apparatus is, for example, the terminal device 301 or the terminal device 302 in fig. 3 described above, or may be a function apparatus provided in the terminal device 301 or the terminal device 302. Referring to fig. 15a, the dynamic expression creation apparatus in the embodiment of the present application includes a display module 1501 and a creation module 1502, where:
the display module 1501 is used for responding to the expression creating operation and displaying a video recording interface;
the creating module 1502 is configured to respond to a video recording operation triggered on the video recording interface, obtain recorded video data, and store the video data and an animation element in an associated manner as a dynamic expression, where the video data is stored as a dynamic body diagram of the dynamic expression, and a playing area of the animation element at least includes a first area outside the dynamic body diagram.
In one possible implementation, the video recording interface includes a video recording area, and the creating module 1502 is configured to:
determining a reference position for animation drawing according to a playing mode associated with the animation element, and synthesizing a sequence video frame of the video data and the animation element by taking the reference position as a coordinate origin to obtain a sequence animation frame corresponding to the dynamic expression; wherein, in the process of composition, the display area of the animation element at least comprises the area outside the video recording area.
In one possible implementation, the display area of the animation element also includes part or all of the video recording area.
In one possible implementation, the creating module 1502 is configured to:
if the animation element is a trigger animation, determining the position of a trigger source of the trigger animation element in the video data as a reference position;
if the animation type of the animation element is atmosphere animation, determining the central position of the video recording interface as a reference position;
and if the animation type of the animation element is a position animation, determining the central position of the video recording area as the reference position.
In a possible implementation manner, referring to fig. 15b, the dynamic expression creation apparatus in this embodiment of the application further includes a determining module 1503 to:
responding to a follow-up shooting operation aiming at a target dynamic expression displayed in a dialog box area of a session interface, and extracting animation elements from the target dynamic expression, wherein the target dynamic expression is associated with a first identifier used for indicating that the associated dynamic expression is a cross-region dynamic expression; or
responding to the operation of selecting an animation material template or animation icons, and extracting animation elements from the selected animation material template, or determining the selected animation icons as the animation elements, wherein the selected animation material template and the selected animation icons are both associated with a second identifier used for indicating that the corresponding animation elements can be displayed outside the video recording area so as to synthesize dynamic expressions.
In one possible implementation, the creating module 1502 is configured to:
determining a background area of each video frame in the video data;
adjusting the transparency of the background area of each video frame to a transparent value, or adjusting the color of the background area of each video frame to a predetermined color;
and synthesizing each adjusted video frame and animation element to obtain a sequence animation frame corresponding to the dynamic expression.
All relevant contents of each step involved in the embodiment of the dynamic expression creating method can be cited to the functional description of the functional module corresponding to the dynamic expression creating device in the embodiment of the present application, and are not described herein again.
The division of the modules in the embodiments of the present application is schematic and is only one way of dividing logical functions; in actual implementation there may be other division manners. In addition, each functional module in each embodiment of the present application may be integrated in one processor, may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in hardware or as a software functional module.
Based on the same inventive concept, the present application further provides a computing device, which may execute the steps of the methods shown in fig. 4 and fig. 10, and the computing device is, for example, the terminal device 301 or the terminal device 302 in fig. 3, or may also be the server 303 in fig. 3. Referring to fig. 16, the computing device in the embodiment of the present application includes at least one processor 1601 and a memory 1602 connected to the at least one processor, and the specific connection medium between the processor 1601 and the memory 1602 is not limited in the embodiment of the present application, for example, the processor 1601 and the memory 1602 may be connected by a bus, and the bus may be divided into an address bus, a data bus, a control bus, and the like.
In the embodiment of the present application, the memory 1602 stores instructions executable by the at least one processor 1601, and the at least one processor 1601 can execute the instructions stored in the memory 1602 to perform the steps included in the dynamic expression display method or the dynamic expression creation method described above.
The Processor 1601 may be a general purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
Memory 1602, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The Memory may include at least one type of storage medium, and may include, for example, a flash Memory, a hard disk, a multimedia card, a card-type Memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), a charged Erasable Programmable Read Only Memory (EEPROM), a magnetic Memory, a magnetic disk, an optical disk, and so on. The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 1602 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function to store program instructions and/or data.
The processor 1601 is a control center of the computing device, and can connect various parts of the computing device through various interfaces and lines, and perform various functions and process data of the computing device by executing or executing instructions stored in the memory 1602 and calling data stored in the memory 1602, so as to monitor the computing device as a whole. Alternatively, the processor 1601 may include one or more processing units, and the processor 1601 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1601. In some embodiments, the processor 1601 and the memory 1602 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
Further, the computing device in this embodiment of the application may further include an input unit 1603, a display unit 1604, a radio frequency unit 1605, an audio circuit 1606, a speaker 1607, a microphone 1608, a Wireless Fidelity (WiFi) module 1609, a bluetooth module 1610, a power source 1611, an external interface 1612, a headphone jack 1613, and other components. Those skilled in the art will appreciate that FIG. 16 is merely exemplary of a computing device and is not intended to limit the computing device, which may include more or fewer components than those shown, or may combine certain components, or different components.
Input unit 1603 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computing device. For example, input unit 1603 may include a touch screen 1614 and other input devices 1615. The touch screen 1614 may collect touch operations by a user (e.g., operations by the user on or near the touch screen 1614 using any suitable object such as a finger, a joint, a stylus, etc.), i.e., the touch screen 1614 may be used to detect touch pressure and touch input position and touch input area, and drive the corresponding connection device according to a preset program. The touch screen 1614 may detect a touch operation of the touch screen 1614 by a user, convert the touch operation into a touch signal and transmit the touch signal to the processor 1601, or may transmit touch information of the touch operation to the processor 1601, and may receive a command from the processor 1601 and execute the command. The touch information may include at least one of pressure magnitude information and pressure duration information. The touch screen 1614 may provide an input interface and an output interface between the computing device and the user. In addition, the touch screen 1614 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch screen 1614, the input unit 1603 may include other input devices 1615. For example, other input devices 1615 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1604 may be used to display information input by or provided to the user as well as various menus of the computing device. Further, the touch screen 1614 may cover the display unit 1604; when the touch screen 1614 detects a touch operation on or near it, it transmits the signal to the processor 1601 so that the pressure information of the touch operation can be determined. In this embodiment, the touch screen 1614 and the display unit 1604 may be integrated into one component to implement the input, output, and display functions of the computing device. For convenience of description, the embodiment of the present application is schematically illustrated with the touch screen 1614 representing the functional set of the touch screen 1614 and the display unit 1604, although in some embodiments the touch screen 1614 and the display unit 1604 may be two separate components.
The display unit 1604 may include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor Liquid Crystal Display (TFT-LCD), an Organic Light-Emitting Diode (OLED) display, an Active Matrix Organic Light-Emitting Diode (AMOLED) display, an In-Plane Switching (IPS) display, a flexible display, a 3D display, and the like. Some of these displays may be configured to be transparent so that the user can view from the outside, which may be referred to as a transparent display. Depending on the particular desired implementation, the computing device may include two or more display units (not shown) or other display devices.
In general, the radio frequency circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the radio frequency unit 1605 may also communicate with network devices and other devices through wireless communication.
Audio circuitry 1606, speaker 1607, and microphone 1608 may provide an audio interface between the user and the computing device. The audio circuit 1606 may transmit the electrical signal converted from received audio data to the speaker 1607, which converts it into a sound signal for output. In the other direction, the microphone 1608 converts collected sound signals into electrical signals, which are received by the audio circuit 1606 and converted into audio data; the audio data is processed by the processor 1601 and then, for example, sent to another electronic device via the radio frequency unit 1605, or output to the memory 1602 for further processing.
WiFi belongs to a short-range wireless transmission technology, and the computing device can help a user send and receive e-mails, browse webpages, access streaming media, and the like through the WiFi module 1609, which provides wireless broadband internet access for the user. Although fig. 16 shows the WiFi module 1609, it is understood that it does not belong to the essential constitution of the computing device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
Bluetooth is a short-range wireless communication technology. Bluetooth technology can effectively simplify communication between mobile computing devices such as palmtop computers, notebook computers, and mobile phones, and between such devices and the Internet; through the bluetooth module 1610, data transmission between the computing device and the Internet becomes faster and more efficient, broadening the road for wireless communication. Bluetooth technology is an open solution that enables wireless transmission of voice and data. Although fig. 16 shows the bluetooth module 1610, it is understood that it is not an essential part of the computing device and may be omitted as needed within a scope that does not change the essence of the invention.
The computing device may also include a power source 1611 (such as a battery) for receiving external power or powering various components within the computing device. Preferably, the power source 1611 may be logically connected to the processor 1601 by a power management system, so that functions of managing charging, discharging, and power consumption management are implemented by the power management system.
The computing device may also include an external interface 1612, which may include a standard Micro USB interface, may also include a multi-pin connector, may be used to connect the computing device to communicate with other devices, and may also be used to connect a charger to charge the computing device.
Although not shown, the computing device in the embodiment of the present application may further include a camera, a flash, and other possible functional modules, which are not described herein again.
Based on the same inventive concept, the present application also provides a storage medium, which may be a computer-readable storage medium, and the storage medium stores computer instructions, and when the computer instructions are executed on a computer, the computer executes the steps of the dynamic expression display method as described above.
Based on the same inventive concept, the present application also provides a storage medium, which may be a computer-readable storage medium, and the storage medium stores computer instructions, which, when executed on a computer, cause the computer to perform the steps of the dynamic expression creation method as described above.
Based on the same inventive concept, an embodiment of the present application further provides a chip system, where the chip system includes a processor and may further include a memory, and is configured to implement the steps of the dynamic expression display method or the steps of the dynamic expression creation method. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
In some possible implementations, the various aspects of the dynamic expression presentation method provided in this application embodiment may also be implemented in the form of a program product including program code for causing a computer to perform the steps of the dynamic expression presentation method according to various exemplary implementations of this application described above when the program product runs on the computer.
In some possible implementations, the various aspects of the dynamic expression creation method provided in this application embodiment may also be implemented in the form of a program product including program code for causing a computer to perform the steps of the dynamic expression creation method according to various exemplary implementations of this application described above when the program product runs on the computer.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (15)

1. A dynamic expression display method is characterized by comprising the following steps:
responding to a selection operation for selecting the dynamic expression triggered by a session interface, and displaying a dynamic body diagram of the selected dynamic expression in the session interface as a session message; and
playing an animation element associated with the dynamic body diagram in the session interface, wherein the playing area of the animation element at least comprises a first area outside the dynamic body diagram.
2. The method of claim 1, wherein the playback area of the animation element further comprises a second area, the second area being a partial or full area of the display area of the dynamic body map.
3. The method of claim 2, wherein playing an animation element associated with the dynamic body diagram in the conversational interface comprises:
the animation element gradually crosses from one of the first area and the second area to the other area for playing; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area.
4. The method of claim 1, wherein presenting the selected dynamic body diagram of the dynamic emoticon as a conversation message in the conversation interface, and playing an animation element associated with the dynamic body diagram in the conversation interface comprises:
determining a reference position for animation drawing according to the playing mode associated with the dynamic expression, and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame starting from the reference position.
5. The method of claim 4, wherein determining a reference position for animation drawing according to the playing mode associated with the dynamic expression, and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame starting from the reference position comprises:
determining the reference position according to the animation type corresponding to the play mode; and
drawing and displaying each animation frame corresponding to the dynamic expression frame by frame starting from the reference position according to the animation attribute information corresponding to the play mode, wherein the animation attribute information comprises at least one of the motion trail, the size, the shape, the color and the animation special effect of the animation element.
6. The method of claim 5, wherein determining the reference position according to the animation type corresponding to the playback mode comprises:
if the animation type of the dynamic expression is trigger type animation, determining the position of the trigger source of the animation element as the reference position; or
If the animation type of the dynamic expression is atmosphere animation, determining the central position of the conversation interface as the reference position, or determining the central position of a dialog box area in the conversation interface as the reference position; or
if the animation type of the dynamic expression is a position animation, determining the central position of the playing area of the dynamic main body diagram as the reference position.
7. The method of claim 1, wherein prior to presenting the dynamic body diagram of the selected dynamic emoticon as a conversation message in the conversation interface, the method further comprises:
displaying the dynamic expression in an input box area in the session interface, and triggering sending of the dynamic expression when a confirmation operation for determining to send the dynamic expression is detected.
8. The method of claim 1, wherein the transparency of the background region of the dynamic body map is a transparent value, or the color of the background region of the dynamic body map is a background color of the conversation interface.
9. The method of claim 1, wherein the dynamic expression is associated with a first identifier indicating that the associated dynamic expression is a cross-region dynamic expression.
10. A method of creating a dynamic expression, the method comprising:
responding to the expression creating operation and displaying a video recording interface;
responding to a video recording operation triggered on the video recording interface, obtaining recorded video data, and storing the video data and animation elements in a related manner as dynamic expressions, wherein the video data is stored as a dynamic main body diagram of the dynamic expressions, and the playing area of the animation elements at least comprises a first area outside the dynamic main body diagram.
11. The method of claim 10, wherein the video recording interface includes a video recording area, and wherein storing the video data and animation element association as a dynamic expression comprises:
determining a reference position of animation drawing according to the playing mode associated with the animation element, and synthesizing a sequence video frame of the video data and the animation element by taking the reference position as a coordinate origin to obtain a sequence animation frame corresponding to the dynamic expression; wherein, in the process of composition, the display area of the animation element at least comprises the area outside the video recording area.
12. The method of claim 11, wherein determining a reference position for animation drawing according to the play mode associated with the animation element comprises:
if the animation type of the animation element is a trigger-type animation, determining the position of the trigger source that triggers the animation element in the video data as the reference position; or
if the animation type of the animation element is an atmosphere-type animation, determining the central position of the video recording interface as the reference position; or
if the animation type of the animation element is a position-type animation, determining the central position of the video recording area as the reference position.
13. The method of claim 10, wherein the method further comprises:
in response to a follow-up shooting operation directed at a target dynamic expression displayed in a dialog box area of a conversation interface, extracting the animation element from the target dynamic expression, wherein the target dynamic expression is associated with a first identifier indicating that the associated dynamic expression is a cross-region dynamic expression; or
in response to an operation of selecting an animation material template or an animation icon, extracting the animation element from the selected animation material template, or determining the selected animation icon as the animation element, wherein the selected animation material template and the selected animation icon are each associated with a second identifier, the second identifier indicating that the corresponding animation element can be displayed outside the video recording area when compositing the dynamic expression.
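A sketch of claim 13's two entry points for obtaining the animation element; the crossRegion and displayableOutsideRecordingArea fields stand in for the first and second identifiers and are assumptions:

```typescript
interface AnimationElement {
  displayableOutsideRecordingArea?: boolean; // stand-in for the "second identifier"
}

interface DynamicExpression {
  element: AnimationElement;
  crossRegion?: boolean;                     // stand-in for the "first identifier"
}

// Follow-up shooting: reuse the element of a cross-region target expression.
function elementFromFollowShot(target: DynamicExpression): AnimationElement {
  if (!target.crossRegion) {
    throw new Error("target expression is not marked as cross-region");
  }
  return target.element;
}

// Material template / icon path: only elements carrying the second identifier
// may be displayed outside the video recording area during composition.
function elementFromMaterial(selected: AnimationElement): AnimationElement {
  if (!selected.displayableOutsideRecordingArea) {
    throw new Error("selected material is not marked for display outside the recording area");
  }
  return selected;
}
```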
14. A dynamic expression display device, the device comprising:
a response module, configured to respond to a selection operation of selecting a dynamic expression triggered on a conversation interface; and
a display module, configured to display the dynamic body diagram of the selected dynamic expression as a conversation message in the conversation interface, and to play an animation element associated with the dynamic body diagram in the conversation interface, wherein the playing area of the animation element at least comprises a first area outside the dynamic body diagram.
15. A dynamic expression creation apparatus, the apparatus comprising:
a display module, configured to display a video recording interface in response to an expression creation operation; and
a creation module, configured to obtain recorded video data in response to a video recording operation triggered on the video recording interface, and to store the video data in association with an animation element as a dynamic expression, wherein the video data is stored as the dynamic body diagram of the dynamic expression, and the playing area of the animation element at least comprises a first area outside the dynamic body diagram.
CN202010273094.5A 2020-04-09 2020-04-09 Dynamic expression display method, dynamic expression creation method and device Active CN111464430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010273094.5A CN111464430B (en) 2020-04-09 2020-04-09 Dynamic expression display method, dynamic expression creation method and device

Publications (2)

Publication Number Publication Date
CN111464430A true CN111464430A (en) 2020-07-28
CN111464430B CN111464430B (en) 2023-07-04

Family

ID=71683722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010273094.5A Active CN111464430B (en) 2020-04-09 2020-04-09 Dynamic expression display method, dynamic expression creation method and device

Country Status (1)

Country Link
CN (1) CN111464430B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104932853A (en) * 2015-05-25 2015-09-23 深圳市明日空间信息技术有限公司 Dynamic expression play method and device
US20170018289A1 (en) * 2015-07-15 2017-01-19 String Theory, Inc. Emoji as facetracking video masks
CN106357506A (en) * 2016-08-30 2017-01-25 北京北信源软件股份有限公司 Treatment method for expression flow message in instant communication
CN106534875A (en) * 2016-11-09 2017-03-22 广州华多网络科技有限公司 Barrage display control method and device and terminal
CN109388297A (en) * 2017-08-10 2019-02-26 腾讯科技(深圳)有限公司 Expression methods of exhibiting, device, computer readable storage medium and terminal
CN108055191A (en) * 2017-11-17 2018-05-18 深圳市金立通信设备有限公司 Information processing method, terminal and computer readable storage medium
CN109120866A (en) * 2018-09-27 2019-01-01 腾讯科技(深圳)有限公司 Dynamic expression generation method, device, computer readable storage medium and computer equipment
WO2020063319A1 (en) * 2018-09-27 2020-04-02 腾讯科技(深圳)有限公司 Dynamic emoticon-generating method, computer-readable storage medium and computer device
CN109787890A (en) * 2019-03-01 2019-05-21 北京达佳互联信息技术有限公司 Instant communicating method, device and storage medium
CN110213638A (en) * 2019-06-05 2019-09-06 北京达佳互联信息技术有限公司 Cartoon display method, device, terminal and storage medium
CN110428485A (en) * 2019-07-31 2019-11-08 网易(杭州)网络有限公司 2 D animation edit methods and device, electronic equipment, storage medium
CN110475150A (en) * 2019-09-11 2019-11-19 广州华多网络科技有限公司 The rendering method and device of virtual present special efficacy, live broadcast system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shunya Osawa; Guifang Duan; Masataka Seo; Takanori Igarashi; Yen-Wei Chen: "Reconstruction of 3D dynamic expressions from single facial image", 2013 IEEE International Conference on Image Processing *
Xu Liangfeng; Wang Jiayong; Cui Jingnan; Hu Min; Zhang Keke: "Dynamic expression recognition based on dynamic time warping and active appearance model", Journal of Electronics & Information Technology, vol. 40, no. 2 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748974A (en) * 2020-08-05 2021-05-04 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium based on session
CN112748974B (en) * 2020-08-05 2024-04-16 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium based on session
CN112000252A (en) * 2020-08-14 2020-11-27 广州市百果园信息技术有限公司 Virtual article sending and displaying method, device, equipment and storage medium
CN112000252B (en) * 2020-08-14 2022-07-22 广州市百果园信息技术有限公司 Virtual article sending and displaying method, device, equipment and storage medium
CN112328140A (en) * 2020-11-02 2021-02-05 广州华多网络科技有限公司 Image input method, device, equipment and medium thereof
CN112506393A (en) * 2021-02-07 2021-03-16 北京聚通达科技股份有限公司 Icon display method and device and storage medium
CN113438149A (en) * 2021-07-20 2021-09-24 网易(杭州)网络有限公司 Expression sending method and device

Also Published As

Publication number Publication date
CN111464430B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
US10636221B2 (en) Interaction method between user terminals, terminal, server, system, and storage medium
CN110134484B (en) Message icon display method and device, terminal and storage medium
CN111464430B (en) Dynamic expression display method, dynamic expression creation method and device
CN109905754B (en) Virtual gift receiving method and device and storage equipment
US9542949B2 (en) Satisfying specified intent(s) based on multimodal request(s)
CN107085495B (en) Information display method, electronic equipment and storage medium
CN111343073B (en) Video processing method and device and terminal equipment
WO2022183707A1 (en) Interaction method and apparatus thereof
CN108900407B (en) Method and device for managing session record and storage medium
CN112328136A (en) Comment information display method, comment information display device, comment information display equipment and comment information storage medium
CN109032732B (en) Notification display method and device, storage medium and electronic equipment
CN113485617A (en) Animation display method and device, electronic equipment and storage medium
CN115051965B (en) Method and device for controlling video playing, computing equipment and storage medium
CN109683760B (en) Recent content display method, device, terminal and storage medium
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
CN112911052B (en) Information sharing method and device
CN116688526A (en) Virtual character interaction method and device, terminal equipment and storage medium
CN113852540B (en) Information transmission method, information transmission device and electronic equipment
CN115378893A (en) Message processing method and device, electronic equipment and readable storage medium
CN114338572B (en) Information processing method, related device and storage medium
CN113986377A (en) Wallpaper interaction method and device and electronic equipment
CN113362802A (en) Voice generation method and device and electronic equipment
CN112783386A (en) Page jump method, device, storage medium and computer equipment
US9384013B2 (en) Launch surface control
CN111240574B (en) Memory cleaning method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40025824
Country of ref document: HK
SE01 Entry into force of request for substantive examination
GR01 Patent grant