CN108337547B - Character animation realization method, device, terminal and storage medium - Google Patents


Info

Publication number
CN108337547B
Authority
CN
China
Prior art keywords
animation
pixel point
character
target
video frame
Prior art date
Legal status
Active
Application number
CN201711206503.4A
Other languages
Chinese (zh)
Other versions
CN108337547A (en)
Inventor
熊涛
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201711206503.4A priority Critical patent/CN108337547B/en
Publication of CN108337547A publication Critical patent/CN108337547A/en
Application granted granted Critical
Publication of CN108337547B publication Critical patent/CN108337547B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/4318 Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133 Monomedia components thereof involving additional data specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • H04N21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Abstract

The embodiment of the invention discloses a method, a device, a terminal and a storage medium for realizing character animation; the embodiment of the invention obtains the video frame and the character picture corresponding to the video playing time; constructing a target video frame to be fused according to the video frame; selecting a corresponding target animation picture from the animation picture set according to the video playing time; acquiring target pixel points positioned in a preset character display area from a target video frame; when the target pixel point is located in the current character animation display sub-area, color fusion is carried out on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture, and the color value of the target pixel point is obtained; and when the target pixel point is positioned in the current character display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point. The scheme can improve the realization efficiency of the character animation.

Description

Character animation realization method, device, terminal and storage medium
Technical Field
The invention relates to the technical field of picture processing, in particular to a method, a device, a terminal and a storage medium for realizing character animation.
Background
With the development of terminal technology, mobile terminals have evolved from simple telephony devices into platforms for running general-purpose software. Such a platform is no longer aimed only at call management; it provides an operating environment for a wide range of applications, such as call management, games and entertainment, office work and mobile payment, and with widespread adoption it has penetrated deeply into people's life and work.
At present, video applications are used more and more widely: a user can install a video application on a terminal and record and play videos through it. To improve the user experience, some current video applications make the characters in a video present various animation effects (such as a fade-in/fade-out effect) during playback, so as to increase the aesthetic appeal of the characters in the video, that is, to implement character animation in the video.
However, current character animation implementations work as follows: for each character to be displayed in the video, a complex algorithm is used to convert some of the character's pixels into pixels with an animation effect (such as a character particle dissipation effect), the converted pixels are then stitched together with the character's remaining pixels, and the stitched character is fused with the video frame.
It can be seen that the algorithm adopted by the current character animation implementation method is relatively complex and computationally expensive, which reduces the implementation efficiency of the character animation.
Disclosure of Invention
The embodiment of the invention provides a method, a device, a terminal and a storage medium for realizing character animation, which can improve the realization efficiency of the character animation.
The embodiment of the invention provides a method for realizing character animation, which comprises the following steps:
acquiring a video frame and a character picture corresponding to video playing time, wherein the character picture comprises character content needing to be displayed in a video;
constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time;
acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area;
when the target pixel point is located in the current character animation display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point;
and when the target pixel point is positioned in the current character display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point.
Correspondingly, the embodiment of the invention also provides a device for realizing the character animation, which comprises:
the image acquisition unit is used for acquiring a video frame and a character image corresponding to video playing time, wherein the character image comprises character content needing to be displayed in a video;
the target frame generating unit is used for constructing a target video frame to be fused according to the video frame, and the color value of a pixel point in the target video frame is a preset color value;
the picture selection unit is used for selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time;
the pixel acquisition unit is used for acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area;
the first fusion unit is used for carrying out color fusion on corresponding pixel points in the video frame and corresponding pixel points in the target animation picture to obtain the color value of the target pixel points when the target pixel points are positioned in the current character animation display sub-area;
and the second fusion unit is used for carrying out color fusion on the corresponding pixel points in the video frame and the corresponding pixel points in the character picture to obtain the color values of the target pixel points when the target pixel points are positioned in the current character display sub-area.
Correspondingly, the embodiment of the invention also provides a terminal which comprises a memory and a processor, wherein the memory stores instructions, and the processor loads the instructions to execute the character animation implementation method provided by any one of the embodiments of the invention.
Correspondingly, the embodiment of the invention further provides a storage medium, wherein the storage medium stores instructions that, when executed by a processor, implement the character animation implementation method provided by any one of the embodiments of the invention.
The method comprises the steps of obtaining a video frame and a character picture corresponding to video playing time, wherein the character picture comprises character contents needing to be displayed in a video; constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time; acquiring target pixel points positioned in a preset character display area from a target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area; when the target pixel point is located in the current character animation display sub-area, color fusion is carried out on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture, and the color value of the target pixel point is obtained; and when the target pixel point is positioned in the current character display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point. The scheme can realize character animation in the video based on the pixel color fusion mode, is simple in realization mode, avoids the adoption of a complex algorithm to realize image fusion, and can improve the realization efficiency of the character animation.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1a is a schematic view of a scene of an information interaction system according to an embodiment of the present invention;
FIG. 1b is a flow chart of a text animation implementation method according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of a text display area according to an embodiment of the present invention;
FIG. 1d is a schematic diagram of a picture coordinate system according to an embodiment of the present invention;
FIG. 1e is a schematic diagram of image fusion according to an embodiment of the present invention;
FIG. 1f is a schematic diagram of a text dissipation animation provided by an embodiment of the present invention;
FIG. 2a is another flow chart of a text animation implementation method according to an embodiment of the present invention;
FIG. 2b is another schematic diagram of a text dissipation animation provided by an embodiment of the invention;
fig. 3a is a schematic structural diagram of a first structure of a text animation implementation apparatus according to an embodiment of the present invention;
fig. 3b is a schematic structural diagram of a second structure of a text animation implementation apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an information interaction system, which comprises any one of the character animation realization devices provided by the embodiments of the invention, wherein the character animation realization device can be integrated in a terminal, the terminal can be a mobile phone, a tablet personal computer and other equipment, and the system can also comprise other equipment, such as a server and the like.
Referring to fig. 1a, an embodiment of the present invention provides an information interaction system, including: a terminal 10 and a server 20, the terminal 10 and the server 20 being connected via a network 30. The network 30 includes network entities such as routers and gateways, which are shown schematically in the figure. The terminal 10 may interact with the server 20 via a wired network or a wireless network, for example, to download applications (e.g., video applications) and/or application update packages and/or application-related data information or service information from the server 20. The terminal 10 may be a mobile phone, a tablet computer, a notebook computer, or the like, and fig. 1a illustrates the terminal 10 as a mobile phone. Various applications required by the user, such as applications with entertainment functions (e.g., video applications, audio playing applications, game applications, reading software) and applications with service functions (e.g., map navigation applications, group buying applications, etc.), can be installed in the terminal 10.
Based on the system shown in fig. 1a, taking a video application as an example, the terminal 10 downloads a video application and/or a video application update data packet and/or data information or service information (e.g. video information) related to the video application from the server 20 via the network 30 according to the requirement. By adopting the embodiment of the invention, the terminal 10 can construct the target video frame to be fused according to the video frame, and the color value of the pixel point in the target video frame is the preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time; acquiring target pixel points positioned in a preset character display area from a target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area; when the target pixel point is located in the current character animation display sub-area, color fusion is carried out on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture, and the color value of the target pixel point is obtained; and when the target pixel point is positioned in the current character display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point. In addition, the terminal 10 can also play the fused video frames to realize character animation.
The above example of fig. 1a is only an example of a system architecture for implementing the embodiment of the present invention, and the embodiment of the present invention is not limited to the system architecture shown in fig. 1a, and various embodiments of the present invention are proposed based on the system architecture.
In an embodiment, a text animation implementation method is provided, which may be executed by a processor of a terminal, as shown in fig. 1b, and includes:
101. Acquire a video frame and a character picture corresponding to the video playing time, where the character picture contains character content that needs to be displayed in the video.
The method for realizing the character animation can be implemented before the video is played and can also be implemented in the process of playing the video.
For example, in the video playing process, a video frame and a text picture corresponding to the video playing time may be obtained.
After the video playing time is obtained, the video frame corresponding to the video playing time can be obtained from the video frame set of the video, and the text picture corresponding to the video playing time can be obtained from the text picture set of the video.
A video is composed of a series of video frames, i.e., video pictures, and the video playing time of the video is used to indicate the video frame, i.e., the video picture, that needs to be played at that moment; therefore, the video frame set may include all video frames of the video.
In this embodiment, in order to display the corresponding text content on the video, the text picture containing the text content to be displayed at a given video playing time generally needs to be specified in advance, that is, a correspondence between the video playing time and the text picture is set; therefore, in this embodiment, the video playing time can be used to indicate not only the video frame to be played, so as to display the corresponding video picture, but also the text picture to be displayed, so as to display the corresponding text content in the video picture.
The text picture is a picture containing text content, and the text content is text content which needs to be displayed in a video, such as lyrics, video subtitles, and the like.
The text picture set of the video comprises a plurality of text pictures, each text picture can comprise a section of text content, each text picture corresponds to a video playing time, and when the video playing time is up, the text content in the text picture is displayed in the video picture. Therefore, the embodiment of the invention can select the corresponding text picture from the text picture set based on the video playing time.
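The patent does not fix a data structure for these sets; purely as an illustration, a minimal Python sketch of the two lookups might index the video frames by presentation timestamp and the text pictures by display interval (all names and structures here are hypothetical):

```python
import bisect

def frame_for_time(frame_times, frames, play_time):
    # frame_times: sorted presentation timestamps (seconds) of the video frames.
    # Pick the last frame whose timestamp is not later than the playing time.
    i = bisect.bisect_right(frame_times, play_time) - 1
    return frames[max(i, 0)]

def text_picture_for_time(text_entries, play_time):
    # text_entries: list of (start, end, picture) tuples; return the picture
    # whose display interval covers the playing time, or None if there is none.
    for start, end, picture in text_entries:
        if start <= play_time <= end:
            return picture
    return None
```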
102. Construct a target video frame to be fused according to the video frame, where the color value of each pixel point in the target video frame is a preset color value.
The target video frame can be a picture which has the same size as the video frame and the pixel point is a preset color value.
Wherein, preset color value can be set according to actual demand, for example 0 etc.
For example, a target video frame with a corresponding size (e.g., the same size) may be generated according to the size information of the video frame, where the size information of the video frame may be represented by pixels, for example, the size of the video frame may be 100pix × 100pix, at this time, the target video frame with the size of 100pix × 100pix may be generated, and the color value of the pixel point in the target video frame may be zero, so that a white target video frame is constructed.
For example, in the process of playing a video, a to-be-played video frame corresponding to the current video playing time may be acquired, and then a corresponding target video frame is generated based on the to-be-played video frame.
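As an illustration, assuming the frames are held as numpy arrays, the target video frame can be created with the same dimensions as the source frame and every pixel set to the preset color value; the value 0 follows the example above, and the function name is hypothetical:

```python
import numpy as np

def build_target_frame(video_frame: np.ndarray, preset_value: int = 0) -> np.ndarray:
    # Same height, width and channel count as the source video frame,
    # with every pixel initialised to the preset color value.
    return np.full_like(video_frame, preset_value)
```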
103. Select a corresponding target animation picture from the animation picture set used for describing the character animation according to the video playing time.
The order of step 103 and step 102 is not limited by their sequence numbers; step 103 may be executed before step 102, after it, or both steps may be executed simultaneously. The specific order can be set according to the actual situation.
The animation picture set is used for describing a certain character animation effect, the animation picture set comprises at least two pictures, and the at least two pictures in the animation picture set are displayed according to a certain time sequence to form the character animation effect.
For example, the animation picture set is used for describing a certain character animation effect, all animation pictures in the animation picture set are used for describing a complete process of the certain character animation effect, and for example, the animation picture set can be used for describing a fading-in and fading-out animation effect. One animation picture in the animation picture set can be used for describing a partial effect of the character animation, and the character animation effect is formed by the partial effect.
For example, an animated picture collection includes a set of animated pictures that are used to describe an animation effect in which text slowly disappears or dissipates.
Alternatively, the text animation effect may include a text dissipation animation effect, for example, an animation effect of dissipating a part of text in a video picture, and the text dissipation manner may be various, for example, the text dissipation effect may be dissipated in the form of particles, that is, the text animation effect is a text particle dissipation animation effect.
The selection of the target animation picture can be based on the current display progress of the character animation, that is, the corresponding target animation picture can be selected from the animation picture set for describing the character animation according to the current display progress of the character animation. That is, the step "selecting a corresponding target animation picture from the animation picture set for describing the text animation according to the video playing time" may include:
acquiring the current display progress information of the character animation according to the video playing time;
and selecting a corresponding target animation picture from the animation picture set for describing the character animation according to the display progress information.
For example, when the display progress information includes the current display time or duration of the character animation, the corresponding animation picture may be selected from the animation picture set based on the display time.
For example, suppose a character animation with a display duration of 4 s is defined and its animation display interval is 1:01 to 1:05. If the current video time (i.e., the video playing time) is 1:04, it can be determined that the current animation display progress t is the 3rd second, and the animation picture corresponding to the 3rd second can then be obtained from the animation picture set.
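A small sketch of this progress calculation, with all times expressed in seconds from the start of the video (the interval values mirror the example above and are otherwise arbitrary):

```python
def animation_progress(video_time, anim_start, anim_end):
    # Returns the elapsed display time of the character animation in seconds,
    # or None when the playing time lies outside the display interval.
    if anim_start <= video_time <= anim_end:
        return video_time - anim_start
    return None

# Interval 1:01-1:05 with current playing time 1:04 -> progress of 3 s.
print(animation_progress(64.0, 61.0, 65.0))  # 3.0
```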
104. Acquire a target pixel point located in a preset character display area from the target video frame, where the preset character display area includes a character animation display sub-area and a character display sub-area.

The preset character display area is the overall area of the target video frame used for displaying text; its size and shape can be set according to actual requirements, for example a rectangle. The character animation display sub-area is the region of the preset character display area used for displaying the character animation effect, and the character display sub-area is the region used for displaying the characters themselves. For example, referring to fig. 1c, the text display area a in the video frame currently includes a text animation display sub-area a1 and a text display sub-area a2, and the current boundary line between the text animation display sub-area a1 and the text display sub-area a2 is a3.
In the embodiment of the invention, as time goes by, the character animation display sub-area and the character display sub-area also change, that is, the boundary line between the two areas moves along with the change of time. For example, when the video playing time is t1, the length of the text animation display sub-region a1 is greater than the length of the text display sub-region a 2; when the video playback time is t2, the length of the sub-area a1 of the character animation display is smaller than the length of the sub-area a2 of the character display.
In the embodiment of the invention, the positions of the pixel points in the target video frame can be obtained; and when the position of the pixel point is located in the position range corresponding to the preset character display area, determining that the pixel point is located in the preset character display area, and at the moment, determining that the pixel point is the target pixel point.
The position of a pixel point may include its coordinate values in a predetermined coordinate system, such as a horizontal coordinate value and/or a vertical coordinate value. The origin of the predetermined coordinate system may be set according to actual requirements; for example, the origin may be the central point of the target video frame, or the upper left corner, the lower left corner, and so on of the target video frame.
For example, when the predetermined coordinate system includes x and y axes, the position of the pixel point may be a horizontal coordinate value x and a vertical coordinate value y, i.e., (x, y).
For example, when the position of a pixel point includes a horizontal coordinate value x and a vertical coordinate value y, i.e., (x, y), and the vertical coordinate range of the preset text display area is a-b: if y falls outside the range a-b, the pixel point is not located in the preset text display area; if y falls within the range a-b, the pixel point is located in the preset text display area and is therefore a target pixel point.
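As a sketch of that membership test (names hypothetical), the check reduces to comparing the pixel's vertical coordinate against the area's vertical range a-b:

```python
def in_text_display_area(y, a, b):
    # A pixel point is a target pixel point when its vertical coordinate
    # lies within the vertical range [a, b] of the preset text display area.
    return a <= y <= b
```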
105. When the target pixel point is located in the current character animation display sub-area, perform color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point.
When a pixel point is located in the preset character display area, it lies either in the character animation display sub-area or in the character display sub-area. In the embodiment of the invention, for pixel points in the character animation display sub-area, color fusion can be performed with the corresponding pixel points in the target animation picture so as to realize the character animation.
For the pixel points in the text display sub-area, color fusion can be performed between the pixel points and corresponding pixel points in the text picture so as to display corresponding text contents.
Therefore, the character animation can be displayed in the current character animation display sub-area, the character content is displayed in the character display sub-area, and the character dissipation animation effect can be further displayed.
The corresponding pixel point in the video frame may be the pixel point in the video frame that corresponds to the position of the target pixel point. For example, if the coordinate value of the target pixel point p in the target video frame is (x, y), then the pixel point p' whose coordinate value in the video frame is also (x, y) may be selected as the pixel point to be fused for color fusion.
The corresponding pixel point in the target animation picture may be the pixel point in the target animation picture that corresponds to the position of the target pixel point. For example, if the coordinate value of the target pixel point p in the target video frame is (x, y), then the pixel point p'' whose coordinate value in the target animation picture is also (x, y) may be selected as the pixel point to be fused for color fusion.
In the embodiment of the invention, the color fusion between the pixel points refers to the fusion processing of the color values, such as RGB values, of the pixel points by adopting a preset fusion algorithm.
For example, the step of performing color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point may include:
determining a corresponding first pixel point to be fused in the video frame and a corresponding second pixel point to be fused in the target animation picture according to the position of the target pixel point;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the second pixel point to be fused in the target animation picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
For example, suppose the coordinate value of the target pixel point p is (x, y) and the target pixel point p is located in the current text animation display sub-region a1. In this case the pixel point p' corresponding to (x, y) in the video frame and the pixel point p'' corresponding to (x, y) in the target animation picture may be color-fused. Specifically, the color value, such as the RGB value, of the pixel point p' in the video frame and the color value, such as the RGB value, of the pixel point p'' in the target animation picture are obtained; the two color values are fused according to the preset fusion algorithm; and finally, the color value of the target pixel point p is updated to the fused color value, such as the fused RGB value.
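The patent only requires "a preset fusion algorithm" for the color values and does not prescribe one; purely as an assumed example, a standard source-over alpha blend of two RGBA values could serve:

```python
def blend_rgba(src, dst):
    # src, dst: (r, g, b, a) tuples with components in 0-255.
    # Standard "source over" compositing, shown only as one possible preset
    # fusion algorithm; the patent does not mandate this particular formula.
    sa, da = src[3] / 255.0, dst[3] / 255.0
    out_a = sa + da * (1.0 - sa)
    if out_a == 0:
        return (0, 0, 0, 0)
    rgb = tuple(int((src[i] * sa + dst[i] * da * (1.0 - sa)) / out_a) for i in range(3))
    return rgb + (int(out_a * 255),)

# Fusing the animation-picture pixel p'' over the video-frame pixel p' and
# writing the result into the target pixel p:
# target[y][x] = blend_rgba(anim_picture[y][x], video_frame[y][x])
```

The same helper can be reused in step 106, with the text-picture pixel p''' taking the place of the animation-picture pixel.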
106. When the target pixel point is located in the current character display sub-region, perform color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point.
The corresponding pixel point in the video frame may be the pixel point in the video frame that corresponds to the position of the target pixel point. For example, if the coordinate value of the target pixel point p in the target video frame is (x, y), then the pixel point p' whose coordinate value in the video frame is also (x, y) may be selected as the pixel point to be fused for color fusion.
The corresponding pixel point in the text picture may be the pixel point in the text picture that corresponds to the position of the target pixel point. For example, if the coordinate value of the target pixel point p in the target video frame is (x, y), then the pixel point p''' whose coordinate value in the text picture is also (x, y) may be selected as the pixel point to be fused for color fusion.
Optionally, the step of performing color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the text picture to obtain the color value of the target pixel point when the target pixel point is located in the current text display sub-region may include:
determining a corresponding first pixel point to be fused in a video frame according to the position of the target pixel point, and determining a corresponding third pixel point to be fused in the character picture;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the third pixel point to be fused in the character picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
For example, suppose the coordinate value of the target pixel point p is (x, y) and the target pixel point p is located in the current text display sub-region a2. In this case the pixel point p' corresponding to (x, y) in the video frame and the pixel point p''' corresponding to (x, y) in the text picture may be color-fused. Specifically, the color value, such as the RGB value, of the pixel point p' in the video frame and the color value, such as the RGB value, of the pixel point p''' in the text picture are obtained; the two color values are fused according to the preset fusion algorithm; and finally, the color value of the target pixel point p is updated to the fused color value, such as the fused RGB value.
Optionally, for a conventional pixel point that is not located in the preset character display area, the target conventional pixel point corresponding to it in the video frame is determined, and the color value of the conventional pixel point is set to the color value of that target conventional pixel point in the video frame. In this way, the normal video picture is preserved outside the character display area.
For example, when the position of a pixel point is not located within the position range corresponding to the preset text display area, the pixel point is a conventional pixel point; at this time, the color value of the pixel point at the corresponding position in the video frame can be obtained, and the color value of the conventional pixel point is set to that color value.
For example, when the position of the pixel point e includes a horizontal coordinate value x and a vertical coordinate value y, i.e., (x, y), and the vertical coordinate value y is outside the range a-b, the pixel point e is not located in the preset text display area; at this time, the color value, such as the RGB value, of the pixel point e' corresponding to (x, y) in the video frame may be obtained, and the color value of the pixel point e is set to the color value, such as the RGB value, of the pixel point e'.
Through the steps of the method, the color value of each pixel point of the target video frame can be set, so that the fused target video frame can be obtained, corresponding character animation can be presented when the target video frame is played, specifically, the character animation is displayed in the character animation display sub-area, and corresponding character content is displayed in the character display sub-area.
The fused target video frames can be obtained through the steps of the method, so that a series of fused video frames are obtained, and further corresponding character animation effects, such as character particle dissipation animation effects and the like, are realized.
In the embodiment of the present invention, there are various ways of determining whether a pixel point is located in the text animation display sub-region or in the text display sub-region. For example, since the text animation display sub-region and the text display sub-region change with the passage of time, that is, the boundary position between the regions changes, the current region boundary position can be obtained based on the current video playing time, and whether the pixel point is located in the text animation display sub-region or the text display sub-region can be determined based on the region boundary position and the position of the pixel point.
For example, after the video frame and the text image are obtained and before color fusion is performed, the text animation implementation method of this embodiment may further include:
acquiring a current region boundary position according to the video playing time, wherein the region boundary position is a region boundary position of a current character animation display sub-region and a character display sub-region in a preset character display region;
comparing the position of the target pixel point with the boundary position of the area to obtain a comparison result;
and determining whether the target pixel point is positioned in the character animation display subarea or the character display subarea according to the comparison result.
The region boundary position is the boundary position between the current character animation display sub-region and the character display sub-region within the preset character display region. For example, the region boundary position may be the boundary coordinate value between the current character animation display sub-region and the character display sub-region. For example, referring to fig. 1c, the text display area a in the video frame currently includes a text animation display sub-area a1 and a text display sub-area a2, and the current boundary line between the text animation display sub-area a1 and the text display sub-area a2 is a3. In this case, the region boundary position may include the position of a point, such as a pixel point, on the boundary line a3, and specifically may be the coordinate value of a point on the boundary line a3 in the predetermined coordinate system.
Alternatively, referring to fig. 1d, the region boundary position may include the horizontal coordinate value, or the vertical coordinate value, in the predetermined coordinate system at the boundary between the character animation display sub-region and the character display sub-region, such as the horizontal coordinate value fx of a point on the boundary line a3 in the predetermined coordinate system. The coordinate system includes horizontal and vertical coordinates, i.e., x and y coordinates.
The area boundary position may be obtained according to the video playing time, for example, a mapping relationship (i.e., a corresponding relationship) between the video playing time and the area boundary position may be preset, and then, the current area boundary position may be obtained based on the mapping relationship. That is, the step "obtaining the current boundary position of the area according to the video playing time" may include:
acquiring a mapping relation between video playing time and an area boundary position;
and acquiring the current region boundary position according to the mapping relation and the video playing time.
The mapping relationship can be expressed in various forms, such as a functional form; for example, the region boundary position may be written as fx(t), where t is the video playing time.
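For instance, if the boundary is assumed to sweep linearly across the text display area during the animation interval, the mapping could be written as below; this particular linear form is an illustrative assumption, since the patent only requires that some mapping from playing time to boundary position exists:

```python
def fx(t, t_start, t_end, x_left, x_right):
    # Horizontal coordinate of the boundary between the text animation
    # display sub-area and the text display sub-area at playing time t.
    ratio = (t - t_start) / (t_end - t_start)
    ratio = min(max(ratio, 0.0), 1.0)
    return x_left + ratio * (x_right - x_left)
```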
In some embodiments, in order to improve the accuracy of obtaining the boundary position of the area, a mapping relationship (i.e., a corresponding relationship) between the display time of the text animation and the boundary position of the area may be preset, and then the current display time of the text animation may be calculated according to the video playing time, and then the boundary position of the area is determined based on the mapping relationship. That is, the step "obtaining the current boundary position of the area according to the video playing time" may include:
acquiring an animation display time interval corresponding to the character animation;
acquiring the current display time of the character animation according to the video playing time and the animation display time interval;
and acquiring the current boundary position of the area according to the mapping relation among the current display time, the character animation display time and the boundary position of the area.
Wherein the animation display time interval includes an animation display start time and an animation display end time, and the time origin of the current display time of the character animation is the animation display start time.
The mapping relationship between the display time of the text animation and the region boundary position can also be expressed in various forms, such as a functional form; for example, the region boundary position may be written as fx(t'), where t' is the display time of the text animation.
Specifically, when the video time is within the animation display time interval, a time difference between the video time and the animation display start time may be calculated, and the current display time of the text animation may be determined according to the time difference.
For example, suppose a character animation with a display duration of 5 s is defined and its animation display interval is 1:00 to 1:05. If the current video time (i.e., the video playing time) is 1:03, it can be determined that the current animation display progress t' is 3 s, and the region boundary position can then be calculated from this display time. For example, the region boundary position may be defined as fx(t'), where t' is the current display time of the character animation; that is, there is a functional relationship between the region boundary position and the display time t', and fx(t') expresses that the region boundary position changes as time passes.
The display time interval or range of the text animation may be set based on the text display time period; it may be the same as the text display time period or contained within it. For example, if the display time range of a certain piece of text is 1:00-1:24, the display time period of the text animation may be 1:05-1:24, and so on.
In this embodiment, the horizontal coordinate of the region boundary may be obtained through a function fx(t), where t may be the video playing time or the text animation display time; fx(t) expresses that the horizontal coordinate of the region boundary between the text animation sub-region and the text display sub-region changes with time. In other words, the lateral coordinate fx(t) at the text-to-particle interface can be defined as a function of time. With this definition, various effects with different particle dissipation speeds, different dissipation start times and different dissipation end times can be realized.
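To illustrate that flexibility, the sketch below (hypothetical names and parameters) adds a start delay and an easing exponent to the linear boundary function sketched above; different values give different dissipation start times, end times and speeds:

```python
def fx_eased(t, t_start, t_end, x_left, x_right, delay=0.0, power=2.0):
    # delay postpones the start of the dissipation; power > 1 makes the
    # boundary move slowly at first and faster later, power < 1 the opposite.
    span = (t_end - t_start) - delay
    ratio = (t - t_start - delay) / span if span > 0 else 1.0
    ratio = min(max(ratio, 0.0), 1.0) ** power
    return x_left + ratio * (x_right - x_left)
```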
According to the above description, for each pixel point p on the target video frame, the coordinate value is assumed to be (x, y):
If y of the pixel point p is within the vertical coordinate range of the character display area, namely a-b, the pixel point p is located in the character display area; at this time, the horizontal coordinate of the region boundary between the character animation sub-area and the character display sub-area is fx(t). Next, whether the pixel point lies in the character animation display sub-region or in the character display sub-region is determined by comparing the horizontal coordinate of the boundary with the horizontal coordinate of the pixel point p.
If x is less than fx(t), the target pixel point is located in the character animation display sub-area; the animation needs to be displayed here, so color fusion is performed between the pixel point p' corresponding to (x, y) in the video frame and the pixel point p'' corresponding to (x, y) in the target animation picture.
If x is greater than fx(t), the target pixel point is located in the character display sub-area; the characters need to be displayed here, so color fusion is performed between the pixel point p' corresponding to (x, y) in the video frame and the pixel point p''' corresponding to (x, y) in the character picture.
The color value of a pixel point may include R (red), G (green), B (blue), A (transparency) and other components.
If y of the pixel point p is outside the vertical coordinate range of the character display area, namely a-b, the pixel point p is located outside the character display area; at this time, the color value of the pixel point at the corresponding position (x, y) in the video frame can be taken as the color value of p.
A fused video frame is obtained by performing color fusion for every pixel point in the target video frame; repeating this process for successive video playing times yields a series of fused video frames, and displaying this series of fused frames presents the character particle dissipation effect in the video.
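Putting the per-pixel rules above together, a deliberately simple and unoptimised sketch of fusing one frame might look as follows; it reuses the blend_rgba and fx helpers sketched earlier and treats all frames as nested lists of RGBA tuples (an assumption made for readability, not a statement about the actual implementation):

```python
def fuse_frame(video, anim, text, target, a, b, boundary_x):
    # video, anim, text, target: height x width grids of (r, g, b, a) tuples;
    # a..b is the vertical range of the preset text display area;
    # boundary_x is fx(t) for the current playing time.
    height, width = len(target), len(target[0])
    for y in range(height):
        for x in range(width):
            if not (a <= y <= b):
                # Outside the text display area: keep the normal video pixel.
                target[y][x] = video[y][x]
            elif x < boundary_x:
                # Text animation display sub-area: fuse the video pixel with
                # the corresponding pixel of the target animation picture.
                target[y][x] = blend_rgba(anim[y][x], video[y][x])
            else:
                # Text display sub-area: fuse the video pixel with the
                # corresponding pixel of the text picture.
                target[y][x] = blend_rgba(text[y][x], video[y][x])
    return target
```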
The embodiment of the invention can sequentially acquire each video playing time of the video and fuse the corresponding video frames by the above method to obtain a series of fused video frames; when this series of fused video frames is played, the corresponding character animation effect is presented. For example, referring to fig. 1f, when the text animation is a text particle dissipation animation, the text picture containing the text "whether you have made a weather", the video frame and the particle dissipation animation picture may be fused by using the text animation implementation method provided in the embodiment of the present invention, so as to obtain a series of fused video frames; when these fused video frames are played, an animation effect as shown in fig. 1e is presented.
As can be seen from the above, in the embodiment of the present invention, the video frame and the text picture corresponding to the video playing time are obtained; constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time; acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area; when the target pixel point is located in the current character animation display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point; and when the target pixel point is positioned in the current character display subregion, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point. The scheme can realize character animation in the video in a pixel color fusion mode, is simple in realization mode, avoids the adoption of a complex algorithm to realize image fusion, and can improve the realization efficiency of the character animation.
In addition, the character animation implementation scheme provided by the embodiment of the invention is free of restrictions on how the animation is displayed, which improves the diversity of character dissipation animations; moreover, the implementation speed does not differ greatly across different character dissipation animation effects, and the character animation can be realized quickly, so the stability of the character animation implementation is high.
In one embodiment, the method described above is further detailed.
The embodiment of the invention further describes the character animation implementation method by taking the terminal as the execution subject.
As shown in fig. 2a, a method for implementing a text animation includes the following specific steps:
201. The terminal acquires the current video playing time in the process of playing the video.
202. The terminal acquires the video picture and the character picture corresponding to the video playing time.
For example, the terminal may obtain a video picture corresponding to the video playing time from the video picture set, and obtain a text picture corresponding to the video playing time from the text picture set.
The video picture set may include all video frames (i.e., video pictures) constituting a video, and when a video is played, corresponding video frames may be extracted from the video picture set according to video playing time to be played.
In this embodiment, in order to display corresponding text content on a video, text pictures including text content that need to be displayed in the video playing time generally need to be specified, that is, the corresponding relationship between the video playing time and the text pictures is set; therefore, in this embodiment, the video playing time may be used to indicate not only the video frame to be played to display the corresponding video frame, but also the text picture to be displayed to display the corresponding text content in the video frame.
The text picture is a picture containing text content, and the text content is text content which needs to be displayed in a video, such as lyrics, video subtitles, and the like.
The time sequence for acquiring the video picture and the text picture is not limited, and the video picture and the text picture can be acquired simultaneously or sequentially.
203. The terminal constructs a target video frame to be fused according to the video frame, where the color value of each pixel point in the target video frame is a preset color value.
The target video frame may be a picture that has the same size as the video frame (i.e., the same arrangement of pixel points) and whose pixel points are all set to the preset color value.
Wherein, preset color value can be set according to actual demand, for example 0 etc.
For example, a target video frame with a corresponding size (e.g., the same size) may be generated according to the size information of the video frame, where the size information of the video frame may be represented by pixels, for example, the size of the video frame may be 200pix × 100pix, at this time, the target video frame with the size of 200pix × 100pix may be generated, and the color value of the pixel point in the target video frame may be zero, that is, a white target video frame is constructed.
204. The terminal acquires the current display time of the character animation according to the video playing time and the animation display time interval corresponding to the character animation.
Wherein the animation display time interval includes an animation display start time and an animation display end time; the current display progress of the character animation can be its current display time, and the time origin of this display time is the animation display start time.
Specifically, when the video playing time is within the animation display time interval, a time difference between the video playing time and the animation display starting time may be calculated, and the current display time of the character animation may be determined according to the time difference.
For example, suppose a character animation with a display duration of 5 s is defined and its animation display interval is 1:01 to 1:06. If the current video playing time is 1:03, it can be determined that the current animation display progress t is the 2nd second.
205. The terminal selects a corresponding target animation picture from the animation picture set used for describing the character animation according to the display time.
The animation picture set is used for describing a certain character animation effect, the animation picture set comprises at least two pictures, and the at least two pictures in the animation picture set are displayed according to a certain time sequence to form the character animation effect.
All animation pictures in the animation picture set together describe the complete process of a certain character animation effect; for example, the set may describe a fading-in and fading-out animation effect. A single animation picture in the set describes a partial effect of the character animation, and these partial effects together form the character animation effect.
Optionally, the character animation effect may include a character dissipation animation effect, for example an animation effect in which part of the characters in a video picture dissipate. The manner of dissipation may vary; for example, the characters may dissipate in the form of particles, in which case the character animation effect is a character particle dissipation animation effect.
The terminal may select the target animation picture based on the display progress. For example, define a character animation with a display duration of 5 s whose animation display time interval is 1:01 to 1:06; if the current video playing time is 1:03, the current animation display time progress t is the 2nd second, and the animation picture corresponding to the 2nd second may then be acquired from the animation picture set.
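As an illustrative sketch (assuming, for simplicity, that the animation pictures are stored in display order and spread evenly over the display duration, which the patent does not require):

def select_animation_picture(anim_pictures, t, total_duration):
    # anim_pictures: pictures describing the full character animation effect, in display order.
    # t: current display time of the character animation; total_duration: e.g., 5 s.
    index = int(t / total_duration * len(anim_pictures))
    return anim_pictures[min(index, len(anim_pictures) - 1)]  # clamp to the last picture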
206. And the terminal acquires the region boundary position of the current character animation display sub-region and the character display sub-region in the preset character display region according to the display time.
The region boundary position is the position of the boundary between the current character animation display sub-region and the character display sub-region in the preset character display region. For example, it may be a boundary coordinate value between the current character animation display sub-region and the character display sub-region.
For example, the region boundary position may include a horizontal coordinate value, or a vertical coordinate value, of the boundary between the character animation display sub-region and the character display sub-region in a predetermined coordinate system, such as the horizontal coordinate value fx of a point on the region boundary. The coordinate system includes horizontal and vertical coordinates, i.e., x and y coordinates.
The area boundary position may be obtained based on the current display time of the character animation, for example, a mapping relationship between the display time of the character animation and the area boundary position is preset, and at this time, the current area boundary position may be obtained according to the current display time and the mapping relationship.
For example, define a character animation with a display duration of 5 s whose animation display time interval is 1:01 to 1:06. Assuming the current video playing time is 1:03, it can be determined that the current animation display time progress t is the 2nd second. The region boundary position can then be calculated based on the current display time; for example, the region boundary position may be defined as a function fx(t), where t is the current display time of the character animation. That is, there is a functional relationship between the region boundary position and the display time t, and as t changes over time the boundary position fx(t) changes with it.
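The patent only requires that some functional relationship fx(t) exist. As one hedged example, a linear sweep of the boundary across the preset character display area could be written as:

def boundary_x(t, total_duration, area_left, area_right):
    # Example of fx(t): the boundary moves linearly from the left edge to the
    # right edge of the preset character display area as the animation plays.
    # The linear form is an assumption for illustration; any monotone mapping works.
    ratio = min(max(t / total_duration, 0.0), 1.0)
    return area_left + ratio * (area_right - area_left)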
207. The terminal acquires the position of a pixel point in a target video frame in the target video frame.
The terminal may traverse each pixel point in the target video frame and perform the following steps 208 to 214 for each pixel point.
208. The terminal determines whether the position is within the position range corresponding to the preset text display area; if so, step 209 is executed, and if not, step 214 is executed.
The position range corresponding to the preset text display area is the range occupied by the preset text display area on the target video frame, and may specifically be a range of coordinate values in the coordinate system of the target video frame, for example a vertical coordinate range a to b, or a horizontal coordinate range c to d. The coordinate system may be a coordinate system established on the target video frame, and it is the same coordinate system in which the positions of the pixel points are expressed.
For example, the position of the pixel point includes a horizontal coordinate value x and a vertical coordinate value y, i.e., (x, y); it is determined whether the vertical coordinate value y is within the vertical coordinate range a to b, and if so, step 209 is executed.
209. And the terminal compares the position with the region boundary position to obtain a comparison result.
For example, the horizontal coordinate value x of the pixel point is compared with the horizontal coordinate value fx(t) of the region boundary.
210. And the terminal determines whether the pixel point is located in the character animation display sub-region or the character display sub-region according to the comparison result, if the pixel point is located in the character animation display sub-region, the step 211 is executed, and if the pixel point is located in the character display sub-region, the step 212 is executed.
If x is less than fx(t), it indicates that the pixel point is located in the character animation display sub-area;
if x is greater than fx(t), it indicates that the pixel point is located in the character display sub-area.
211. And the terminal performs color fusion on the first pixel point to be fused corresponding to the position in the video frame and the second pixel point to be fused corresponding to the position in the target animation picture to obtain a color value after the fusion.
If x is less than fx(t), the pixel point is located in the character animation display sub-area; the animation needs to be displayed at this position, so the pixel point corresponding to (x, y) in the video frame is color-fused with the pixel point corresponding to (x, y) in the target animation picture;
if x is greater than fx(t), the pixel point is located in the character display sub-area, and the pixel point corresponding to (x, y) in the video frame is color-fused with the pixel point corresponding to (x, y) in the character picture.
212. And the terminal performs color fusion on the first pixel point to be fused corresponding to the position in the video frame and the third pixel point to be fused corresponding to the position in the character picture to obtain a color value after the fusion.
213. The terminal sets the color value of the pixel point as the color value after fusion, and returns to step 207 to obtain the position of the next pixel point in the target video frame.
214. The terminal obtains the color value of the pixel point corresponding to the position in the video frame, sets the color value of the pixel point in the target video frame to that color value, and returns to step 207 to obtain the position of the next pixel point in the target video frame.
For example, when the position of the pixel point is (x, y) and the vertical coordinate value y falls outside the range a to b, the pixel point is not located in the preset text display area; in this case, the color value of the pixel point corresponding to (x, y) in the video frame is obtained, and the color value of the pixel point in the target video frame is set to that value.
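Putting steps 207 to 214 together, a compact illustrative sketch of the per-pixel fusion (the 50/50 weighting is an assumption; the patent only states that the color values are fused) might be:

import numpy as np

def fuse_frame(video_frame, text_picture, anim_picture, target_frame,
               y_range, x_range, fx_t, alpha=0.5):
    # y_range, x_range: vertical and horizontal extent of the preset text display area.
    # fx_t: current region boundary position f(t) between the two sub-areas.
    # alpha: assumed fusion weight; the patent does not fix a particular blend formula.
    height, width = target_frame.shape[:2]
    for y in range(height):                      # step 207: traverse pixel positions
        for x in range(width):
            in_area = (y_range[0] <= y <= y_range[1]) and (x_range[0] <= x <= x_range[1])
            if not in_area:                      # step 214: outside the text display area
                target_frame[y, x] = video_frame[y, x]
            elif x < fx_t:                       # steps 209-211: animation display sub-area
                blended = (1 - alpha) * video_frame[y, x] + alpha * anim_picture[y, x]
                target_frame[y, x] = blended.astype(target_frame.dtype)
            else:                                # step 212: character display sub-area
                blended = (1 - alpha) * video_frame[y, x] + alpha * text_picture[y, x]
                target_frame[y, x] = blended.astype(target_frame.dtype)
    return target_frame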
In the video playing process, the embodiment of the invention may sequentially acquire each video playing time of the video, and the steps of the character animation implementation method introduced above may then be used to fuse the corresponding video frames, obtaining a series of fused video frames; when this series of fused video frames is played, the corresponding character dissipation effect is presented. For example, referring to fig. 2b, with the text animation implementation method provided by the embodiment of the present invention, a text picture containing the text content "remember that your speak is the only castle" may be fused with the corresponding video frames according to the corresponding animation pictures to obtain a series of fused video frames, and when the fused video frames are played, the text particle dissipation effect shown in fig. 2b is presented.
As can be seen from the above, in the embodiment of the present invention, the video frame and the text picture corresponding to the video playing time are obtained; constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time; acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area; when the target pixel point is located in the current character animation display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point; and when the target pixel point is positioned in the current character display subregion, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point. The scheme can realize character animation in the video in a pixel color fusion mode, is simple in realization mode, avoids the adoption of a complex algorithm to realize image fusion, and can improve the realization efficiency of the character animation.
In addition, the character animation implementation scheme provided by the embodiment of the invention is not restricted to a particular form of animation display, which improves the diversity of character dissipation animations. Moreover, the implementation speed does not differ greatly across different character dissipation animation effects, and character animation can be realized quickly, so the stability of character animation implementation is high.
In order to better implement the character animation implementation method provided by the embodiment of the invention, an embodiment of the invention further provides a character animation implementation apparatus. The terms used below have the same meanings as in the character animation implementation method described above, and specific implementation details can refer to the description in the method embodiments.
In an embodiment, there is also provided a text animation implementation apparatus, as shown in fig. 3a, the text animation implementation apparatus may include: a picture acquisition unit 301, a target frame generation unit 302, a picture selection unit 303, a pixel acquisition unit 304, a first fusion unit 305, and a second fusion unit 306;
the picture acquisition unit is used for acquiring a video frame and a character picture corresponding to the video playing time, wherein the character picture comprises character contents which need to be displayed in the video;
the target frame generating unit is used for constructing a target video frame to be fused according to the video frame, and the color value of a pixel point in the target video frame is a preset color value;
the picture selection unit is used for selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time;
the pixel acquisition unit is used for acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area;
the first fusion unit is used for carrying out color fusion on corresponding pixel points in the video frame and corresponding pixel points in the target animation picture to obtain the color value of the target pixel point when the target pixel point is positioned in the current character animation display sub-region;
and the second fusion unit is used for carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point when the target pixel point is positioned in the current character display sub-area.
In an embodiment, referring to fig. 3b, the text animation implementation apparatus further includes: a boundary determining unit 307;
the boundary determining unit 307 may be configured to:
after acquiring a video frame and a character picture and before color fusion, acquiring a current region boundary position according to the video playing time, wherein the region boundary position is a region boundary position of a current character animation display sub-region and a character display sub-region in a preset character display region;
comparing the position of the target pixel point with the boundary position of the area to obtain a comparison result;
and determining whether the target pixel point is positioned in the character animation display subarea or the character display subarea according to the comparison result.
In an embodiment, the first fusing unit 305 is configured to:
determining a corresponding first pixel point to be fused in the video frame and a corresponding second pixel point to be fused in the target animation picture according to the position of the target pixel point;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the second pixel point to be fused in the target animation picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
In an embodiment, the second fusing unit 306 is configured to:
determining a corresponding first pixel point to be fused in a video frame according to the position of the target pixel point, and determining a corresponding third pixel point to be fused in the character picture;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the third pixel point to be fused in the character picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
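Both fusion units perform the same kind of color-value fusion; only the source of the second pixel differs (target animation picture versus character picture). A small illustrative helper, with the equal weighting again an assumption, could be:

def fuse_color(base_color, overlay_color, weight=0.5):
    # base_color: color value of the first pixel point to be fused (from the video frame).
    # overlay_color: color value from the target animation picture or the character picture.
    # weight: assumed fusion weight; the patent only requires that the two values be fused.
    return tuple(int((1 - weight) * b + weight * o) for b, o in zip(base_color, overlay_color))

# e.g., fuse_color((200, 120, 40), (255, 255, 255)) -> (227, 187, 147)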
In an embodiment, the boundary determining unit 307 may be configured to:
acquiring an animation display time interval corresponding to the character animation;
acquiring the current display time of the character animation according to the video playing time and the animation display time interval;
and acquiring the current region boundary position according to the current display time and the mapping relationship between the character animation display time and the region boundary position.
In an embodiment, the boundary determining unit 307 may be configured to:
acquiring a mapping relation between video playing time and an area boundary position;
and acquiring the current region boundary position according to the mapping relation and the video playing time.
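An illustrative sketch of this alternative, where the boundary position is looked up directly from the video playing time through a preset mapping (the sample points and the piecewise-linear interpolation are assumptions), could be:

# Hypothetical preset mapping: video playing time (s) -> region boundary position (x coordinate).
boundary_map = {61: 0, 66: 200}

def boundary_from_play_time(play_time, mapping):
    # Interpolates linearly between the preset sample points; the patent only
    # requires some mapping from playing time to region boundary position.
    times = sorted(mapping)
    play_time = min(max(play_time, times[0]), times[-1])
    for lo, hi in zip(times, times[1:]):
        if lo <= play_time <= hi:
            ratio = (play_time - lo) / (hi - lo)
            return mapping[lo] + ratio * (mapping[hi] - mapping[lo])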
In an embodiment, the picture selecting unit 303 may be configured to:
acquiring the current display progress information of the character animation according to the video playing time;
and selecting a corresponding target animation picture from the animation picture set for describing the character animation according to the display progress information.
In an embodiment, the first fusing unit 305 is further configured to, for a conventional pixel point that is not located in the preset text display area, determine a target conventional pixel point corresponding to the conventional pixel point in the video frame;
and set the color value of the conventional pixel point to the color value of the target conventional pixel point in the video frame.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
The character animation implementation apparatus can be integrated in a terminal, for example in the form of a client, and the terminal can be a mobile phone, a tablet computer, or other device.
As can be seen from the above, the apparatus for implementing text animation in the embodiment of the present invention employs the picture obtaining unit 301 to obtain the video frame and the text picture corresponding to the video playing time, where the text picture includes the text content to be displayed in the video; the target generation unit 302 constructs a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation by a picture selecting unit 303 according to the video playing time; a pixel obtaining unit 304 obtains a target pixel point located in a preset character display area from a target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area; when the target pixel point is located in the current text animation display sub-region, the first fusion unit 305 performs color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain a color value of the target pixel point; when the target pixel point is located in the current text display sub-region, the second fusion unit 306 performs color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the text picture to obtain the color value of the target pixel point. The scheme can realize character animation in the video based on the pixel color fusion mode, is simple in realization mode, avoids the adoption of a complex algorithm to realize image fusion, and can improve the realization efficiency of the character animation.
In an embodiment, in order to better implement the method, an embodiment of the present invention further provides a terminal, where the terminal may be a mobile phone, a tablet computer, or other device.
Referring to fig. 4, an embodiment of the present invention provides a terminal 400, which may include a processor 401 with one or more processing cores, a memory 402 including one or more computer-readable storage media, a Radio Frequency (RF) circuit 403, a power supply 404, an input unit 405, and a display unit 406. Those skilled in the art will appreciate that the terminal configuration shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, some components may be combined, or the components may be arranged differently. Wherein:
the processor 401 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the terminal. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402.
The RF circuit 403 may be used for receiving and transmitting signals during information transmission and reception, and in particular, for receiving downlink information of a base station and then processing the received downlink information by the one or more processors 401; in addition, data relating to uplink is transmitted to the base station.
The terminal also includes a power supply 404 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 401 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 404 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The terminal may further include an input unit 405, and the input unit 405 may be used to receive input numeric or character information and generate a keyboard, mouse, joystick, optical or trackball signal input in relation to user settings and function control.
The terminal may further include a display unit 406, and the display unit 406 may be used to display information input by the user or provided to the user, as well as various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 406 may include a display panel, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
Specifically, in this embodiment, the processor 401 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
acquiring a video frame and a character picture corresponding to video playing time, wherein the character picture comprises character content needing to be displayed in a video;
constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time;
acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area;
when the target pixel point is located in the current character animation display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point;
and when the target pixel point is positioned in the current character display subregion, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point.
In an embodiment, after acquiring the video frame and the text picture, and before performing color fusion, the processor 401 may further specifically implement the following functions:
acquiring a current region boundary position according to the video playing time, wherein the region boundary position is a region boundary position of a current character animation display sub-region and a character display sub-region in a preset character display region;
comparing the position of the target pixel point with the boundary position of the area to obtain a comparison result;
and determining whether the target pixel point is positioned in the character animation display subarea or the character display subarea according to the comparison result.
In one embodiment, processor 401 may embody the following functions:
determining a corresponding first pixel point to be fused in the video frame and a corresponding second pixel point to be fused in the target animation picture according to the position of the target pixel point;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the second pixel point to be fused in the target animation picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
In one embodiment, processor 401 may embody the following functions:
determining a corresponding first pixel point to be fused in a video frame according to the position of the target pixel point, and determining a corresponding third pixel point to be fused in the character picture;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the third pixel point to be fused in the character picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
In an embodiment, the processor 401 may further implement the following functions:
for the conventional pixel points which are not positioned in the preset character display area, determining target conventional pixel points corresponding to the conventional pixel points in the video frame;
and setting the color value of the conventional pixel point as the color value of the target conventional pixel point in the video frame.
In one embodiment, processor 401 may embody the following functions:
acquiring the current display progress information of the character animation according to the video playing time;
and selecting a corresponding target animation picture from the animation picture set for describing the character animation according to the display progress information.
In one embodiment, processor 401 may embody the following functions:
acquiring an animation display time interval corresponding to the character animation;
acquiring the current display time of the character animation according to the video playing time and the animation display time interval;
and acquiring the current region boundary position according to the current display time and the mapping relationship between the character animation display time and the region boundary position.
As can be seen from the above, the terminal in the embodiment of the present invention obtains the video frame and the text picture corresponding to the video playing time, where the text picture includes text content to be displayed in the video; constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value; selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time; acquiring target pixel points positioned in a preset character display area from a target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area; when the target pixel point is located in the current character animation display sub-area, color fusion is carried out on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture, and the color value of the target pixel point is obtained; and when the target pixel point is positioned in the current character display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point. The scheme can realize character animation in the video based on the pixel color fusion mode, is simple in realization mode, avoids the adoption of a complex algorithm to realize image fusion, and can improve the realization efficiency of the character animation.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The method, the apparatus, the terminal and the storage medium for implementing the character animation according to the embodiments of the present invention are described in detail, and a specific example is applied to illustrate the principle and the implementation manner of the present invention, and the description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in view of the above, the content of the present specification should not be construed as a limitation to the present invention.

Claims (13)

1. A character animation realization method is characterized by comprising the following steps:
acquiring a video frame and a character picture corresponding to video playing time, wherein the character picture comprises character content needing to be displayed in a video;
constructing a target video frame to be fused according to the video frame, wherein the color value of a pixel point in the target video frame is a preset color value;
selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time;
acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area;
acquiring a current region boundary position according to the video playing time, wherein the region boundary position is a region boundary position of a current character animation display sub-region and a character display sub-region in a preset character display region;
comparing the position of the target pixel point with the boundary position of the area to obtain a comparison result;
determining whether the target pixel point is located in the character animation display sub-area or the character display sub-area according to the comparison result;
when the target pixel point is located in the current character animation display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the target animation picture to obtain the color value of the target pixel point;
and when the target pixel point is positioned in the current character display sub-area, carrying out color fusion on the corresponding pixel point in the video frame and the corresponding pixel point in the character picture to obtain the color value of the target pixel point.
2. The method of claim 1, wherein when the target pixel is located in a current text animation display sub-region, performing color fusion on a corresponding pixel in the video frame and a corresponding pixel in the target animation picture to obtain a color value of the target pixel, comprises:
determining a corresponding first pixel point to be fused in the video frame and a corresponding second pixel point to be fused in the target animation picture according to the position of the target pixel point;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the second pixel point to be fused in the target animation picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
3. The method of claim 1, wherein when the target pixel is located in a current text display sub-region, color-blending the corresponding pixel in the video frame with the corresponding pixel in the text image to obtain a color value of the target pixel comprises:
determining a corresponding first pixel point to be fused in a video frame according to the position of the target pixel point, and determining a corresponding third pixel point to be fused in the text picture;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the third pixel point to be fused in the text picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
4. The text animation implementation method of claim 1, further comprising:
for the conventional pixel points which are not located in the preset character display area, determining target conventional pixel points corresponding to the conventional pixel points in a video frame;
and setting the color value of the conventional pixel point as the color value of the target conventional pixel point in the video frame.
5. The method of claim 1, wherein selecting a corresponding target animation picture from a set of animation pictures for describing a text animation according to the video playback time comprises:
acquiring the current display progress information of the character animation according to the video playing time;
and selecting a corresponding target animation picture from an animation picture set for describing the character animation according to the display progress information.
6. The method for implementing character animation according to claim 1, wherein obtaining the current boundary position of the area according to the video playing time comprises:
acquiring an animation display time interval corresponding to the character animation;
acquiring the current display time of the character animation according to the video playing time and the animation display time interval;
and acquiring the current region boundary position according to the current display time and the mapping relationship between the character animation display time and the region boundary position.
7. The method for implementing character animation according to claim 1, wherein obtaining the current boundary position of the area according to the video playing time comprises:
acquiring a mapping relation between video playing time and an area boundary position;
and acquiring the current region boundary position according to the mapping relationship and the video playing time.
8. A character animation realization device is characterized by comprising:
the image acquisition unit is used for acquiring a video frame and a character image corresponding to video playing time, wherein the character image comprises character content needing to be displayed in a video;
the target frame generating unit is used for constructing a target video frame to be fused according to the video frame, and the color value of a pixel point in the target video frame is a preset color value;
the picture selection unit is used for selecting a corresponding target animation picture from an animation picture set for describing character animation according to the video playing time;
the pixel acquisition unit is used for acquiring a target pixel point positioned in a preset character display area from the target video frame, wherein the preset character display area comprises a character animation display sub-area and a character display sub-area;
a boundary determining unit configured to: after acquiring a video frame and a character picture and before color fusion, acquiring a current region boundary position according to the video playing time, wherein the region boundary position is a region boundary position of a current character animation display sub-region and a character display sub-region in a preset character display region; comparing the position of the target pixel point with the boundary position of the area to obtain a comparison result; determining whether the target pixel point is located in the character animation display sub-area or the character display sub-area according to the comparison result;
the first fusion unit is used for carrying out color fusion on corresponding pixel points in the video frame and corresponding pixel points in the target animation picture to obtain color values of the target pixel points when the target pixel points are located in a current character animation display sub-region;
and the second fusion unit is used for carrying out color fusion on the corresponding pixel points in the video frame and the corresponding pixel points in the character picture to obtain the color values of the target pixel points when the target pixel points are positioned in the current character display sub-area.
9. The apparatus for implementing character animation according to claim 8, wherein the first fusing unit is configured to:
determining a corresponding first pixel point to be fused in the video frame and a corresponding second pixel point to be fused in the target animation picture according to the position of the target pixel point;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the second pixel point to be fused in the target animation picture to obtain a fused color value;
updating the color value of the target pixel point in the target video frame to the fused color value;
the second fusion unit is configured to:
determining a corresponding first pixel point to be fused in a video frame according to the position of the target pixel point, and determining a corresponding third pixel point to be fused in the text picture;
fusing the color value of the first pixel point to be fused in the video frame with the color value of the third pixel point to be fused in the text picture to obtain a fused color value;
and updating the color value of the target pixel point in the target video frame to the fused color value.
10. The character animation realization apparatus of claim 8, wherein the boundary determination unit is configured to:
acquiring an animation display time interval corresponding to the character animation;
acquiring the current display time of the character animation according to the video playing time and the animation display time interval;
and acquiring the current region boundary position according to the current display time and the mapping relationship between the character animation display time and the region boundary position.
11. The character animation realization apparatus of claim 8, wherein the boundary determination unit is configured to:
acquiring a mapping relation between video playing time and an area boundary position;
and acquiring the current region boundary position according to the mapping relationship and the video playing time.
12. A terminal, comprising a memory storing a computer program and a processor loading the computer program to perform the method of any one of claims 1-7.
13. A storage medium storing a computer program which, when executed by a processor, implements the character animation implementation method according to any one of claims 1 to 7.
CN201711206503.4A 2017-11-27 2017-11-27 Character animation realization method, device, terminal and storage medium Active CN108337547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711206503.4A CN108337547B (en) 2017-11-27 2017-11-27 Character animation realization method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711206503.4A CN108337547B (en) 2017-11-27 2017-11-27 Character animation realization method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN108337547A CN108337547A (en) 2018-07-27
CN108337547B true CN108337547B (en) 2020-01-14

Family

ID=62923105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711206503.4A Active CN108337547B (en) 2017-11-27 2017-11-27 Character animation realization method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN108337547B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934985B (en) * 2019-01-25 2021-04-09 深圳市象形字科技股份有限公司 Bullet screen technology-based queuing and calling system display method
CN110062176B (en) * 2019-04-12 2020-10-30 北京字节跳动网络技术有限公司 Method and device for generating video, electronic equipment and computer readable storage medium
CN110213638B (en) * 2019-06-05 2021-10-08 北京达佳互联信息技术有限公司 Animation display method, device, terminal and storage medium
CN110971839B (en) * 2019-11-18 2022-10-04 咪咕动漫有限公司 Video fusion method, electronic device and storage medium
CN111182361B (en) * 2020-01-13 2022-06-17 青岛海信移动通信技术股份有限公司 Communication terminal and video previewing method
CN112053450A (en) 2020-09-10 2020-12-08 脸萌有限公司 Character display method and device, electronic equipment and storage medium
CN113240779B (en) * 2021-05-21 2024-02-23 北京达佳互联信息技术有限公司 Method and device for generating text special effects, electronic equipment and storage medium
CN113421214A (en) * 2021-07-15 2021-09-21 北京小米移动软件有限公司 Special effect character generation method and device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102111658A (en) * 2009-12-29 2011-06-29 财团法人工业技术研究院 Interactive video and audio playing system and method
US8537282B2 (en) * 2002-11-15 2013-09-17 Thomson Licensing Method and apparatus for composition of subtitles
CN104702856A (en) * 2013-12-10 2015-06-10 音圆国际股份有限公司 Real-time selfie special-effect MV (music video) compositing system device and real-time selfie special-effect MV compositing method applied to karaoke machines
CN104882151A (en) * 2015-06-05 2015-09-02 福建星网视易信息系统有限公司 Method, device and system for displaying multimedia resources in song singing
CN107124624A (en) * 2017-04-21 2017-09-01 腾讯科技(深圳)有限公司 The method and apparatus of video data generation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8537282B2 (en) * 2002-11-15 2013-09-17 Thomson Licensing Method and apparatus for composition of subtitles
CN102111658A (en) * 2009-12-29 2011-06-29 财团法人工业技术研究院 Interactive video and audio playing system and method
CN104702856A (en) * 2013-12-10 2015-06-10 音圆国际股份有限公司 Real-time selfie special-effect MV (music video) compositing system device and real-time selfie special-effect MV compositing method applied to karaoke machines
CN104882151A (en) * 2015-06-05 2015-09-02 福建星网视易信息系统有限公司 Method, device and system for displaying multimedia resources in song singing
CN107124624A (en) * 2017-04-21 2017-09-01 腾讯科技(深圳)有限公司 The method and apparatus of video data generation

Also Published As

Publication number Publication date
CN108337547A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108337547B (en) Character animation realization method, device, terminal and storage medium
CN109246464B (en) User interface display method, device, terminal and storage medium
CN109640188B (en) Video preview method and device, electronic equipment and computer readable storage medium
US10482660B2 (en) System and method to integrate content in real time into a dynamic real-time 3-dimensional scene
CN113114841B (en) Dynamic wallpaper acquisition method and device
KR102614263B1 (en) Interaction methods and apparatus, electronic devices and computer-readable storage media
CN109191549A (en) Show the method and device of animation
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
WO2020220773A1 (en) Method and apparatus for displaying picture preview information, electronic device and computer-readable storage medium
CN109495427B (en) Multimedia data display method and device, storage medium and computer equipment
CN110070496A (en) Generation method, device and the hardware device of image special effect
CN111803951A (en) Game editing method and device, electronic equipment and computer readable medium
EP4345756A1 (en) Special effect generation method and apparatus, electronic device and storage medium
US20220392130A1 (en) Image special effect processing method and apparatus
CN113923499B (en) Display control method, device, equipment and storage medium
US20230306694A1 (en) Ranking list information display method and apparatus, and electronic device and storage medium
CN114245028B (en) Image display method and device, electronic equipment and storage medium
CN111796826A (en) Bullet screen drawing method, device, equipment and storage medium
CN113645476B (en) Picture processing method and device, electronic equipment and storage medium
CN108305310B (en) Character animation realization method, device, terminal and storage medium
WO2024051541A1 (en) Special-effect image generation method and apparatus, and electronic device and storage medium
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
CN114422698A (en) Video generation method, device, equipment and storage medium
CN114428660A (en) Page processing method, device, equipment and storage medium
CN113332720A (en) Game map display method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant