CN109272565A - Animation playing method, device, storage medium and terminal - Google Patents

Animation playing method, device, storage medium and terminal

Info

Publication number
CN109272565A
CN109272565A (application CN201710586589.1A)
Authority
CN
China
Prior art keywords
pixel
area
parameter value
color
transparency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710586589.1A
Other languages
Chinese (zh)
Inventor
邓春国
谌启亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710586589.1A priority Critical patent/CN109272565A/en
Publication of CN109272565A publication Critical patent/CN109272565A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • H04L65/764Media network packet handling at the destination 

Abstract

The invention discloses an animation playing method, device, storage medium, and terminal, belonging to the field of Internet technologies. The method includes: for each original picture in a first picture sequence, obtaining the color parameter value and transparency parameter value of each pixel in the original picture; generating, according to the color parameter values and transparency parameter values of the pixels, a target picture comprising a first area and a second area, where each color channel of the first area stores the color parameter values and one color channel of the second area stores the transparency parameter values; converting the second picture sequence composed of the target pictures into a first video file; and performing transparency-effect synthesis processing on the first video file to obtain a second video file for playing. Because a video file is used as the carrier of the animation, the decoding time can be greatly reduced and the animation can be played without occupying a large amount of memory, so this playing mode is highly efficient.

Description

Animation playing method, device, storage medium and terminal
Technical field
The present invention relates to the field of Internet technologies, and in particular to an animation playing method, device, storage medium, and terminal.
Background technique
With the continuous development of Internet technology, software applications with an animation playing function offer increasingly rich playing styles and increasingly dazzling visual effects. For social software applications in particular, if a short animation with a transparency effect can be played at an appropriate moment while two parties are communicating, the user's visual experience can be significantly enhanced without interfering with the parties' reading of the conversation in the chat box. Therefore, animations are currently generally played with a transparency effect.
To achieve an animation with a transparency effect, the related art generally uses a PNG (Portable Network Graphics, an image file storage format) sequence carrying transparency information. A PNG sequence is essentially an animation resource package composed of multiple pictures, each of which carries transparency information; the animation is realized by decoding and playing this resource package.
In the process of implementing the present invention, the inventors found that the related art has at least the following problems:
The animation resource package composed of multiple pictures is usually very large, so decoding takes a long time and occupies a large amount of memory. As a result, this animation playing mode is inefficient and its playing performance is poor.
Summary of the invention
To solve the problems of the related art, embodiments of the present invention provide an animation playing method, device, storage medium, and terminal. The technical solutions are as follows:
According to a first aspect, an animation playing method is provided, the method comprising:
for each original picture in a first picture sequence, obtaining a color parameter value and a transparency parameter value of each pixel in the original picture;
generating, according to the color parameter values and transparency parameter values of the pixels, a target picture comprising a first area and a second area, where the first area and the second area are consistent with the pictured scene of the original picture, each color channel of the first area stores the color parameter values, and any one color channel of the second area stores the transparency parameter values;
converting a second picture sequence composed of the target pictures into a first video file; and
performing transparency-effect synthesis processing on the first video file to obtain a second video file for playing.
According to a second aspect, an animation playing device is provided, the device comprising:
an obtaining module, configured to obtain, for each original picture in a first picture sequence, the color parameter value and the transparency parameter value of each pixel in the original picture;
a generation module, configured to generate, according to the color parameter values and transparency parameter values of the pixels, a target picture comprising a first area and a second area, where the first area and the second area are consistent with the pictured scene of the original picture, each color channel of the first area stores the color parameter values, and any one color channel of the second area stores the transparency parameter values;
a conversion module, configured to convert a second picture sequence composed of the target pictures into a first video file; and
a processing module, configured to perform transparency-effect synthesis processing on the first video file to obtain a second video file for playing.
According to a third aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the animation playing method according to the first aspect.
According to a fourth aspect, a terminal is provided. The terminal comprises a processor and a memory, and the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the animation playing method according to the first aspect.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
After the color parameter value and the transparency parameter value of each pixel in each original picture are obtained, the color parameter values are stored in the color channels of the first area of the target picture and the transparency parameter values are stored in any one color channel of the second area. The second picture sequence composed of at least one target picture can then be converted into a first video file carrying the transparency information; transparency-effect synthesis processing is performed on the first video file, and the second video file obtained after the processing is played to realize the animation effect. Because a video file is used as the carrier of the transparent animation, compared with playing the animation by means of a PNG sequence, the decoding time can be greatly reduced and the animation can be played without occupying a large amount of memory. Therefore, this playing mode is more efficient and delivers better playing performance.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
Fig. 1 is a flowchart of an animation playing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of processing a picture according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the texture coordinates of OpenGL according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of segmenting the areas of a picture according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of playing an animation effect according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an animation playing device according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the implementations of the present invention are described in further detail below with reference to the accompanying drawings.
Before the embodiments of the present invention are explained in detail, some terms involved in the embodiments are first explained.
Animation: a broad definition is a technique of forming moving images by shooting or drawing frame by frame and playing the frames continuously. That is, regardless of what is shot or drawn, as long as the material is produced frame by frame and a moving picture is formed by continuous playback during viewing, it may be called an animation.
In the embodiments of the present invention, animation playing may be used in various scenarios. For a social software application in particular, playing an animation at a fitting moment during the communication between two parties can markedly improve the user experience. For example, if a social application has a gift-giving function, playing a short animation while a user gives a gift to a friend, for example an animation showing the gift or the process of presenting it, can give both parties a good visual experience.
In summary, animation playing can be triggered by a certain condition. The trigger condition may be that the messages exchanged by the two communicating parties contain a keyword that triggers an animation effect; or that the user enables a function with an animation effect in the software application. For example, after the gift-giving function is enabled, an animation of the gift-presenting process can be played. The embodiments of the present invention do not specifically limit the form of the trigger condition.
In addition, to avoid interfering with the parties' reading of the conversation in the chat box, the animation played is usually transparent. To solve the problems of high memory consumption and poor playing performance in current animation playing, the embodiments of the present invention propose a method that uses a video file with a specific format as the carrier of the animation and plays the animation through a native player. The specific format may be MP4 (Moving Picture Experts Group 4) or FLV (Flash Video, a streaming media format), among others, which is not specifically limited in the embodiments of the present invention.
In another respect, a normal video file is typically played in the normal manner, with no expectation of transparent playing, so such a video file cannot itself carry transparency information through a non-color channel. Directly playing such a video file therefore cannot achieve a transparent animation effect, and the video file requires special processing. The embodiments of the present invention adopt the processing approach of adding transparency information to the video file.
That is, in the embodiments of the present invention, to play an animation based on a video file, a new picture is first generated from each picture in the data source (the PNG sequence). Each newly generated picture is divided into two areas that depict the same pictured scene. One area contains only the color information of the original picture, i.e., the color information is stored in the color channels of this area; the other area contains only the transparency information of the original picture, i.e., the transparency information is stored in any one color channel of this area. By saving the transparency information in a color channel, the defect that the transparency information would be lost if the PNG sequence were directly synthesized into a video while the transparency information remained in the original non-color channel is avoided.
Afterward, a new picture sequence is formed from the newly generated pictures, and this new picture sequence is converted into the first video file. The first video file obtained at this point carries the transparency information. Next, transparency-effect synthesis processing is performed on the first video file to obtain the second video file eventually used to realize the animation effect. A more detailed description is given in the following embodiments.
Fig. 1 is a flowchart of an animation playing method according to an embodiment of the present invention. Referring to Fig. 1, the method flow provided by this embodiment includes:
101. For each original picture in the first picture sequence, obtain the color parameter value and the transparency parameter value of each pixel in the original picture.
In the embodiments of the present invention, the first picture sequence refers to the picture sequence on which animation playing currently needs to be performed. The first picture sequence may be a PNG sequence, and each picture in the PNG sequence contains an Alpha channel.
The Alpha channel is a special layer used to record transparency information. For example, for a picture stored with 16 bits, 5 bits may represent red (R), 5 bits green (G), 5 bits blue (B), and 1 bit the transparency; in this case, a pixel is either fully transparent or fully opaque. For a picture stored with 32 bits, 8 bits each may represent R, G, B, and transparency; in this case, besides fully transparent and fully opaque, the Alpha channel can also represent 256 levels of translucency.
In addition, the Alpha value normally ranges from 0 to 1, where 1 indicates fully opaque and 0 indicates fully transparent. In the embodiments of the present invention, the transparency parameter refers to this Alpha value. The color parameter value may be an RGB (Red Green Blue) value, a YUV (luma-chroma) value, or a CMYK (cyan-magenta-yellow-black) value, which is not specifically limited in the embodiments of the present invention. That is, the method provided by the embodiments of the present invention is applicable to pictures in color modes such as RGB, YUV, or CMYK.
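As an illustration of the 32-bit storage case described above (the class and method names are hypothetical, not from the patent), the four 8-bit components of an ARGB pixel can be unpacked with bit shifts, and the 0-255 alpha byte mapped to the 0-1 Alpha value:

```java
public class ArgbExample {
    // Unpack a 32-bit ARGB pixel into its four 8-bit components.
    static int alpha(int argb) { return (argb >>> 24) & 0xFF; }
    static int red(int argb)   { return (argb >>> 16) & 0xFF; }
    static int green(int argb) { return (argb >>> 8) & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }

    // Map the 0..255 alpha byte to the 0..1 Alpha value used in the text.
    static float alphaValue(int argb) { return alpha(argb) / 255.0f; }

    public static void main(String[] args) {
        int pixel = 0x80FF0000; // a half-transparent pure-red pixel
        System.out.println(alpha(pixel)); // prints 128
        System.out.println(red(pixel));   // prints 255
    }
}
```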
102. Generate, according to the color parameter values and transparency parameter values of the pixels, a target picture comprising a first area and a second area, where the first area and the second area are consistent with the pictured scene of the original picture, each color channel of the first area stores the color parameter values, and any one color channel of the second area stores the transparency parameter values.
In the embodiments of the present invention, so that the transparency information can subsequently be preserved in the generated video file, the first step is to generate a new picture from each original picture included in the first picture sequence. A picture newly generated from an original picture is referred to herein as a target picture.
Fig. 2 is a schematic diagram of a target picture. The first area refers to the left half of the picture shown in Fig. 2, and the second area refers to the right half. The first area and the second area depict the same pictured scene; that is, both the first area and the second area contain the pictured scene of the original picture.
The difference is that the first area is generated based on the color parameter value of each pixel in the original picture, i.e., the first area contains only the color parameter values of the original picture; in other words, the color channels of the first area store the color parameter values. It should be noted that, taking RGB parameter values as an example, "each color channel of the first area stores the color parameter values" means that the R channel of the first area stores the R parameter value, the G channel stores the G parameter value, and the B channel stores the B parameter value. That is, each color parameter value is stored in its matching color channel.
The second area is generated based on the transparency parameter value of each pixel in the original picture, i.e., the second area contains only the transparency parameter values of the original picture; in other words, any one of the color channels of the second area is used to store the transparency parameter values. Continuing with the RGB mode as an example, the color channel used to store the transparency parameter values may be any one of the R, G, and B channels. In the embodiments of the present invention, the other two color channels, which do not store the transparency parameter values, usually store no other color information.
In summary, because data in a color channel is not lost when the video is synthesized, the embodiments of the present invention store the transparency information in a color channel in order to avoid losing it. Furthermore, since each color channel of the first area stores the color information of the original picture, also storing the transparency information there would mix the color information and the transparency information together. Therefore, the embodiments of the present invention store the transparency information in a color channel of the second area, and this channel can be any one color channel of the second area.
In addition, it should be noted that in Fig. 2 the transparency parameter values are stored in the G channel of the second area, so the right half appears essentially green. Similarly, if the transparency parameter values were stored in the R channel of the second area, the right half would appear red, and if they were stored in the B channel of the second area, the right half would appear blue. The left half simply shows the normal display colors of the original picture.
When the above process of generating the target picture from the original picture is implemented in code, the main program code is as follows:
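The patent's own listing is not reproduced in this text. The following is a minimal sketch of the described process, assuming 32-bit ARGB pixels and a target picture twice as wide as the original; the array names data[] and image[] follow the surrounding text, while everything else is hypothetical:

```java
public class TargetPictureSketch {
    // image[]: w*h ARGB pixels of the original picture.
    // Returns data[]: (2*w)*h pixels; the left half keeps only the color
    // information, and the right half stores the alpha in the G channel.
    static int[] makeTargetPicture(int[] image, int w, int h) {
        int[] data = new int[2 * w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = image[y * w + x];
                int a = (p >>> 24) & 0xFF;
                // First area (left half): RGB color information, fully opaque.
                data[y * 2 * w + x] = 0xFF000000 | (p & 0x00FFFFFF);
                // Second area (right half): transparency in the G channel only.
                data[y * 2 * w + w + x] = 0xFF000000 | (a << 8);
            }
        }
        return data;
    }

    public static void main(String[] args) {
        int[] image = { 0x80123456 }; // one half-transparent pixel
        int[] data = makeTargetPicture(image, 1, 1);
        System.out.printf("%08X %08X%n", data[0], data[1]); // FF123456 FF008000
    }
}
```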
Here, data[] refers to the pixel data of the processed target picture, and image[] refers to the pixel data of the original picture. In addition, it should be noted that, besides the left half and right half of a picture, the first area and the second area may also be the top half and bottom half of a picture, which is not specifically limited in the embodiments of the present invention. The embodiments of the present invention are described only by taking the case where the first area and the second area are the left half and the right half of a picture as an example.
103. Convert the second picture sequence composed of the target pictures into a first video file, perform transparency-effect synthesis processing on the first video file to obtain a second video file, and play the second video file.
In the embodiments of the present invention, each original picture in the first picture sequence can be processed in the manner shown in step 102, yielding a series of target pictures; the picture sequence composed of these target pictures is referred to as the second picture sequence. When the second picture sequence is converted into the first video file, a specified multimedia video processing tool can be invoked — here, the ffmpeg library. For example, the ffmpeg library may be called to convert the second picture sequence into a first video file with a specific format, such as an MP4 file or an FLV file.
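As a sketch of this conversion step (the frame filename pattern, frame rate, and encoding options are assumptions, not taken from the patent), an ffmpeg command line that turns a target-picture sequence into an MP4 file could look like the following; here the command is only assembled and printed rather than executed:

```shell
# Hypothetical ffmpeg invocation: 30 fps input frames named target_0001.png, ...,
# encoded with H.264 into the "first video file".
cmd="ffmpeg -framerate 30 -i target_%04d.png -c:v libx264 -pix_fmt yuv420p first_video.mp4"
echo "$cmd"
```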
In conclusion the sequence of pictures with animation result of broadcast is converted into first video text by implementation above Part.Next, it is also necessary to by carrying out transparent effect synthesis processing to obtained the first video file, can obtain eventually for It realizes the second video file of animation effect, and then realizes animation effect by playing the second video file.
It should be noted that the reason the first video file must undergo transparency-effect synthesis processing is that it is generated from the second picture sequence: if the first video file were played directly, a picture like that shown in Fig. 2 would appear on the screen, with the left and right halves each playing the animation in a split-screen manner but without any transparency effect. To avoid this situation, the embodiments of the present invention further process the obtained first video file in the following manner to realize the animation effect.
Taking the case where OpenGL is used for video rendering in the animation playing stage as an example, to perform transparency-effect synthesis processing on the first video file, the first step is to call a shader to separate the color information and the transparency information in each video frame for subsequent synthesis. That is, since each target picture in the second picture sequence is essentially divided into a first area containing only color information and a second area containing only transparency information, each original video frame of the first video file must likewise be segmented into color information and transparency information in the playing stage, so as to obtain the color parameter value of each pixel in the first area of the original video frame and the transparency parameter value of each pixel in the second area, which are then used in the subsequent transparency-effect synthesis.
As an optional implementation, the embodiments of the present invention define two groups of texture coordinate values to segment the color information and the transparency information in each video frame. The first texture coordinate values, used to segment the color information in each video frame, are as follows:
private float[] textureCoords1 = {
    0.0f, 1.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 0.0f, 1.0f,
    half, 0.0f, 0.0f, 1.0f,
    half, 1.0f, 0.0f, 1.0f};
The second texture coordinate values, used to segment the transparency information in each video frame, are as follows:
private float[] textureCoords2 = {
    half, 1.0f, 0.0f, 1.0f,
    half, 0.0f, 0.0f, 1.0f,
    1.0f, 0.0f, 0.0f, 1.0f,
    1.0f, 1.0f, 0.0f, 1.0f};
Here, half = 0.5f indicates the horizontal middle position of each video frame, and the suffix f marks a float (floating-point) literal.
Before these two groups of texture coordinate values are explained, the texture coordinates of OpenGL are first introduced. Referring to Fig. 3, in the texture coordinate space of OpenGL, regardless of the real size of a texture, the texture coordinate of the lower-left corner is always (0.0, 0.0), that of the upper-right corner is always (1.0, 1.0), that of the upper-left corner is always (0.0, 1.0), and that of the lower-right corner is always (1.0, 0.0). That is, in the embodiments of the present invention, the texture coordinate values of each video frame lie between 0 and 1. A texture here refers to a picture, i.e., one video frame of the first video file in the embodiments of the present invention.
Taking Fig. 4 as an example, for each original video frame included in the first video file, the embodiments of the present invention first determine, according to the first texture coordinate values, the first area of the original video frame that was generated only from color parameter values, i.e., the left half in Fig. 4, and then obtain the color parameter value of each pixel from the color channels of the first area.
Similarly, according to the second texture coordinate values, the second area of the original video frame that was generated only from transparency parameter values is determined, i.e., the right half in Fig. 4, and the transparency parameter value of each pixel is then obtained from the target color channel of the second area. The target color channel is the color channel of the second area that stores the transparency parameter values.
In conclusion the embodiment of the present invention is realized by the first texture coordinate value and the second texture coordinate value of definition Color information and transparence information are partitioned into each video frame.Next, the transparent of each video frame can be carried out Spend the synthesis of information.
As an optional implementation, when the transparency information of each video frame in the first video file is synthesized, a texture drawing function (drawtexture) is first called to pass the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area of the video frame to the shader. The shader is then invoked to generate a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area.
As an optional implementation, when synthesizing the transparency information, the embodiments of the present invention specifically multiply the transparency parameter values of the second area (the right half) with the corresponding color parameter values of the first area (the left half) to obtain a target video frame with a transparency effect.
That is, for each pixel in the first area, the target pixel that matches it in the second area is first determined. Since the first area and the second area have the same size and depict the same scenic picture, their pixels correspond one to one: the target pixel of the pixel in row 1, column 1 of the first area is the pixel in row 1, column 1 of the second area; the target pixel of the pixel in row 1, column 2 of the first area is the pixel in row 1, column 2 of the second area; and so on.
Next, the color parameter value of the pixel is multiplied by the transparency parameter value of its target pixel to obtain the display pixel value of the pixel. Performing this operation on every pixel in the first area yields the display pixel value of each pixel, and a target video frame can be generated from the calculated display pixel values.
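The per-pixel multiplication described above can be sketched on the CPU side as follows; this is a simplified stand-in for what the shader does on the GPU, and all names are hypothetical. Each output pixel takes its RGB from the first area, reads the matching second-area pixel's G channel as the transparency, and multiplies each color channel by the 0-1 alpha:

```java
public class TransparencyComposite {
    // left: the n color pixels of the first area; right: the matching
    // second-area pixels whose G channel stores the transparency.
    // Returns the composed frame, each color channel multiplied by alpha.
    static int[] compose(int[] left, int[] right, int n) {
        int[] outPixels = new int[n];
        for (int i = 0; i < n; i++) {
            int c = left[i];
            int a = (right[i] >>> 8) & 0xFF; // transparency from the G channel
            int r = ((c >>> 16) & 0xFF) * a / 255;
            int g = ((c >>> 8) & 0xFF) * a / 255;
            int b = (c & 0xFF) * a / 255;
            outPixels[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
        return outPixels;
    }

    public static void main(String[] args) {
        // A white pixel composed with alpha 128 (stored as G = 0x80).
        int[] frame = compose(new int[]{0xFFFFFFFF}, new int[]{0xFF008000}, 1);
        System.out.printf("%08X%n", frame[0]); // prints 80808080
    }
}
```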
Further, by performing the above operations on each original video frame of the first video file, a series of target video frames is obtained. After these target video frames are combined in order to form the second video file, the display of the animation effect can be completed by playing the second video file. When the shader is called to synthesize the transparency information, the main program code is as follows:
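The shader listing referred to above is not reproduced in this text. A hypothetical GLSL ES fragment shader implementing the described synthesis — sampling the color from the left half, sampling the transparency from the G channel of the right half, and multiplying — might look like:

```glsl
precision mediump float;
varying vec2 vColorCoord;   // interpolated from the first texture coordinates (left half)
varying vec2 vAlphaCoord;   // interpolated from the second texture coordinates (right half)
uniform sampler2D uTexture; // the decoded video frame

void main() {
    vec3 color = texture2D(uTexture, vColorCoord).rgb; // color information
    float alpha = texture2D(uTexture, vAlphaCoord).g;  // transparency from the G channel
    gl_FragColor = vec4(color * alpha, alpha);         // the multiplication described above
}
```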
It should be noted that "first" and "second" in "first picture sequence" and "second picture sequence" are used merely to distinguish and name different picture sequences; for example, any picture sequence on which animation playing is to be performed may be called the first picture sequence. Similarly, the original picture mentioned above may be any picture in the first picture sequence, and the target picture may be any picture in the second picture sequence.
In conclusion the embodiment of the present invention, which is realized through video, plays animation, specific local specialties are shown can be such as Shown in Fig. 5.In Fig. 5, after in the chat messages that a certain user sends including keyword " bubbling ", it can trigger about keyword The broadcasting of the animation of " bubbling ".What it is due to broadcasting is the animation with transparent effect, is placed in animation lower layer Chat content is also clear as it can be seen that on the basis of improving user's visual experience, and does not interfere the reading of user's progress chat content. In addition, decoding duration can be greatly decreased and accounted for memory after carrying out animation broadcasting using video rather than PNG sequence With.By taking MP4 video as an example, by MP4 video and PNG sequence two ways, the same animation of the same terminal plays is used When, the comparison of performance indexes is as follows:
Type           Frame count   Size      Avg. decode time per frame   Memory usage
PNG sequence   86            2032KB    40ms                         100M
MP4 video      86            403KB     3ms                          8M
As the table above shows, playing the animation as an MP4 video significantly improves on the PNG-sequence approach in file size, decoding time, and memory usage alike.
With the method provided in the embodiment of the present invention, after obtaining the color parameter value and transparency parameter value of each pixel in each original picture, storing the color parameter value in each color channel of the first area of the target picture, and storing the transparency parameter value in any one color channel of the second area, the embodiment of the present invention can further convert the second picture sequence composed of at least one target picture into a first video file carrying transparency information, perform transparency-effect synthesis processing on the first video file, and play the resulting second video file to present the animation effect. Because a video file serves as the carrier of the animation, decoding time can be substantially reduced compared with playing the animation as a PNG sequence, and the animation can be played without occupying a large amount of memory; this playback approach is therefore more efficient and performs better.
Fig. 6 is a schematic structural diagram of an animation playing device provided in an embodiment of the present invention. Referring to Fig. 6, the device includes:
an obtaining module 601, configured to obtain, for each original picture in the first picture sequence, the color parameter value and transparency parameter value of each pixel in the original picture;
a generation module 602, configured to generate, according to the color parameter value and transparency parameter value of each pixel, a target picture including a first area and a second area, where the first area and the second area each have the same size as the original picture, each color channel of the first area stores the color parameter value, and any one color channel of the second area stores the transparency parameter value;
a conversion module 603, configured to convert the second picture sequence composed of the target pictures into a first video file;
a processing module 604, configured to perform transparency-effect synthesis processing on the first video file to obtain a second video file for playback.
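As a minimal sketch of what the generation module does, the following Python function packs an RGBA original picture into a target picture of twice the height. The function name and the vertical first-area/second-area layout are assumptions for illustration; the patent only requires that the transparency value land in any one color channel of the second area:

```python
def make_target_picture(original):
    """Pack an RGBA picture into a target picture twice its height.

    `original` is a list of rows of (r, g, b, a) tuples. The first area
    (top half) stores the color parameter values in its color channels;
    the second area (bottom half) stores the transparency parameter value
    in its R channel, leaving the other channels unused.
    """
    first_area = [[(r, g, b) for (r, g, b, a) in row] for row in original]
    second_area = [[(a, 0, 0) for (r, g, b, a) in row] for row in original]
    return first_area + second_area
```

A sequence of such target pictures can then be encoded into an ordinary video (e.g. MP4), which carries the transparency information through codecs that have no alpha channel of their own.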
In another embodiment, the processing module 604 is configured to: for each original video frame included in the first video file, obtain the color parameter value of each pixel in the first area of the original video frame and the transparency parameter value of each pixel in the second area; generate a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area, where the size of the target video frame is consistent with that of the first area and the second area; and combine the obtained target video frames in order to obtain the second video file.
In another embodiment, the processing module 604 is configured to: obtain the color parameter value of each pixel from each color channel of the first area according to a preset first texture coordinate value; and obtain the transparency parameter value of each pixel from the target color channel of the second area according to a preset second texture coordinate value, the target color channel being the color channel, among the color channels of the second area, that stores the transparency parameter value.
In another embodiment, the processing module 604 is further configured to call a texture rendering function to pass the color parameter value of each pixel in the first area and the transparency parameter value of each pixel in the second area to a shader;
the processing module 604 is further configured to call the shader to perform the step of generating a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area.
In another embodiment, the processing module 604 is configured to: for each pixel in the first area, determine the target pixel in the second area that matches the pixel;
multiply the color parameter value of the pixel by the transparency parameter value of the matched target pixel to obtain the display pixel value of the pixel; and
generate the target video frame according to the display pixel values calculated for the pixels.
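The product operation itself is a per-channel multiplication of the color by the normalized transparency value. As a sketch (8-bit channel values are assumed here for illustration):

```python
def display_pixel(color, alpha):
    """Multiply an (r, g, b) color by its matched transparency value.

    Both the color channels and `alpha` are assumed to be 8-bit values in
    [0, 255]; the result is the display pixel value after the product
    operation (i.e. alpha-premultiplied color).
    """
    r, g, b = color
    return (r * alpha // 255, g * alpha // 255, b * alpha // 255)
```

An alpha of 255 leaves the color unchanged, an alpha of 0 yields a fully transparent black pixel, and intermediate values scale the color proportionally.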
With the device provided in the embodiment of the present invention, after obtaining the color parameter value and transparency parameter value of each pixel in each original picture, storing the color parameter value in each color channel of the first area of the target picture, and storing the transparency parameter value in any one color channel of the second area, the embodiment of the present invention can further convert the second picture sequence composed of at least one target picture into a first video file carrying transparency information, perform transparency-effect synthesis processing on the first video file, and play the resulting second video file to present the animation effect. Because a video file serves as the carrier of the animation, decoding time can be substantially reduced compared with playing the animation as a PNG sequence, and the animation can be played without occupying a large amount of memory; this playback approach is therefore more efficient and performs better.
It should be understood that when the animation playing device provided in the above embodiment plays an animation, the division into the functional modules above is only an example; in practical applications, the functions above may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the animation playing device provided in the above embodiment and the embodiments of the animation playing method belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
Fig. 7 is a schematic structural diagram of a terminal provided in an embodiment of the present invention. The terminal can be used to perform the animation playing method provided in the above embodiments. Referring to Fig. 7, the terminal includes:
an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190, and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 7 does not limit the terminal, which may include more or fewer components than illustrated, combine certain components, or use a different component arrangement. Specifically:
The RF circuit 110 can be used to receive and send signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, it hands the information over to one or more processors 180 for processing; it also sends uplink data to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 110 can also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 120 can be used to store software programs and modules; the processor 180 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area can store data created according to the use of the terminal 700 (such as audio data and a phone book). In addition, the memory 120 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 can be used to receive input numeric or character information, and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touchpad, collects touch operations performed by the user on or near it (such as operations performed on or near the touch-sensitive surface 131 with a finger, a stylus, or any other suitable object or accessory), and drives a corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. Furthermore, the touch-sensitive surface 131 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface 131, the input unit 130 may further include other input devices 132, which may include but are not limited to one or more of a physical keyboard, function keys (such as a volume control key and a power key), a trackball, a mouse, and a joystick.
The display unit 140 can be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal 700; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; after the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in Fig. 7 the touch-sensitive surface 131 and the display panel 141 implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 700 may further include at least one sensor 150, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal 700 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping). Other sensors that may also be configured on the terminal 700, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here again.
The audio circuit 160, a loudspeaker 161, and a microphone 162 can provide an audio interface between the user and the terminal 700. The audio circuit 160 can transmit the electrical signal converted from received audio data to the loudspeaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data. After the audio data is output to the processor 180 for processing, it is sent, for example, to another terminal via the RF circuit 110, or output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between a peripheral earphone and the terminal 700.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 700 can help the user send and receive emails, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access.
The processor 180 is the control center of the terminal 700. It connects all parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the terminal 700 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 180.
The terminal 700 further includes a power supply 190 (such as a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system. The power supply 190 may further include one or more direct-current or alternating-current power supplies, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and other arbitrary components.
Although not shown, the terminal 700 may further include a camera, a Bluetooth module, and the like, which are not described here. Specifically, in this embodiment, the display unit of the terminal is a touch-screen display, and the terminal further includes a memory in which at least one instruction, at least one program, a code set, or an instruction set is stored; the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor of the terminal to implement the animation playing method described in the above embodiments.
In another exemplary embodiment, the embodiment of the present invention further provides a computer-readable storage medium in which at least one instruction, at least one program, a code set, or an instruction set is stored; the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor of the terminal to implement the animation playing method described in the above embodiments.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. An animation playing method, characterized in that the method includes:
for each original picture in a first picture sequence, obtaining the color parameter value and transparency parameter value of each pixel in the original picture;
generating, according to the color parameter value and transparency parameter value of each pixel, a target picture including a first area and a second area, where the first area and the second area each have the same size as the original picture, each color channel of the first area stores the color parameter value, and any one color channel of the second area stores the transparency parameter value;
converting a second picture sequence composed of the target pictures into a first video file;
performing transparency-effect synthesis processing on the first video file to obtain a second video file for playback.
2. The method according to claim 1, characterized in that performing transparency-effect synthesis processing on the first video file to obtain the second video file includes:
for each original video frame included in the first video file, obtaining the color parameter value of each pixel in the first area of the original video frame and the transparency parameter value of each pixel in the second area;
generating a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area, where the size of the target video frame is consistent with that of the first area and the second area;
combining the obtained target video frames in order to obtain the second video file.
3. The method according to claim 2, characterized in that obtaining the color parameter value of each pixel in the first area of the original video frame and the transparency parameter value of each pixel in the second area includes:
obtaining the color parameter value of each pixel from each color channel of the first area according to a preset first texture coordinate value;
obtaining the transparency parameter value of each pixel from the target color channel of the second area according to a preset second texture coordinate value, the target color channel being the color channel, among the color channels of the second area, that stores the transparency parameter value.
4. The method according to claim 2, characterized in that the method further includes:
calling a texture rendering function to pass the color parameter value of each pixel in the first area and the transparency parameter value of each pixel in the second area to a shader;
calling the shader to perform the step of generating a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area.
5. The method according to claim 2 or 4, characterized in that generating a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area includes:
for each pixel in the first area, determining the target pixel in the second area that matches the pixel;
multiplying the color parameter value of the pixel by the transparency parameter value of the target pixel to obtain the display pixel value of the pixel;
generating the target video frame according to the display pixel values calculated for the pixels.
6. An animation playing device, characterized in that the device includes:
an obtaining module, configured to obtain, for each original picture in a first picture sequence, the color parameter value and transparency parameter value of each pixel in the original picture;
a generation module, configured to generate, according to the color parameter value and transparency parameter value of each pixel, a target picture including a first area and a second area, where the first area and the second area each have the same size as the original picture, each color channel of the first area stores the color parameter value, and any one color channel of the second area stores the transparency parameter value;
a conversion module, configured to convert a second picture sequence composed of the target pictures into a first video file;
a processing module, configured to perform transparency-effect synthesis processing on the first video file to obtain a second video file for playback.
7. The device according to claim 6, characterized in that the processing module is configured to: for each original video frame included in the first video file, obtain the color parameter value of each pixel in the first area of the original video frame and the transparency parameter value of each pixel in the second area; generate a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area, where the size of the target video frame is consistent with that of the first area and the second area; and combine the obtained target video frames in order to obtain the second video file.
8. The device according to claim 7, characterized in that the processing module is configured to: obtain the color parameter value of each pixel from each color channel of the first area according to a preset first texture coordinate value; and obtain the transparency parameter value of each pixel from the target color channel of the second area according to a preset second texture coordinate value, the target color channel being the color channel, among the color channels of the second area, that stores the transparency parameter value.
9. The device according to claim 7, characterized in that the processing module is further configured to call a texture rendering function to pass the color parameter value of each pixel in the first area and the transparency parameter value of each pixel in the second area to a shader;
the processing module is further configured to call the shader to perform the step of generating a target video frame according to the color parameter values of the pixels in the first area and the transparency parameter values of the pixels in the second area.
10. The device according to claim 7 or 9, characterized in that the processing module is configured to: for each pixel in the first area, determine the target pixel in the second area that matches the pixel;
multiply the color parameter value of the pixel by the transparency parameter value of the target pixel to obtain the display pixel value of the pixel;
generate the target video frame according to the display pixel values calculated for the pixels.
11. A computer-readable storage medium, characterized in that at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the animation playing method according to any one of claims 1 to 5.
12. A terminal, characterized in that the terminal includes a processor and a memory, at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the animation playing method according to any one of claims 1 to 5.
CN201710586589.1A 2017-07-18 2017-07-18 Animation playing method, device, storage medium and terminal Pending CN109272565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710586589.1A CN109272565A (en) 2017-07-18 2017-07-18 Animation playing method, device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710586589.1A CN109272565A (en) 2017-07-18 2017-07-18 Animation playing method, device, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN109272565A true CN109272565A (en) 2019-01-25

Family

ID=65152750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710586589.1A Pending CN109272565A (en) 2017-07-18 2017-07-18 Animation playing method, device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN109272565A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097619A (en) * 2019-04-30 2019-08-06 腾讯科技(深圳)有限公司 Animation effect implementation method, device and equipment in application program
CN110289861A (en) * 2019-05-20 2019-09-27 湖南大学 The half precision compressed sensing method of sampling
CN110290398A (en) * 2019-06-21 2019-09-27 北京字节跳动网络技术有限公司 Video delivery method, device, storage medium and electronic equipment
CN110288670A (en) * 2019-06-19 2019-09-27 杭州绝地科技股份有限公司 A kind of UI retouches the high-performance rendering method of side special efficacy
CN110351592A (en) * 2019-07-17 2019-10-18 深圳市蓝鲸数据科技有限公司 Animation rendering method, device, computer equipment and storage medium
CN110989878A (en) * 2019-11-01 2020-04-10 百度在线网络技术(北京)有限公司 Animation display method and device in applet, electronic equipment and storage medium
CN111179386A (en) * 2020-01-03 2020-05-19 广州虎牙科技有限公司 Animation generation method, device, equipment and storage medium
CN111901581A (en) * 2020-08-28 2020-11-06 南京星邺汇捷网络科技有限公司 Video pixel processing system and method based on 2D video to 3D effect
CN112019911A (en) * 2020-09-08 2020-12-01 北京乐我无限科技有限责任公司 Webpage animation display method and device and electronic equipment
CN112153472A (en) * 2020-09-27 2020-12-29 广州博冠信息科技有限公司 Method and device for generating special picture effect, storage medium and electronic equipment
CN112153408A (en) * 2020-09-28 2020-12-29 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN112351283A (en) * 2020-12-24 2021-02-09 杭州米络星科技(集团)有限公司 Transparent video processing method
CN112399196A (en) * 2019-08-16 2021-02-23 阿里巴巴集团控股有限公司 Image processing method and device
CN112714357A (en) * 2020-12-21 2021-04-27 北京百度网讯科技有限公司 Video playing method, video playing device, electronic equipment and storage medium
CN113423016A (en) * 2021-06-18 2021-09-21 北京爱奇艺科技有限公司 Video playing method, device, terminal and server
CN113709554A (en) * 2021-08-26 2021-11-26 上海哔哩哔哩科技有限公司 Animation video generation method and device, and animation video playing method and device in live broadcast room
CN114598937A (en) * 2022-03-01 2022-06-07 上海哔哩哔哩科技有限公司 Animation video generation and playing method and device
CN115396730A (en) * 2022-07-21 2022-11-25 广州方硅信息技术有限公司 Video image processing method, computer device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1860785A (en) * 2003-10-15 2006-11-08 索尼株式会社 Reproducing device, reproducing method, reproducing program, and recording medium
CN103971391A (en) * 2013-02-01 2014-08-06 腾讯科技(深圳)有限公司 Animation method and device
CN105979282A (en) * 2016-06-02 2016-09-28 腾讯科技(深圳)有限公司 Animation frame processing method, animation frame processing server, terminal and system
CN106330672A (en) * 2016-08-22 2017-01-11 腾讯科技(深圳)有限公司 Instant messaging method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1860785A (en) * 2003-10-15 2006-11-08 索尼株式会社 Reproducing device, reproducing method, reproducing program, and recording medium
CN103971391A (en) * 2013-02-01 2014-08-06 腾讯科技(深圳)有限公司 Animation method and device
CN105979282A (en) * 2016-06-02 2016-09-28 腾讯科技(深圳)有限公司 Animation frame processing method, animation frame processing server, terminal and system
CN106330672A (en) * 2016-08-22 2017-01-11 腾讯科技(深圳)有限公司 Instant messaging method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HEAVI: "WebGL Colors and Textures", 《HTTPS://WWW.CNBLOGS.COM/W-WANGLEI/P/6659809.HTML》 *
东篱雪: "What is an Alpha Channel?", 《HTTPS://WWW.CNBLOGS.COM/SUOGASUS/P/5311264.HTML》 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097619A (en) * 2019-04-30 2019-08-06 腾讯科技(深圳)有限公司 Animation effect implementation method, device and equipment in application program
CN110097619B (en) * 2019-04-30 2022-12-13 腾讯科技(深圳)有限公司 Animation effect implementation method, device and equipment in application program
CN110289861A (en) * 2019-05-20 2019-09-27 湖南大学 The half precision compressed sensing method of sampling
CN110289861B (en) * 2019-05-20 2021-09-07 湖南大学 Semi-precision compressed sensing sampling method
CN110288670A (en) * 2019-06-19 2019-09-27 杭州绝地科技股份有限公司 A kind of UI retouches the high-performance rendering method of side special efficacy
CN110288670B (en) * 2019-06-19 2023-06-23 杭州绝地科技股份有限公司 High-performance rendering method for UI (user interface) tracing special effect
CN110290398A (en) * 2019-06-21 2019-09-27 北京字节跳动网络技术有限公司 Video delivery method, device, storage medium and electronic equipment
CN110290398B (en) * 2019-06-21 2021-11-05 北京字节跳动网络技术有限公司 Video issuing method and device, storage medium and electronic equipment
CN110351592A (en) * 2019-07-17 2019-10-18 深圳市蓝鲸数据科技有限公司 Animation rendering method, device, computer equipment and storage medium
CN110351592B (en) * 2019-07-17 2021-09-10 深圳市蓝鲸数据科技有限公司 Animation presentation method and device, computer equipment and storage medium
CN112399196A (en) * 2019-08-16 2021-02-23 阿里巴巴集团控股有限公司 Image processing method and device
CN112399196B (en) * 2019-08-16 2022-09-02 阿里巴巴集团控股有限公司 Image processing method and device
CN110989878A (en) * 2019-11-01 2020-04-10 百度在线网络技术(北京)有限公司 Animation display method and device in applet, electronic equipment and storage medium
CN110989878B (en) * 2019-11-01 2021-07-20 百度在线网络技术(北京)有限公司 Animation display method and device in applet, electronic equipment and storage medium
CN111179386A (en) * 2020-01-03 2020-05-19 广州虎牙科技有限公司 Animation generation method, device, equipment and storage medium
CN111901581A (en) * 2020-08-28 2020-11-06 南京星邺汇捷网络科技有限公司 Video pixel processing system and method based on 2D-to-3D video effect conversion
CN112019911A (en) * 2020-09-08 2020-12-01 北京乐我无限科技有限责任公司 Webpage animation display method and device and electronic equipment
CN112153472A (en) * 2020-09-27 2020-12-29 广州博冠信息科技有限公司 Method and device for generating special picture effect, storage medium and electronic equipment
CN112153408A (en) * 2020-09-28 2020-12-29 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN112714357A (en) * 2020-12-21 2021-04-27 北京百度网讯科技有限公司 Video playing method, video playing device, electronic equipment and storage medium
CN112714357B (en) * 2020-12-21 2023-10-13 北京百度网讯科技有限公司 Video playing method, video playing device, electronic equipment and storage medium
CN112351283A (en) * 2020-12-24 2021-02-09 杭州米络星科技(集团)有限公司 Transparent video processing method
CN113423016A (en) * 2021-06-18 2021-09-21 北京爱奇艺科技有限公司 Video playing method, device, terminal and server
CN113709554A (en) * 2021-08-26 2021-11-26 上海哔哩哔哩科技有限公司 Animation video generation method and device, and animation video playing method and device in live broadcast room
CN114598937A (en) * 2022-03-01 2022-06-07 上海哔哩哔哩科技有限公司 Animation video generation and playing method and device
CN114598937B (en) * 2022-03-01 2023-12-12 上海哔哩哔哩科技有限公司 Animation video generation and playing method and device
CN115396730A (en) * 2022-07-21 2022-11-25 广州方硅信息技术有限公司 Video image processing method, computer device and storage medium

Similar Documents

Publication Publication Date Title
CN109272565A (en) Animation playing method, device, storage medium and terminal
CN106531149B (en) Information processing method and device
CN105183296B (en) Interactive interface display method and device
TWI550548B (en) Exploiting frame to frame coherency in a sort-middle architecture
CN110111279B (en) Image processing method and device and terminal equipment
CN106127673B (en) Video processing method, device and computer equipment
CN105808060B (en) Animation playing method and apparatus
CN104519404B (en) Playing method and device for Graphics Interchange Format (GIF) files
CN104134230A (en) Image processing method, image processing device and computer equipment
CN109271327A (en) Memory management method and device
CN107770618A (en) Image processing method, device and storage medium
CN105187692B (en) Video capture method and device
CN107507160A (en) Image fusion method, terminal and computer-readable storage medium
CN106204552B (en) Video source detection method and device
CN108846274A (en) Security verification method, device and terminal
CN106204423A (en) Augmented-reality-based picture adjustment method, device and terminal
CN107845363B (en) Display control method and mobile terminal
CN104737198B (en) Recording visibility test results at input geometric object granularity
CN109146760A (en) Watermark generation method, device, terminal and storage medium
CN107895352A (en) Image processing method and mobile terminal
CN106296634B (en) Method and apparatus for detecting similar images
CN107886321A (en) Payment method and mobile terminal
CN107784232A (en) Image processing method and mobile terminal
CN106504303B (en) Method and apparatus for playing frame animation
CN108037966A (en) Interface display method, device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190125