CN113115097A - Video playing method and device, electronic equipment and storage medium


Info

Publication number: CN113115097A
Application number: CN202110343335.3A
Authority: CN (China)
Prior art keywords: video, transparent, image, information, played
Original language: Chinese (zh)
Other versions: CN113115097B (granted)
Inventor: 矫志宇
Applicant/Assignee: Beijing Dajia Internet Information Technology Co Ltd
Legal status: Active (granted)

Classifications

    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs


Abstract

The disclosure relates to a video playing method, a video playing device, an electronic device and a storage medium. The method comprises the following steps: acquiring a video to be played; when the video to be played is determined to be a transparent video, sequentially extracting color information and transparency information of each transparent image frame in the transparent video, and generating a merged image sequence based on the color information and the transparency information; and sequentially displaying the merged images in the merged image sequence. This scheme keeps the production and playing of special-effect animations relatively simple and helps improve their processing efficiency.

Description

Video playing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video playing, and in particular, to a video playing method and apparatus, an electronic device, and a storage medium.
Background
At present, special-effect animations are used more and more widely in scenarios such as video editing and promotional activity pages. In the related art, the display effect of a special-effect animation is usually realized by means of sequence-frame pictures, dynamic pictures (such as gif, webp or apng) or the Lottie animation library.
However, these approaches can usually be implemented only by relying on a large number of underlying processing libraries, so the installation files and running load of the animation editing and playing program are large, and so is the storage space required while the program runs; installation and maintenance costs are therefore high. Moreover, the production and playing of the special-effect animation are complex and tedious, and the overall display efficiency of the special-effect animation is low.
Disclosure of Invention
The present disclosure provides a video playing method, apparatus, electronic device and storage medium to at least solve the technical problems in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a video playing method is provided, including:
acquiring a video to be played;
under the condition that the video to be played is determined to be a transparent video, sequentially extracting color information and transparency information of each transparent image frame in the transparent video, and generating a merged image sequence based on the color information and the transparency information;
and sequentially displaying a plurality of merged images in the merged image sequence.
Optionally, the determining that the video to be played is a transparent video includes:
determining that the video to be played is a transparent video when the type identifier of the video to be played is a preset transparent identifier; or,
and determining that the video to be played is a transparent video under the condition that the type information acquired in association with the video to be played is preset transparent information.
Optionally, obtaining the type identifier of the video to be played includes:
and acquiring the type identifier of the video to be played from the video information pre-associated to the video to be played.
Optionally, the transparent video includes a color image sequence and a transparent channel, the transparent channel being configured to record a transparency value of each original color image in the color image sequence, and extracting the color information and transparency information of any transparent image frame in the transparent video includes:
taking the color value of each pixel point of the original color image in any transparent image frame as the color information of that transparent image frame; and,
and taking the transparency value of the original color image in any transparent image frame as the transparency information of any transparent image frame.
Optionally, the generating a merged image sequence based on the color information and the transparency information includes:
and sequentially adjusting the transparency of each pixel point of the original color image in each transparent image frame based on the transparency information of each transparent image frame, and taking an ordered set formed by each adjusted image frame as the merged image sequence.
Optionally, the sequentially extracting color information and transparency information of each transparent image frame in the transparent video includes:
and calling a custom React Native component library to sequentially extract the color information and transparency information of each transparent image frame in the transparent video.
Optionally, the multiple merged images include a base merged image and an interpolated merged image,
the method further comprises the following steps: acquiring interframe information of each transparent image frame;
generating a merged image sequence based on the color information and transparency information, comprising:
sequentially generating each basic merged image corresponding to each transparent image frame based on the color information and the transparency information;
and on the basis of the basic merged image, generating an interpolation merged image among all the basic merged images according to the inter-frame information.
Optionally,
the method further comprises the following steps: acquiring the playing interval duration of adjacent transparent image frames in the transparent video;
the sequentially displaying the plurality of merged images in the merged image sequence comprises:
and sequentially displaying a plurality of merged images in the merged image sequence according to the playing interval duration.
Optionally,
the sequentially displaying the plurality of merged images in the merged image sequence comprises: sequentially displaying a plurality of merged images in the merged image sequence according to a fixed playing interval duration;
the method further comprises the following steps: when the currently played image is the last merged image in the merged image sequence, if the invoking duration of the first merged image in the merged image sequence exceeds the interval duration, invoking the first merged image before the display of the last merged image ends.
According to a second aspect of the embodiments of the present disclosure, a video playing apparatus is provided, including:
a video acquisition unit configured to acquire a video to be played;
the sequence generation unit is configured to sequentially extract color information and transparency information of each transparent image frame in the transparent video under the condition that the video to be played is determined to be the transparent video, and generate a combined image sequence based on the color information and the transparency information;
an image presentation unit configured to sequentially present a plurality of merged images in the merged image sequence.
Optionally, the sequence generating unit is further configured to:
determining that the video to be played is a transparent video when the type identifier of the video to be played is a preset transparent identifier; or,
and determining that the video to be played is a transparent video under the condition that the type information acquired in association with the video to be played is preset transparent information.
Optionally, the sequence generating unit is further configured to:
and acquiring the type identifier of the video to be played from the video information pre-associated to the video to be played.
Optionally, the transparent video includes a color image sequence and a transparent channel, the transparent channel is used to record transparency values of each original color image in the color image sequence, and the sequence generating unit is further configured to:
taking the color value of each pixel point of the original color image in any transparent image frame as the color information of that transparent image frame; and,
and taking the transparency value of the original color image in any transparent image frame as the transparency information of any transparent image frame.
Optionally, the sequence generating unit is further configured to:
and sequentially adjusting the transparency of each pixel point of the original color image in each transparent image frame based on the transparency information of each transparent image frame, and taking an ordered set formed by each adjusted image frame as the merged image sequence.
Optionally, the sequence generating unit is further configured to:
and calling a custom React Native component library to sequentially extract the color information and transparency information of each transparent image frame in the transparent video.
Optionally, the multiple merged images include a base merged image and an interpolated merged image, and the apparatus further includes:
an interframe information acquisition unit configured to acquire interframe information of the respective transparent image frames;
the sequence generation unit is further configured to:
sequentially generating each basic merged image corresponding to each transparent image frame based on the color information and the transparency information;
and on the basis of the basic merged image, generating an interpolation merged image among all the basic merged images according to the inter-frame information.
Optionally, the apparatus further comprises:
an interval duration acquisition unit configured to acquire a play interval duration of adjacent transparent image frames in the transparent video;
the image presentation unit is further configured to:
and sequentially displaying a plurality of merged images in the merged image sequence according to the playing interval duration.
Optionally, the image presenting unit is further configured to: sequentially displaying a plurality of merged images in the merged image sequence according to a fixed playing interval duration;
the device further comprises:
and a pre-invoking unit configured to, when the currently played image is the last merged image in the merged image sequence, start invoking the first merged image before the display of the last merged image ends if the invoking duration of the first merged image in the merged image sequence exceeds the interval duration.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video playing method as described in any of the embodiments of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a storage medium is provided, where instructions in the storage medium, when executed by a processor of a video playback electronic device, enable the video playback electronic device to perform the video playback method described in any one of the above first aspects.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product, which includes a computer program and/or instructions, and when executed by a processor, the computer program and/or instructions implement the video playing method according to any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the embodiment of the disclosure, under the condition that the video to be played is the transparent video, the merged image sequence is directly generated through the color information and the transparency information of each transparent image frame extracted from the transparent video, and the plurality of merged images in the merged image sequence are sequentially displayed, so that the dynamic effect of the special effect animation is presented in the process of sequentially displaying each merged image. Obviously, the special-effect animation is sequentially displayed in the mode of displaying the transparent video without excessively depending on a bottom processing library to process the transparent video, so that the application program installation file of the playing scheme occupies smaller space and has relatively lower running load, and can be integrated in a real Native component library in the form of functional plug-ins; and the special effect animation is realized through the transparent video, so that the making and playing processes of the special effect animation are relatively simple, and the processing efficiency of the special effect animation is favorably improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow chart illustrating a video playback method according to an embodiment of the present disclosure;
fig. 2 is a flow chart illustrating another video playback method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a transparent video cover view according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating the ordering of transparent image frames in a transparent video according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram illustrating a transparent image frame according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating an ordering of merged images in a sequence of merged images, according to an embodiment of the present disclosure;
fig. 7 is a schematic block diagram illustrating a video playback device in accordance with an embodiment of the present disclosure;
fig. 8 is a block diagram illustrating an electronic device in accordance with an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
At the present stage, special-effect animations are used more and more widely in scenarios such as video editing and promotional activity pages, which enriches the ways users can play in business scenarios and improves user experience.
Various special-effect-animation display schemes are provided in the related art to realize this display effect. One scheme relies on video editing software and the playing plug-ins integrated in it, which are usually large, so the amount of file data to be installed is large and more device storage space is occupied; the installation and maintenance costs of the software are therefore high.
Another scheme realizes the display effect of the special-effect animation by means of sequence-frame pictures, dynamic pictures (such as gif, webp and apng) or the Lottie animation library. However, such schemes still need to rely on many underlying processing libraries, the production and playing of the special-effect animation are complex, and the overall processing efficiency of the special-effect animation is low.
To solve the above problems in the related art, the present disclosure provides a video display scheme in which merged images are constructed from the color information and transparency information extracted from a transparent video and are displayed in sequence, so that the special-effect animation is presented by means of a transparent video, a video with a special display form.
Fig. 1 is a flowchart illustrating a video playing method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method, applied to a display device, may include the following steps 102-106.
Step 102: acquiring a video to be played.
In this embodiment, the display device may take a variety of forms, including but not limited to a cell phone, a tablet, a wearable device, a personal computer, and the like. The display device may obtain the video to be played in various ways. For example, the video to be displayed may be stored locally on the display device, in which case the device only needs to read it from local storage; the display device may also receive the video to be displayed from another device; of course, the video to be displayed may also be generated by a user editing it locally on the display device, in which case the device only needs to obtain it from the corresponding video editing program. The embodiments of the present disclosure do not limit this.
It should be noted that the video playing method described in the embodiments of the present disclosure may be integrated into an application with a video playing function, such as a player (or a functional component of a player), or may serve as an independent functional component that can be called by such an application; the embodiments of the present disclosure do not limit its specific implementation form. For example, the method may be integrated into the native RN module of the React Native component library to generate an extended RN module based on the native one, thereby turning the React Native component library into a custom component library (i.e., the custom React Native component library described below). The display device can then call the custom React Native component library to display the transparent video and thus present the display effect of the special-effect animation.
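As an illustration of this integration, the following is a minimal TypeScript sketch of how such an extended RN module might be exposed to the JavaScript side. The component name AlphaVideoView and its props are hypothetical, introduced only for this example; they are not defined by the present disclosure or by React Native itself.

```typescript
import { requireNativeComponent, ViewProps } from 'react-native';

interface AlphaVideoProps extends ViewProps {
  source: { uri: string }; // hypothetical: URI of the transparent video
  loop?: boolean;          // hypothetical: replay the merged image sequence
}

// Bridges to a native view that the extended RN module registers under
// this (assumed) name on the native side.
export const AlphaVideoView =
  requireNativeComponent<AlphaVideoProps>('AlphaVideoView');
```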
Step 104: when it is determined that the video to be played is a transparent video, sequentially extracting the color information and transparency information of each transparent image frame in the transparent video, and generating a merged image sequence based on the color information and the transparency information.
The video to be displayed acquired by the display device may be a transparent video or a non-transparent video. It should be noted that a transparent video according to the present disclosure is a special video that carries transparency information (e.g., transparency values of pixel points) for each image frame, so that the video can be displayed according to that information to present a special display effect. Like an ordinary video, a transparent video consists of a sequence of video image frames; unlike an ordinary video, however, any transparent image frame of a transparent video contains two sub-images: an original color image and a transparent image. The original color image may be a color image in the general sense, such as an RGB color image or a gray-scale image; the transparent image has the same size as the original color image, except that the value of each of its pixel points is not a color value but the transparency value of the pixel point at the same position in the original color image. The original color image and the transparent image may be placed one above the other, side by side, and so on; the present disclosure does not limit their relative positions. The transparency value of the original color image in this scheme is not essentially different from image transparency in the related art and is not described again. The format of the transparent video may be .mov, .swf, .avi or .mp4, and an encoding format such as Animation or PNG may be adopted; the embodiments of the present disclosure do not limit this.
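To make this layout concrete, the following TypeScript sketch models one decoded frame, assuming (purely for illustration) a top/bottom placement of the two sub-images and RGBA pixel data; all names are illustrative, not part of the disclosure.

```typescript
// One decoded frame of a transparent video, with the original color image
// in the top half and the transparent image in the bottom half.
interface DecodedFrame {
  width: number;    // width of the full decoded frame in pixels
  height: number;   // height of the full frame (twice the color image height)
  rgba: Uint8Array; // decoded pixels, 4 bytes (R, G, B, A) per pixel
}

interface TransparentImageFrame {
  color: Uint8Array; // pixels of the original color image (top half)
  alpha: Uint8Array; // pixels of the transparent image (bottom half)
}

// Split one decoded frame into its two sub-images.
function splitTransparentFrame(frame: DecodedFrame): TransparentImageFrame {
  const half = frame.width * (frame.height / 2) * 4; // bytes in one half
  return {
    color: frame.rgba.subarray(0, half),
    alpha: frame.rgba.subarray(half),
  };
}
```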
In an embodiment, the display device may determine in several ways whether the acquired video to be displayed is a transparent video. As an exemplary embodiment, the video to be displayed may be marked in advance with a type identifier, so that the display device can determine that it is a transparent video when the type identifier is a preset transparent identifier. A video processing device usually obtains and stores related information (or box information) of a video, and the transparent channel of a transparent video is usually called an alpha channel; an "alpha-video" tag can therefore be added in advance to the box information of the video to be played as a flag indicating whether the video is a transparent video. When the video playing method of the present disclosure is applied to a player, after the player obtains the video to be played it can read the alpha-video tag from the box information and, when the value of that field is the preset transparent identifier, determine that the video is a transparent video. The transparent identifier may be a preset special character, such as "T" or "transparent". In addition, the type identifier can be added by the producer of the video during production or by its publisher during publishing. A video interaction platform (such as a short-video platform) may also judge the type of a video uploaded by a publisher after receiving it and add the corresponding type identifier, so that after the display device acquires the video from the platform, it can determine whether the video is transparent according to whether its type identifier is the transparent identifier. With the preset type identifier, the display device can quickly identify the video type, which accelerates the determination and playing of the transparent video to a certain extent.
Further, the type identifier may be included in the video information of the video to be played, so that the display device can obtain it from the video information pre-associated with the video to be played. The video information may be the box information of the video, in which case the display device extracts the type identifier from the box information to determine the video type. The video information may also record other information the player needs when playing the video, such as the video name, the encoding/decoding mode and the codec protocol version number; the present disclosure does not limit this.
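A hedged sketch of this identifier check follows; readBoxInfo is a hypothetical helper standing in for whatever container-metadata parser the player actually uses, while the alpha-video tag and the values "T" and "transparent" follow the examples in the text.

```typescript
interface VideoBoxInfo {
  name?: string;                // e.g. the video name
  codec?: string;               // e.g. the encoding/decoding mode
  tags: Record<string, string>; // container-level metadata tags
}

// Hypothetical parser for the box information of a video file.
declare function readBoxInfo(uri: string): Promise<VideoBoxInfo>;

async function isTransparentVideo(uri: string): Promise<boolean> {
  const info = await readBoxInfo(uri);
  const tag = info.tags['alpha-video'];
  // A preset transparent identifier such as "T" or "transparent" marks the
  // video to be played as a transparent video.
  return tag === 'T' || tag === 'transparent';
}
```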
As another exemplary embodiment, when obtaining a video to be played, the display device may obtain display parameters of the video in association with it, and type information indicating the video type may be added to the display parameters in advance, so that the display device can determine that the video to be played is a transparent video when the obtained type information is the preset transparent information. Still taking the player as an example, the player may obtain the corresponding type information directly as an input parameter when obtaining the video to be played, determine from it whether the video is transparent, and perform the corresponding subsequent processing according to the video type without parsing the box information of the video, thereby further accelerating processing.
Any video to be displayed acquired by the display device may be a transparent video or, of course, a non-transparent video. Therefore, when the type identifier of the video to be displayed is a non-transparent identifier, when the display parameters contain no type information, or when the contained type information is not the preset transparent information, the display device can determine that the video is non-transparent and directly parse and play it with the method for processing conventional video.
In one embodiment, the transparent video may include a color image sequence and a transparent channel, the transparent channel being used to record the transparency values of each original color image in the color image sequence; the display device may then determine the color information and transparency information of any transparent image frame in the transparent video through the transparent channel. For example, the display device may use the color value of each pixel point of the original color image in a transparent image frame as the color information of that frame, and the transparency values of the original color image as the transparency information of that frame. The color information of any transparent image frame extracted in this way is thus the color values of its original color image, and the transparency information is the transparency values of the corresponding pixel points recorded by the transparent channel. In this way, the display device can accurately acquire the color information and transparency information of each original color image in the transparent image frames, which facilitates the subsequent generation of the merged images.
Further, the display device may generate merged images based on the acquired color information and transparency information to form the merged image sequence. For example, the display device may sequentially adjust the transparency of each pixel point of the original color image in each transparent image frame based on the transparency information of that frame, and take the ordered set of adjusted image frames as the merged image sequence. In this way, the corresponding merged image is obtained simply by adjusting the transparency of the original color image in a transparent image frame according to the transparency values recorded in the transparent channel of the transparent video.
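The adjustment can be as simple as rewriting each pixel's alpha byte. The following minimal sketch continues the earlier illustrative types and assumes the transparent image carries the transparency value in its R channel; the function name is not from the disclosure.

```typescript
// Produce one merged image: keep the color values of the original color
// image and overwrite each pixel's alpha byte with the transparency value
// recorded for the corresponding pixel in the transparent image.
function mergeFrame(colorRGBA: Uint8Array, alphaRGBA: Uint8Array): Uint8Array {
  const merged = new Uint8Array(colorRGBA); // copy of the color information
  const pixels = alphaRGBA.length / 4;
  for (let p = 0; p < pixels; p++) {
    // assumption: the transparency value sits in the R channel of the
    // transparent image; byte 3 of each RGBA pixel is the alpha byte
    merged[p * 4 + 3] = alphaRGBA[p * 4];
  }
  return merged;
}
```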
In addition, the merged images in the merged image sequence are exactly the special-effect picture images corresponding to the animation special effect to be shown for the transparent video. That is, in the process of presenting an animation special effect, the special-effect picture at any moment corresponds to one special-effect animation image, and these special-effect animation images are the merged images described above, which in order constitute the merged image sequence. For this reason, the display device presents the animation special effect while it sequentially displays the merged images in the merged image sequence.
In the field of video processing, the video processing components of the React Native component library are often used to process videos, but the native React Native component library cannot play the transparent video described above. The video playing method of the embodiments of the present disclosure can be integrated into that native library, turning it into a custom React Native component library. The display device can then call the custom React Native component library to sequentially extract the color information and transparency information of each transparent image frame in the transparent video, which further simplifies the information acquisition path; at the same time, this makes the scheme of the embodiments of the present disclosure highly compatible, which helps its popularization.
In an embodiment, the merged images in the merged image sequence may include base merged images and interpolated merged images. The display device may obtain inter-frame information of each transparent image frame in the transparent video, such as the color variation of pixel points, the inter-frame correlation or the gradient change of feature values, and generate merged images from this information. For example, the display device may first sequentially generate the base merged images corresponding to the transparent image frames based on the color information and transparency information acquired as above, and then, on the basis of the base merged images, generate interpolated merged images between them according to the inter-frame information. The base merged images are thus generated by combining the original transparent image frames of the video to be played with the transparency information, while an interpolated merged image is generated from the base merged images according to the inter-frame information and is equivalent to a new image obtained by interpolating between adjacent base merged images (similar to digital interpolation). The present disclosure does not limit the relative position and number of the interpolated and base merged images. For example, if the transparent video contains three original color images Y1, Y2 and Y3, three base merged images HF1, HF2 and HF3 can be synthesized from them. The display device may then generate interpolated merged images HI1, HI2 and HI3 in front of HF1, HF2 and HF3 respectively according to the inter-frame information, giving the merged image sequence HI1, HF1, HI2, HF2, HI3, HF3; or generate them after HF1, HF2 and HF3, giving HF1, HI1, HF2, HI2, HF3, HI3; or generate interpolated merged images only between adjacent base merged images, giving HF1, HI1, HF2, HI2, HF3. Of course, other generation manners are also possible and are not described in detail. In this way, the display device extends the images beyond the original transparent image frames of the video to be played, so that the generated merged image sequence contains not only base merged images but also interpolated merged images and the pixel differences between images in the sequence are as small as possible; when the merged images in the sequence are displayed one after another, a smoother and more vivid animation special effect is achieved.
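As one deliberately simple reading of this step, the sketch below inserts a single interpolated merged image between each pair of adjacent base merged images (the HF1, HI1, HF2, HI2, HF3 arrangement above), using plain per-byte linear blending as a stand-in for whatever inter-frame information an implementation actually exploits.

```typescript
// Blend two merged images byte by byte; t = 0.5 gives the midpoint image.
function interpolate(a: Uint8Array, b: Uint8Array, t = 0.5): Uint8Array {
  const out = new Uint8Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = Math.round(a[i] * (1 - t) + b[i] * t);
  }
  return out;
}

// Build the merged image sequence: each base merged image followed by an
// interpolated one, except after the last base image.
function buildSequence(base: Uint8Array[]): Uint8Array[] {
  const seq: Uint8Array[] = [];
  for (let i = 0; i < base.length; i++) {
    seq.push(base[i]);
    if (i + 1 < base.length) {
      seq.push(interpolate(base[i], base[i + 1]));
    }
  }
  return seq; // e.g. HF1, HI1, HF2, HI2, HF3
}
```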
Step 106: sequentially displaying the merged images in the merged image sequence.
In an embodiment, the display device may further obtain the playing interval duration of adjacent transparent image frames in the transparent video and, after the merged image sequence has been generated, sequentially display the merged images in the sequence according to that interval. The playing interval duration may be recorded in the video information of the transparent video, obtained by the display device in association with the transparent video, or extracted from the inter-frame information of the transparent image frames; the embodiments of the present disclosure do not limit this. It can be understood that when the number of merged images in the sequence equals the number of transparent image frames, i.e., only base merged images exist (obtained by adjusting the transparency of the original color images) and no interpolated merged images exist (generated from the base color images and the inter-frame information), the playing interval duration is the interval between adjacent base merged images in the transparent video. The present disclosure does not limit its specific value; the intervals may be equal, such as 17 ms (frame rate of 60 fps), 20 ms (50 fps) or 42 ms (24 fps), or, to achieve flexible and varied animation special effects, they may differ (i.e., variable-frame-rate playing).
In an embodiment, the display device may sequentially display the merged images in the merged image sequence at a fixed playing interval duration, i.e., the playing interval between any two adjacent merged images is the same. The process of presenting an animation special effect is usually cyclic, i.e., the merged images in the merged image sequence can be played in a loop. However, because the first merged image of the sequence must be played first, a playback invoking procedure is launched for the sequence, and the time from launching that procedure to presenting the first merged image (i.e., the invoking duration) is generally long. To reduce this delay as much as possible when cyclically displaying the merged images at equal intervals, the display device can invoke the first merged image in advance. For example, when the currently playing image is the last merged image in the sequence, if the invoking duration of the first merged image exceeds the current playing interval, the display device may start invoking the first merged image before the display of the last merged image ends. In this way, invoking the first merged image begins while the last one is still being displayed, which shortens as much as possible the waiting time after the last merged image is shown (i.e., the interval from displaying the last merged image of one cycle to displaying the first of the next), reduces stutter, and improves the display efficiency of the dynamic video and the display effect of the animation special effect.
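The fixed-interval loop with early invoking of the first merged image might look like the sketch below. The callbacks display and prepareFirstImage, and the parameter firstImageWarmupMs (the measured invoking duration of the first merged image), are assumptions made for this example only.

```typescript
async function playLoop(
  sequence: Uint8Array[],
  intervalMs: number,         // fixed playing interval, e.g. 42 ms for 24 fps
  firstImageWarmupMs: number, // assumed invoking duration of the first image
  display: (img: Uint8Array) => void,
  prepareFirstImage: () => void,
  shouldContinue: () => boolean,
): Promise<void> {
  const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));
  while (shouldContinue()) {
    for (let i = 0; i < sequence.length; i++) {
      display(sequence[i]);
      // While the last merged image is on screen, start invoking the first
      // one early if its warm-up would exceed one playing interval.
      if (i === sequence.length - 1 && firstImageWarmupMs > intervalMs) {
        prepareFirstImage();
      }
      await sleep(intervalMs);
    }
  }
}
```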
According to the embodiments of the present disclosure, when the video to be played is a transparent video, a merged image sequence is generated directly from the color information and transparency information of each transparent image frame extracted from the transparent video, and the merged images in the sequence are displayed one after another, so that the dynamic effect of the special-effect animation is presented while the merged images are shown in turn. Because the special-effect animation is presented by displaying the transparent video, there is no need to rely heavily on an underlying processing library to process it; the installation file of an application using this playing scheme therefore occupies less space, its running load is relatively low, and the scheme can be integrated into a React Native component library in the form of a functional plug-in. Moreover, since the special-effect animation is realized through a transparent video, its production and playing are relatively simple, which helps improve the processing efficiency of the special-effect animation.
Fig. 2 is a flowchart illustrating another video playing method according to an exemplary embodiment of the present disclosure; the method is applied to a display device. Taking as an example the method of the embodiments of the present disclosure integrated into a player running on the display device, the corresponding video playing process is described in detail below with reference to fig. 2 and may include the following steps 202-216.
Step 202, the display device obtains a video to be displayed.
The display device may obtain the video to be played in various ways. For example, the video to be displayed may be stored locally on the display device, in which case the device only needs to read it from local storage; the display device may also receive the video to be displayed from another device; of course, the video to be displayed may also be generated by a user editing it locally on the display device, in which case the device only needs to obtain it from the corresponding video editing program. The embodiments of the present disclosure do not limit this.
It can be understood that, because the video playing method of the embodiments of the present disclosure is integrated in a player, the display device may provide the video to the player for playing after acquiring it. Correspondingly, after the player acquires the video to be played, it can play the video by performing the following steps, which are described below with the player as the executing subject.
In step 204, the player determines whether the video to be played is a transparent video.
For the received video to be played, the player can determine its video type, that is, determine whether it is a transparent video, and then play it with the corresponding playing method according to that type.
In an embodiment, the display device may determine in several ways whether the acquired video to be displayed is a transparent video. As an exemplary embodiment, the video to be displayed may be marked in advance with a type identifier, so that the player can determine that it is a transparent video when the type identifier is a preset transparent identifier. For example, the type identifier may be recorded in the box information of the video to be played, e.g., an "alpha-video" tag added to the box information as a flag indicating whether the video is a transparent video; after obtaining the video to be played, the player can read the alpha-video tag from the box information and, when the value of that field is the preset transparent identifier, determine that the video is a transparent video. The transparent identifier may be a preset special character, such as "T" or "transparent". In addition, the type identifier can be added by the producer of the video during production or by its publisher during publishing. A video interaction platform (such as a short-video platform) may also judge the type of a video uploaded by a publisher after receiving it and add the corresponding type identifier, so that after the display device acquires the video from the platform, it can determine whether the video is transparent according to whether its type identifier is the transparent identifier.
As another exemplary embodiment, when obtaining a video to be played, the player may obtain display parameters of the video in association with it, and type information indicating the video type may be added to the display parameters in advance, so that the player can determine that the video to be played is a transparent video when the obtained type information is the preset transparent information. For example, the player may obtain the corresponding type information directly as an input parameter when obtaining the video to be played, determine from it whether the video is transparent, and perform the corresponding subsequent processing according to the video type without parsing the box information of the video, thereby further accelerating processing.
When the video to be played is determined to be a transparent video, the player can go to step 206 and start the transparent-video playing process; when the video to be played is not a transparent video, the player can go to step 216 and play it directly with the conventional video playing method of the related art.
In step 206, the player determines the transparent image frames contained in the transparent video.
Having determined that the video to be played is a transparent video, the player can now determine the transparent image frames it contains. As shown in fig. 3, the cover image of the transparent video is a cartoon image of Chang'e, and it is assumed that the animation special effect corresponding to the transparent video is Chang'e's clothes fluttering and the starlight around her twinkling.
As shown in fig. 4, the transparent video is composed of transparent image frames T0, T1, ..., Tm arranged in sequence. Any transparent image frame contains two sub-images: an original color image and a transparent image. Taking the first transparent image frame T0 as an example, as shown in fig. 5 it includes an original color image 501 and a transparent image 502. The original color image 501 may be a color image in the general sense, such as an RGB color image or a gray-scale image; the transparent image 502 has the same size as the original color image 501, except that the value of each pixel point is not a color value but the transparency value of the corresponding pixel point at the same position in the original color image 501.
In step 208, the player obtains color information and transparency information of each transparent image frame in the transparent video.
As described above, each transparent image frame in the transparent video includes a corresponding original color image and transparent image. The original color images of the transparent image frames form a color image sequence, and the transparent video further includes a transparent channel for recording the transparency value of each original color image in that sequence, so the player can determine the color information and transparency information of any transparent image frame through the transparent channel. Still taking the first transparent image frame T0 shown in fig. 5 as an example, the player may use the color values of the pixel points of the original color image 501 in T0 as the color information of T0, and the transparency values of the original color image 501 in T0 as the transparency information of T0. The color information of T0 is thus the color values of the original color image 501 in T0, and the transparency information is the transparency values of the corresponding pixel points recorded in the transparent channel (i.e., the values of the corresponding pixel points in the transparent image 502). In this way, the player can accurately acquire the color information and transparency information of each original color image in the transparent image frames, which facilitates the subsequent generation of the merged images.
Step 210: the player generates a merged image sequence from the above information.
Further, the player may generate merged images based on the acquired color information and transparency information to form a merged image sequence. For example, the player may sequentially adjust the transparency of each pixel point of the original color image in each transparent image frame based on the transparency information of that frame, and take the ordered set of adjusted image frames as the merged image sequence. Still taking the first transparent image frame T0 as an example, the player may adjust the transparency of each pixel point in the original color image of T0 based on the transparency information of T0 (i.e., the values of the pixel points in the transparent image 502) to obtain the merged image H0. Processing the remaining transparent image frames T1, T2, ..., Tm in the same manner yields the merged image corresponding to each transparent image frame: H1, H2, ..., Hm. The merged image sequence S can then be formed by arranging H0, H1, H2, ..., Hm in order, as shown in fig. 6.
As can be seen, the merged images in the merged image sequence S are all base merged images, that is, merged images generated in one-to-one correspondence with the transparent image frames from the color information and transparency information obtained above. In fact, the player may also obtain inter-frame information of each transparent image frame in the transparent video, such as the color variation of pixel points, the inter-frame correlation and the gradient change of feature values, and generate interpolated merged images according to this information. The base merged images are generated by combining the original transparent image frames of the video to be played with the transparency information, while an interpolated merged image is generated from the base merged images according to the inter-frame information and is equivalent to a new image obtained by interpolating between adjacent base merged images.
The present disclosure does not limit the relative position and number of the interpolated merged images and the base merged images. Taking fig. 6 as an example again, the player may generate interpolated merged images between the base merged images H0, H1, ..., Hm, in which case the merged image sequence is H0, HI1, H1, HI2, H2, ..., HIm, Hm; of course, other generation manners are also possible and are not described in detail. In this way, the display device extends the images beyond the original transparent image frames of the video to be played, so that the generated merged image sequence contains not only base merged images but also interpolated merged images and the pixel differences between images in the sequence are as small as possible; when the merged images in the sequence are displayed one after another, a smoother and more vivid animation special effect is achieved.
In step 212, the player obtains the playing interval duration of adjacent merged images in the merged image sequence.
In step 214, the player displays the merged images in turn.
The player can also obtain the playing interval duration of adjacent transparent image frames in the transparent video and, after the merged image sequence has been generated, sequentially display the merged images in the sequence according to that interval. The playing interval duration may be recorded in the video information of the transparent video, obtained in association when the player obtains the transparent video, or extracted from the inter-frame information of the transparent image frames; the embodiments of the present disclosure do not limit this. It can be understood that when the number of merged images in the sequence equals the number of transparent image frames, i.e., only base merged images exist (obtained by adjusting the transparency of the original color images) and no interpolated merged images exist (as in fig. 6, where only base merged images such as H0 are present and no interpolated merged images such as HI2), the playing interval duration is the interval between adjacent base merged images in the transparent video. The present disclosure does not limit its specific value; the intervals may be equal, such as 17 ms (frame rate of 60 fps), 20 ms (50 fps) or 42 ms (24 fps), or, to achieve flexible and varied animation special effects, they may differ (i.e., variable-frame-rate playing).
Of course, when base merged images and interpolated merged images both exist in the merged image sequence (for example, base merged images such as H0 together with interpolated merged images such as HI2), the player may determine the playing interval duration between each pair of adjacent merged images according to the inter-frame information and display the merged images in turn at the determined intervals, thereby displaying the merged images in sequence and presenting the corresponding animation special effect. Continuing the foregoing embodiment, by displaying the merged images H0, H1, ..., Hm of fig. 6 in sequence, the player presents, in the display component (e.g., the screen) of the display device, the animation special effect of Chang'e's clothes fluttering and the starlight around her twinkling.
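Pulling the earlier sketches together, an end-to-end usage example might read as follows; every helper here is one of the hypothetical ones introduced above, and render stands in for the display component of the device.

```typescript
declare function render(img: Uint8Array): void; // hypothetical display hook

async function playTransparentVideo(uri: string, frames: DecodedFrame[]) {
  if (!(await isTransparentVideo(uri))) {
    return; // non-transparent video: fall back to conventional playback
  }
  // Build the base merged images H0, H1, ..., Hm from the decoded frames.
  const base = frames.map(f => {
    const { color, alpha } = splitTransparentFrame(f);
    return mergeFrame(color, alpha);
  });
  const sequence = buildSequence(base); // base + interpolated merged images
  await playLoop(
    sequence,
    42,   // fixed playing interval in ms (24 fps)
    60,   // assumed invoking duration of the first merged image
    render,
    () => { /* warm up sequence[0] ahead of the next cycle */ },
    () => true, // loop the special-effect animation
  );
}
```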
In step 216, the player presents the non-transparent video in a conventional manner.
When the type identifier of the video to be displayed is a non-transparent identifier, when the display parameters contain no type information, or when the contained type information is not the preset transparent information, the player can determine that the video to be displayed is a non-transparent video, and the display device can then directly parse and play it with the method for processing conventional video; for the specific process, reference can be made to the related art, which is not repeated here.
It should be noted that the process of determining the video type may also be performed by the display device, so that the display device provides the transparent video to the player for playing when it determines that the video to be played is a transparent video, and provides the non-transparent video to another application for playing when it determines that the video to be played is a non-transparent video (that is, the player in the present scheme may be used only for playing transparent videos). Alternatively, after determining the video type of the video to be played, the display device may provide the corresponding type information in association with the video to be played to the player, so that the player performs the corresponding playing processing according to the type information.
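As a non-limiting illustration of this type-based routing, the sketch below checks a type identifier or associated type information and dispatches the video accordingly; the field names, the 'transparent' constant, and the downstream entry points are assumptions invented for illustration.

```typescript
interface VideoInfo {
  typeId?: string;   // type identifier carried in the video information
  typeInfo?: string; // type information obtained in association
}

function isTransparentVideo(info: VideoInfo): boolean {
  // Either a preset transparent identifier or preset transparent
  // type information marks the video as transparent.
  return info.typeId === 'transparent' || info.typeInfo === 'transparent';
}

function routeVideo(info: VideoInfo, uri: string): void {
  if (isTransparentVideo(info)) {
    playAsTransparentVideo(uri);  // extract color + alpha, merge, display
  } else {
    playAsConventionalVideo(uri); // hand off to an ordinary decoder path
  }
}

// Hypothetical downstream entry points.
declare function playAsTransparentVideo(uri: string): void;
declare function playAsConventionalVideo(uri: string): void;
```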
In addition, the processing and displaying of steps 206-214 may be performed by the player frame by frame, that is, steps 208-214 are performed for the next transparent image frame only after they have been completed for the previous transparent image frame. Of course, each of steps 208-214 may instead be performed for all transparent image frames in turn; either way, each merged image is ultimately displayed in sequence.
Correspondingly to the foregoing embodiment of the video playing method, the present disclosure also provides an embodiment of a video playing apparatus.
Fig. 7 is a schematic block diagram illustrating a video playback device according to an embodiment of the present disclosure. The video playing apparatus shown in this embodiment may be applied to a video playing application such as a player, where the application runs on a terminal; terminals include, but are not limited to, electronic devices such as mobile phones, tablet computers, wearable devices, and personal computers. The video playing application may be an application program installed in the terminal or a web-page application integrated in a browser, and a user can play videos through the video playing application.
As shown in fig. 7, the video playback apparatus may include:
a video acquisition unit 701 configured to acquire a video to be played;
a sequence generating unit 702 configured to, in a case where it is determined that the video to be played is a transparent video, sequentially extract color information and transparency information of each transparent image frame in the transparent video, and generate a merged image sequence based on the color information and the transparency information;
an image presentation unit 703 configured to sequentially present a plurality of merged images in the merged image sequence.
Optionally, the sequence generating unit 702 is further configured to:
determining the video to be played as a transparent video under the condition that the type identifier of the video to be played is a preset transparent identifier; or,
and determining that the video to be played is a transparent video under the condition that the type information acquired in association with the video to be played is preset transparent information.
Optionally, the sequence generating unit 702 is further configured to:
and acquiring the type identifier of the video to be played from the video information pre-associated to the video to be played.
Optionally, the transparent video includes a color image sequence and a transparent channel, the transparent channel is used for recording transparency values of each original color image in the color image sequence, and the sequence generating unit 702 is further configured to:
taking the color value of each pixel point of the original color image in any transparent image frame as the color information of any transparent image frame; and,
and taking the transparency value of the original color image in any transparent image frame as the transparency information of any transparent image frame.
Optionally, the sequence generating unit 702 is further configured to:
and sequentially adjusting the transparency of each pixel point of the original color image in each transparent image frame based on the transparency information of each transparent image frame, and taking an ordered set formed by each adjusted image frame as the merged image sequence.
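As a non-limiting illustration of this per-pixel adjustment, the sketch below builds one merged image by writing each pixel's recorded transparency value into the alpha channel of the original color image; the RGBA/alpha buffer layouts are assumptions of the sketch rather than requirements of the disclosure.

```typescript
// Build one merged image: keep the R, G, B bytes of the original color
// image and overwrite the alpha byte with the transparency value
// recorded in the transparent channel.
function applyTransparency(
  colorRgba: Uint8ClampedArray, // 4 bytes per pixel from the color image
  alpha: Uint8ClampedArray      // 1 byte per pixel from the transparent channel
): Uint8ClampedArray {
  const merged = new Uint8ClampedArray(colorRgba); // copy R, G, B as-is
  for (let p = 0; p < alpha.length; p++) {
    merged[p * 4 + 3] = alpha[p]; // alpha channel only
  }
  return merged;
}

// The merged image sequence is then the ordered set of adjusted frames:
// const mergedSequence = frames.map(f => applyTransparency(f.color, f.alpha));
```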
Optionally, the sequence generating unit 702 is further configured to:
and calling a self-defined React Native component library to sequentially extract the color information and transparency information of each transparent image frame in the transparent video.
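As a non-limiting illustration of calling such a component library, the sketch below invokes a custom native module through React Native's bridge; the module name TransparentVideoExtractor and its method signature are hypothetical, and only the use of a self-defined React Native component library comes from the disclosure.

```typescript
import { NativeModules } from 'react-native';

interface ExtractedFrame {
  color: number[]; // per-pixel color values of the original color image
  alpha: number[]; // per-pixel transparency values
}

// Hypothetical self-defined native module exposed to JavaScript.
const { TransparentVideoExtractor } = NativeModules;

async function extractFrames(videoUri: string): Promise<ExtractedFrame[]> {
  // The native side decodes the transparent video and returns the
  // per-frame color/transparency data across the bridge.
  return TransparentVideoExtractor.extract(videoUri);
}
```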
Optionally, the multiple merged images include a base merged image and an interpolated merged image, and the apparatus further includes:
an inter-frame information acquiring unit 704 configured to acquire inter-frame information of the respective transparent image frames;
the sequence generation unit 702 is further configured to:
sequentially generating each basic merged image corresponding to each transparent image frame based on the color information and the transparency information;
and generating, on the basis of the basic merged images, the interpolated merged images between the basic merged images according to the inter-frame information.
Optionally, the apparatus further comprises:
an interval duration obtaining unit 705 configured to obtain a playing interval duration of adjacent transparent image frames in the transparent video;
the image presentation unit 703 is further configured to:
and sequentially displaying a plurality of merged images in the merged image sequence according to the playing interval duration.
Optionally, the image displaying unit 703 is further configured to: sequentially displaying a plurality of merged images in the merged image sequence according to a fixed playing interval duration;
the device further comprises:
and an early evoking unit 706 configured to, when the currently displayed image is the tail merged image in the merged image sequence, begin to evoke the first merged image in the merged image sequence before the display of the tail merged image ends, in a case where the evoking duration of the first merged image exceeds the playing interval duration.
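As a non-limiting illustration of this early evoking behavior for looped playback, the sketch below starts preparing the first merged image in parallel with the tail image's display interval whenever its preparation cost exceeds the fixed playing interval; all names and timing parameters are assumptions of the sketch.

```typescript
// Display the tail image; if evoking the first image costs more than one
// fixed playing interval, start evoking it immediately so the loop can
// restart without a visible stall.
async function displayTailAndLoop(
  showTail: () => void,            // draw the tail merged image
  evokeFirst: () => Promise<void>, // decode/upload the first merged image
  intervalMs: number,              // fixed playing interval duration
  firstEvokeCostMs: number         // measured evoking duration of frame 0
): Promise<void> {
  showTail();
  if (firstEvokeCostMs > intervalMs) {
    const warmup = evokeFirst();   // overlap with the tail's interval
    await new Promise<void>((r) => setTimeout(r, intervalMs));
    await warmup;
  } else {
    await new Promise<void>((r) => setTimeout(r, intervalMs));
    await evokeFirst();
  }
}
```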
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure also provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video playing method according to any of the above embodiments.
Embodiments of the present disclosure also provide a storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to execute the video playing method according to any of the above embodiments.
Embodiments of the present disclosure also provide a computer program product comprising a computer program and/or instructions which, when executed by a processor, implement the above-mentioned video playing method.
Fig. 8 is a schematic block diagram illustrating an electronic device in accordance with an embodiment of the present disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, image acquisition component 816, and communication component 818.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the video playback method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 818. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The image acquisition component 816 can be used to acquire image data of a subject to form an image of the subject and can perform the necessary processing on the image. The image acquisition component 816 may include a camera module, in which an image sensor senses light from the subject through a lens and provides the resulting light-sensing data to an Image Signal Processor (ISP), which generates the image corresponding to the subject from that data. The image sensor may be a CMOS sensor or a CCD sensor, or an infrared sensor, a depth sensor, or the like; the camera module may be built into the electronic device 800 or may be an external module of the electronic device 800; the ISP may be built into the camera module or may be externally attached to the electronic device (outside the camera module).
The communication component 818 is configured to facilitate communications between the electronic device 800 and other devices in a wired or wireless manner. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 818 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 818 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described video playing method.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described method is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It is noted that, in the present disclosure, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present disclosure are described in detail above, and the principles and embodiments of the present disclosure are explained herein by applying specific examples, and the above description of the embodiments is only used to help understanding the method and core ideas of the present disclosure; meanwhile, for a person skilled in the art, based on the idea of the present disclosure, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present disclosure should not be construed as a limitation to the present disclosure.

Claims (10)

1. A video playback method, comprising:
acquiring a video to be played;
under the condition that the video to be played is determined to be a transparent video, sequentially extracting color information and transparency information of each transparent image frame in the transparent video, and generating a merged image sequence based on the color information and the transparency information;
and sequentially displaying a plurality of merged images in the merged image sequence.
2. The method according to claim 1, wherein the determining that the video to be played is a transparent video comprises:
determining the video to be played as a transparent video under the condition that the type identifier of the video to be played is a preset transparent identifier; or,
and determining that the video to be played is a transparent video under the condition that the type information acquired in association with the video to be played is preset transparent information.
3. The method according to claim 2, wherein obtaining the type identifier of the video to be played comprises:
and acquiring the type identifier of the video to be played from the video information pre-associated to the video to be played.
4. The method according to claim 1, wherein the transparent video comprises a color image sequence and a transparent channel, the transparent channel is used for recording transparency values of each original color image in the color image sequence, and extracting color information and transparency information of any transparent image frame in the transparent video comprises:
taking the color value of each pixel point of the original color image in any transparent image frame as the color information of any transparent image frame; and,
and taking the transparency value of the original color image in any transparent image frame as the transparency information of any transparent image frame.
5. The method of claim 4, wherein generating a merged image sequence based on the color information and transparency information comprises:
and sequentially adjusting the transparency of each pixel point of the original color image in each transparent image frame based on the transparency information of each transparent image frame, and taking an ordered set formed by each adjusted image frame as the merged image sequence.
6. The method according to claim 1, wherein the sequentially extracting color information and transparency information of each transparent image frame in the transparent video comprises:
and calling a self-defined React Native component library to sequentially extract the color information and transparency information of each transparent image frame in the transparent video.
7. A video playback apparatus, comprising:
a video acquisition unit configured to acquire a video to be played;
the sequence generation unit is configured to sequentially extract color information and transparency information of each transparent image frame in the transparent video under the condition that the video to be played is determined to be the transparent video, and generate a combined image sequence based on the color information and the transparency information;
an image presentation unit configured to sequentially present a plurality of merged images in the merged image sequence.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video playback method of any of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video playback method of any of claims 1-6.
10. A computer program product comprising a computer program and/or instructions which, when executed by a processor, implement the video playback method of any one of claims 1 to 6.
CN202110343335.3A 2021-03-30 2021-03-30 Video playing method, device, electronic equipment and storage medium Active CN113115097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110343335.3A CN113115097B (en) 2021-03-30 2021-03-30 Video playing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110343335.3A CN113115097B (en) 2021-03-30 2021-03-30 Video playing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113115097A true CN113115097A (en) 2021-07-13
CN113115097B (en) 2023-05-09

Family

ID=76712811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110343335.3A Active CN113115097B (en) 2021-03-30 2021-03-30 Video playing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113115097B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851385A (en) * 2017-02-20 2017-06-13 北京金山安全软件有限公司 Video recording method and device and electronic equipment
US20200007816A1 (en) * 2017-02-20 2020-01-02 Beijing Kingsoft Internet Security Software Co., Ltd. Video recording method, electronic device and storage medium
CN108235055A (en) * 2017-12-15 2018-06-29 苏宁云商集团股份有限公司 Transparent video implementation method and equipment in AR scenes
CN111669646A (en) * 2019-03-07 2020-09-15 北京陌陌信息技术有限公司 Method, device, equipment and medium for playing transparent video
CN111695525A (en) * 2020-06-15 2020-09-22 恒信东方文化股份有限公司 360-degree clothes fitting display method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473132A (en) * 2021-07-26 2021-10-01 Oppo广东移动通信有限公司 Transparent video compression method, device, storage medium and terminal
CN113473132B (en) * 2021-07-26 2024-04-26 Oppo广东移动通信有限公司 Transparent video compression method, device, storage medium and terminal
CN113645476A (en) * 2021-08-06 2021-11-12 广州博冠信息科技有限公司 Picture processing method and device, electronic equipment and storage medium
CN113645476B (en) * 2021-08-06 2023-10-03 广州博冠信息科技有限公司 Picture processing method and device, electronic equipment and storage medium
CN115022713A (en) * 2022-05-26 2022-09-06 京东科技信息技术有限公司 Video data processing method and device, storage medium and electronic equipment
CN117853377A (en) * 2024-02-08 2024-04-09 荣耀终端有限公司 Image processing method, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113115097B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN113115097B (en) Video playing method, device, electronic equipment and storage medium
US20230230306A1 (en) Animated emoticon generation method, computer-readable storage medium, and computer device
CN108965982B (en) Video recording method and device, electronic equipment and readable storage medium
CN109961747B (en) Electronic ink screen display method and device and electronic equipment
EP3817395A1 (en) Video recording method and apparatus, device, and readable storage medium
CN108924464B (en) Video file generation method and device and storage medium
CN109168062B (en) Video playing display method and device, terminal equipment and storage medium
US9661132B2 (en) Method, apparatus, and storage medium for displaying a conversation interface
CN111031393A (en) Video playing method, device, terminal and storage medium
EP3796317A1 (en) Video processing method, video playing method, devices and storage medium
CN105426086A (en) Display processing method and device of searching functional block in page
CN111479158B (en) Video display method and device, electronic equipment and storage medium
CN113065008A (en) Information recommendation method and device, electronic equipment and storage medium
CN105744133A (en) Video fill-in light method and apparatus
JP3168253U (en) Reception guide device
CN113099297A (en) Method and device for generating click video, electronic equipment and storage medium
CN110719530A (en) Video playing method and device, electronic equipment and storage medium
CA2838878C (en) Method and apparatus for controlling contents in electronic device
CN113452929B (en) Video rendering method and device, electronic equipment and storage medium
CN104837020B (en) The method and apparatus for playing video
CN108983971A (en) Labeling method and device based on augmented reality
CN111612875A (en) Dynamic image generation method and device, electronic equipment and storage medium
CN112445348A (en) Expression processing method, device and medium
CN109413232B (en) Screen display method and device
CN107908324A (en) Method for showing interface and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant