CN112714357A - Video playing method, video playing device, electronic equipment and storage medium - Google Patents


Publication number
CN112714357A
CN112714357A (application CN202011522819.6A)
Authority
CN
China
Prior art keywords
pixel point
frame
video
material video
synthesized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011522819.6A
Other languages
Chinese (zh)
Other versions
CN112714357B (en)
Inventor
于博洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011522819.6A priority Critical patent/CN112714357B/en
Publication of CN112714357A publication Critical patent/CN112714357A/en
Application granted granted Critical
Publication of CN112714357B publication Critical patent/CN112714357B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Abstract

The application discloses a video playing method, a video playing device, electronic equipment and a storage medium, and relates to the field of computer vision. The specific implementation scheme is as follows: receiving a material video sent by a server; each frame in the material video comprises a first image part and a second image part; performing image synthesis on each frame in the material video according to the transparency value carried by each first pixel point in the first image part and the color value carried by each second pixel point in the second image part to obtain a synthesized frame corresponding to each frame in the material video; and rendering and displaying the corresponding composite frames in sequence according to the sequence of the frames in the material video. Therefore, the method can greatly reduce the resource volume, improve the performance of the GPU and improve the playing experience.

Description

Video playing method, video playing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to the field of computer vision, and in particular, to a video playing method, a video playing apparatus, an electronic device, and a storage medium.
Background
A video is itself composed of a set of encoded sequential frame pictures. Each time a sequence frame picture is played, the picture must be re-parsed, which consumes the performance of the Graphics Processing Unit (GPU); moreover, sequence frame picture materials are large in volume.
Therefore, how to improve GPU performance and reduce the resource volume is currently an urgent problem to be solved.
Disclosure of Invention
The application provides a video playing method, a video playing device, an electronic device and a storage medium.
According to an aspect of the present application, there is provided a video playing method, including:
receiving a material video sent by a server; each frame in the material video comprises a first image part and a second image part;
performing image synthesis on each frame in the material video according to the transparency value carried by each first pixel point in the first image part and the color value carried by each second pixel point in the second image part to obtain a synthesized frame corresponding to each frame in the material video;
and rendering and displaying the corresponding composite frames in sequence according to the sequence of the frames in the material video.
According to another aspect of the present application, there is provided a video playback apparatus including:
the receiving module is used for receiving the material video sent by the server; each frame in the material video comprises a first image part and a second image part;
an obtaining module, configured to perform image synthesis on each frame in the material video according to a transparency value carried by each first pixel in the first image portion and a color value carried by each second pixel in the second image portion, so as to obtain a synthesized frame corresponding to each frame in the material video;
and the control module is used for rendering and displaying the corresponding composite frames in sequence according to the sequence of the frames in the material video.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the above-described method.
According to another aspect of the application, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method described above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video playing method provided in an embodiment of the present application;
fig. 3 is a block diagram of a video playback device according to an embodiment of the present application;
fig. 4 is a block diagram of an electronic device provided according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A video playback method, a video playback apparatus, an electronic device, and a storage medium according to embodiments of the present application are described below with reference to the drawings.
Before the video playing method according to the embodiment of the present application is introduced, the alpha channel is briefly introduced.
The alpha channel of a picture describes its transparency and translucency. For example, in a bitmap stored with 16 bits per pixel, each pixel may use 5 bits for the red channel (R channel), 5 bits for the green channel (G channel), 5 bits for the blue channel (B channel), and the last 1 bit for the alpha channel. In this case the alpha bit can only indicate fully transparent or fully opaque, because 1 bit has only two possible values, 0 or 1. As another example, in a bitmap stored with 32 bits per pixel, 8 bits each represent the Red, Green, Blue (RGB) and alpha channels. In this case the alpha channel can represent not only transparent and opaque but also 256 levels of translucency, because its 8 bits allow 256 different values.
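The two storage formats above can be sketched as bit-packing functions. This is a minimal Python illustration of the bit layouts described, not part of the patent's implementation (which targets on-device rendering):

```python
def pack_rgb5551(r5, g5, b5, a1):
    """Pack 5-bit R, G, B components and a 1-bit alpha into one 16-bit value."""
    return (r5 << 11) | (g5 << 6) | (b5 << 1) | a1

def pack_rgba8888(r8, g8, b8, a8):
    """Pack 8-bit R, G, B, alpha components into one 32-bit value."""
    return (r8 << 24) | (g8 << 16) | (b8 << 8) | a8

# 1-bit alpha: only 0 (transparent) or 1 (opaque) is expressible.
assert pack_rgb5551(31, 0, 0, 1) == 0b1111100000000001
# 8-bit alpha: 256 levels, e.g. half-transparent red.
assert pack_rgba8888(255, 0, 0, 128) == 0xFF000080
```

With 1 alpha bit a pixel is either shown or hidden; with 8 alpha bits intermediate translucency levels such as 128 become representable.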
Fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present application.
The video playing method of the embodiment of the present application can be executed by the video playing device provided in the embodiment of the present application, and the device can be configured in an electronic device.
As shown in fig. 1, a video playing method according to an embodiment of the present application includes:
step S101, receiving a material video sent by a server; wherein each frame in the material video comprises a first image part and a second image part.
In this step, each pixel point in the material video carries four channels: the RGB channels and the alpha channel. The first image portion and the second image portion have the same picture size, and each first pixel point in the first image portion corresponds one-to-one to a second pixel point in the second image portion. Each first pixel point in the first image portion stores only one of the two kinds of information: either the original video content (the color values of the RGB channels, with the alpha-channel transparency value set to 0) or the transparency (the alpha-channel value, with the RGB color values set to 0). Each second pixel point in the second image portion stores only the other kind.
Suppose the terminal receives a material video sent by the server. The material video resembles a mirrored video divided down the middle into two sides: one side, for example the left, carries no RGB color information and only the transparency information of the alpha channel, while the other side, for example the right, carries no alpha transparency information and only the color information of the RGB channels. That is, one side of the material video holds the transparency and the other side holds the color.
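The side-by-side layout just described can be sketched as a simple split of each frame. The row-major nested-list representation and the left-alpha/right-color convention below are assumptions made for illustration; the patent does not fix which side carries which channels:

```python
def split_frame(frame):
    """Split one material-video frame into its two same-sized halves.

    `frame` is a row-major grid of pixels; by the convention assumed here,
    the left half carries only alpha values and the right half only RGB
    triples. The two halves correspond pixel-for-pixel.
    """
    half = len(frame[0]) // 2
    alpha_part = [row[:half] for row in frame]   # first image portion
    color_part = [row[half:] for row in frame]   # second image portion
    return alpha_part, color_part

# A tiny 1x4 "frame": two alpha pixels followed by two RGB pixels.
frame = [[255, 0, (10, 20, 30), (40, 50, 60)]]
alpha_part, color_part = split_frame(frame)
assert alpha_part == [[255, 0]]
assert color_part == [[(10, 20, 30), (40, 50, 60)]]
```

Each alpha pixel at position (row, col) in the left half pairs with the color pixel at the same position in the right half, which is the one-to-one correspondence the compositing step relies on.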
It should be noted that, to reduce the resource volume, the server need not store a large number of individual pictures: the material is stored as an MP4 video, whose internal compression records for each picture only the differences from the pictures before and after it. For example, a 200 MB sequence-frame material converted to MP4 format may occupy less than 1 MB, which greatly reduces the resource volume.
And S102, carrying out image synthesis on each frame in the material video according to the transparency value carried by each first pixel point in the first image part and the color value carried by each second pixel point in the second image part to obtain a synthesized frame corresponding to each frame in the material video.
Generally, when the sequence frame pictures of a video are played, each picture itself contains the alpha-channel information in addition to the visible colors; however, the alpha-channel information is lost once the sequence frame pictures are encoded into a video.
Specifically, for any frame in the material video, it is judged whether the transparency value carried by each first pixel point in the first image portion represents transparency or non-transparency. If the transparency value carried by a first pixel point represents transparency, the corresponding synthesized pixel point of the frame's synthesized frame is transparent. If it represents non-transparency, the second pixel point corresponding to that first pixel point is found in the second image portion, and the value of the corresponding synthesized pixel point is obtained from the relationship between the two, for example by multiplying the alpha-channel transparency value of the first pixel point with the RGB color values of the second pixel point.
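The per-pixel rule above can be sketched directly. This minimal Python sketch uses the alpha-times-color multiplication the passage mentions (i.e. premultiplied alpha); treating a zero transparency value as "transparent" is an assumption of this illustration:

```python
def composite_pixel(alpha, rgb):
    """Combine an alpha value from the first image portion with the RGB
    values of the corresponding pixel in the second image portion.

    Returns an RGBA tuple. A transparent first pixel (alpha == 0) yields a
    fully transparent composite pixel; otherwise each color component is
    scaled by alpha/255, i.e. the multiplication described in the text.
    """
    if alpha == 0:
        return (0, 0, 0, 0)            # transparent synthesized pixel point
    r, g, b = rgb
    return (r * alpha // 255, g * alpha // 255, b * alpha // 255, alpha)

assert composite_pixel(0, (200, 100, 50)) == (0, 0, 0, 0)
assert composite_pixel(255, (200, 100, 50)) == (200, 100, 50, 255)
assert composite_pixel(128, (200, 100, 50)) == (100, 50, 25, 128)
```

Applying this function over every pixel position of one frame produces the synthesized frame for step S102.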
It should be noted that a synthesized frame may not be a complete opaque block: some pixel points may be missing in the middle. These missing pixel points produce a hollow effect, which appears in a live video as transparency.
And step S103, rendering and displaying the corresponding composite frames in sequence according to the sequence of the frames in the material video.
In this step, after the synthesized frame corresponding to each frame in the material video is obtained, the synthesized frames are color-blended and rendered with Metal in the order of the frames in the material video and then presented on screen, so that the animation is displayed and played with transparency.
For example, the terminal plays the material video as a transparent video: if there are elements underneath the video, they remain visible through its transparent regions. This can be used in live broadcasting, for example to display gifts.
Therefore, the video playing method in the embodiment of the application receives a material video, which is sent by a server and contains a first image part and a second image part, performs image synthesis on each frame in the material video according to the transparency value carried by each first pixel point in the first image part and the color value carried by each second pixel point in the second image part to obtain a synthesized frame corresponding to each frame in the material video, and sequentially renders and displays the corresponding synthesized frame according to the sequence of each frame in the material video. Therefore, the method can greatly reduce the resource volume, improve the performance of the GPU, restore the alpha channel of the picture when playing the video and improve the playing experience.
Fig. 2 is a schematic flowchart of another video playing method provided in an embodiment of the present application.
As shown in fig. 2, the video playing method according to the embodiment of the present application includes:
in step S201, a compressed file of the material video transmitted by the server is received.
In order to further reduce the resource volume of the material video, the material video is stored in the server in advance in the form of a compressed file, so that the compressed file of the material video is directly called when needed.
Step S202, decompressing the compressed file to obtain a material video; the pixel points of each frame in the material video comprise a transparency channel for carrying a transparency value and a color channel for carrying a color value.
And after obtaining the compressed file, the terminal starts to decompress the file to obtain the material video.
The content of the material video refers to step S101, which is not described again in this embodiment.
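Steps S201 and S202 amount to receiving and unpacking an archive. The following is a minimal Python sketch assuming a ZIP container and the entry name `material.mp4`; the patent specifies neither the archive format nor the entry name:

```python
import io
import zipfile

def decompress_material(compressed_bytes, entry_name="material.mp4"):
    """Unpack the compressed material-video file received from the server.

    Assumes (for illustration only) that the server sends a ZIP archive
    containing a single MP4 entry; returns the raw material-video bytes.
    """
    with zipfile.ZipFile(io.BytesIO(compressed_bytes)) as archive:
        return archive.read(entry_name)

# Round-trip a stand-in payload through the same container format.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("material.mp4", b"\x00\x00\x00\x18ftypmp42")
assert decompress_material(buf.getvalue()) == b"\x00\x00\x00\x18ftypmp42"
```

In practice the decompressed bytes would then be handed to the platform's video decoder to obtain the individual frames.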
Step S203, for any frame in the material video, determining a first target pixel point from the first image part according to the transparency value carried by each first pixel point in the first image part; and the transparency value carried by the first target pixel point represents non-transparency.
In this step, for any frame in the material video, it is judged whether the transparency value carried by each first pixel point in the first image portion represents transparency or non-transparency, and the first pixel points whose carried transparency values represent non-transparency are taken as the first target pixel points.
Step S204, a second target pixel point corresponding to the first target pixel point is determined from the second image part.
That is, according to the first target pixel point, the pixel point at the corresponding arrangement position in the second image portion is found and taken as the second target pixel point.
Step S205, generating a composite frame according to the second target pixel point; the synthesized frame comprises a plurality of synthesized pixel points, each synthesized pixel point is provided with a corresponding second target pixel point, and the color value carried by the color channel of each synthesized pixel point is determined according to the color value carried by the color channel of the corresponding second target pixel point.
The synthetic pixel points correspond to the second target pixel points one by one; the arrangement position of each synthesized pixel point in the synthesized frame is the same as the arrangement position of each second target pixel point in the second image part.
In this step, after the first target pixel point and the second target pixel point are obtained, the value of the synthesized pixel point of the synthesized frame corresponding to the first target pixel point of the frame is obtained from the relationship between the two, for example by multiplying the alpha-channel transparency value of the first target pixel point with the RGB color values of the second target pixel point.
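Steps S203 through S205 can be sketched as one pass over the two image portions. This Python sketch treats any non-zero transparency value as "non-transparent" and uses the alpha-times-color combination mentioned above; both choices are assumptions of this illustration:

```python
def generate_composite_frame(alpha_part, color_part):
    """Build a synthesized frame from the two image portions (S203-S205).

    First target pixel points are those with a non-zero alpha value; each
    is paired with the second target pixel point at the same arrangement
    position, whose color values (scaled by the alpha) fill the synthesized
    frame. All other positions stay fully transparent (the hollow effect).
    """
    composite = []
    for alpha_row, color_row in zip(alpha_part, color_part):
        out_row = []
        for alpha, (r, g, b) in zip(alpha_row, color_row):
            if alpha > 0:                      # first target pixel point
                out_row.append((r * alpha // 255,
                                g * alpha // 255,
                                b * alpha // 255, alpha))
            else:                              # not displayed
                out_row.append((0, 0, 0, 0))
        composite.append(out_row)
    return composite

alpha_part = [[255, 0]]
color_part = [[(10, 20, 30), (40, 50, 60)]]
assert generate_composite_frame(alpha_part, color_part) == \
    [[(10, 20, 30, 255), (0, 0, 0, 0)]]
```

The synthesized pixel points correspond one-to-one with the second target pixel points, at the same arrangement positions, as the following paragraphs state.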
And step S206, rendering and displaying the corresponding composite frames in sequence according to the sequence of the frames in the material video.
For a specific process, refer to step S103, which is not described in detail in this embodiment.
After the terminal obtains the video material, the transparency information of the alpha channel needs to be recorded when the terminal parses the material; the recorded transparency information is stripped from the displayed output when the video is played, the frames are held in a buffer, and finally converted into textures using Metal and played.
Specifically, when the video is played, the transparency information recorded in the alpha channel of the left picture is consulted first: it indicates which pixel points need to be displayed and which do not (pixel points with a transparency of 0 need not be displayed). For each pixel point that needs to be displayed, the pixel point at the corresponding arrangement position is read from the RGB channels of the right picture; if found, that pixel point of the right picture is copied and the synthesized pixel point is computed from it. The final playing effect is that a pixel point whose alpha-channel information is transparent is not displayed, giving the video its final transparent effect. In other words, the material video supports the alpha channel and the alpha information can be restored during playback, which improves the playing experience, greatly reduces the resource volume, and improves GPU performance.
To sum up, the video playing method of the embodiment of the present application receives a compressed file of a material video sent by the server and decompresses it to obtain a material video whose frames each contain a first image portion and a second image portion. For any frame in the material video, the method determines, according to the transparency value carried by each first pixel point in the first image portion, the first target pixel points whose carried transparency values represent non-transparency, determines the corresponding second target pixel points from the second image portion, generates the synthesized frame from the second target pixel points, and renders and displays the corresponding synthesized frames in sequence according to the order of the frames in the material video. Therefore, the method can realize complex animations, such as those including spring effects, 3D deformation, light and shadow, and particle special effects, reduce development cost, greatly reduce the resource volume, improve GPU performance, restore the alpha channel of the pictures when the video is played, improve the playing experience, and solve platform-compatibility problems, performing well on different systems.
In order to implement the foregoing embodiments, the present application further provides a video playing apparatus.
Fig. 3 is a schematic structural diagram of a video playback device according to an embodiment of the present application.
As shown in fig. 3, a video playback device 300 according to an embodiment of the present application includes: a receiving module 310, an obtaining module 320, and a control module 330.
The receiving module 310 is configured to receive a material video sent by a server; each frame in the material video comprises a first image part and a second image part;
the obtaining module 320 is configured to perform image synthesis on each frame in the material video according to the transparency value carried by each first pixel point in the first image portion and the color value carried by each second pixel point in the second image portion, so as to obtain a synthesized frame corresponding to each frame in the material video;
and the control module 330 is configured to render and display the corresponding composite frames in sequence according to the sequence of the frames in the material video.
In a possible implementation manner of this embodiment of the present application, the obtaining module 320 includes:
the first determining unit is used for determining a first target pixel point from the first image part according to the transparency value carried by each first pixel point in the first image part for any frame in the material video; the transparency value carried by the first target pixel point represents non-transparency;
the second determining unit is used for determining a second target pixel point corresponding to the first target pixel point from the second image part;
the generating unit is used for generating a synthesized frame according to the second target pixel point;
the synthesized frame comprises a plurality of synthesized pixel points, each synthesized pixel point is provided with a corresponding second target pixel point, and the color value carried by the color channel of each synthesized pixel point is determined according to the color value carried by the color channel of the corresponding second target pixel point.
The synthetic pixel points correspond to the second target pixel points one by one; the arrangement position of each synthesized pixel point in the synthesized frame is the same as the arrangement position of each second target pixel point in the second image part.
Each frame of the material video comprises a first image part and a second image part with the same picture size; each first pixel point in the first image part corresponds to each second pixel point in the second image part one by one.
In a possible implementation manner of this embodiment of the present application, the receiving module 310 includes:
the receiving unit is used for receiving a compressed file of the material video sent by the server;
the acquisition unit is used for decompressing the compressed file to obtain a material video; the pixel points of each frame in the material video comprise a transparency channel for carrying a transparency value and a color channel for carrying a color value.
It should be noted that the explanation of the foregoing embodiment of the video playing method is also applicable to the video playing apparatus of this embodiment, and therefore, the description thereof is omitted here.
According to the video playing device in the embodiment of the application, a receiving module receives a material video which is sent by a server and contains a first image part and a second image part, an obtaining module carries out image synthesis on each frame in the material video according to transparency values carried by each first pixel point in the first image part and color values carried by each second pixel point in the second image part to obtain a synthesized frame corresponding to each frame in the material video, and a control module renders and displays the corresponding synthesized frame in sequence according to the sequence of each frame in the material video. Therefore, the device can greatly reduce the resource volume, improve the performance of the GPU, restore the alpha channel of the picture when playing the video, improve the playing experience, solve the problem of platform compatibility and have good performance on different systems. In addition, some complex animations such as spring effect, 3D deformation, light shadow, particle special effect and the like can be realized, and development cost is reduced.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 4, the device 400 includes a computing unit 401 that can perform various appropriate actions and processes in accordance with a computer program stored in a ROM (Read-Only Memory) 402 or a computer program loaded from a storage unit 408 into a RAM (Random Access Memory) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An I/O (Input/Output) interface 405 is also connected to the bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing Unit 401 include, but are not limited to, a CPU (Central Processing Unit), a GPU (graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing Units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable Processor, controller, microcontroller, and the like. The calculation unit 401 executes the respective methods and processes described above, such as a video playback method. For example, in some embodiments, the video playback method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the video playback method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the video playback method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application-Specific Standard Products), SOCs (Systems On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), internet, and blockchain Network.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or a cloud host), which is a host product in the cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A video playback method, comprising:
receiving a material video sent by a server; each frame in the material video comprises a first image part and a second image part;
performing image synthesis on each frame in the material video according to the transparency value carried by each first pixel point in the first image part and the color value carried by each second pixel point in the second image part to obtain a synthesized frame corresponding to each frame in the material video;
and sequentially rendering and displaying the corresponding synthesized frames according to the order of the frames in the material video.
2. The video playing method according to claim 1, wherein the performing image synthesis according to the transparency value carried by each first pixel point in the first image part and the color value carried by each second pixel point in the second image part comprises:
for any frame in the material video, determining a first target pixel point from the first image part according to the transparency value carried by each first pixel point in the first image part; the transparency value carried by the first target pixel point represents non-transparency;
determining a second target pixel point corresponding to the first target pixel point from the second image part;
generating the synthesized frame according to the second target pixel point;
the synthesized frame comprises a plurality of synthesized pixel points, each synthesized pixel point is provided with a corresponding second target pixel point, and the color value carried by the color channel of each synthesized pixel point is determined according to the color value carried by the color channel of the corresponding second target pixel point.
3. The video playing method according to claim 2, wherein the synthesized pixel points correspond to the second target pixel points one to one; and the arrangement position of each synthesized pixel point in the synthesized frame is the same as the arrangement position of the corresponding second target pixel point in the second image part.
4. The video playing method according to any one of claims 1 to 3, wherein each frame of the material video contains a first image part and a second image part of the same picture size; and the first pixel points in the first image part correspond one to one to the second pixel points in the second image part.
5. The video playing method according to claim 1, wherein the receiving the material video sent by the server comprises:
receiving a compressed file of the material video sent by the server;
decompressing the compressed file to obtain the material video, wherein the pixel points of each frame in the material video comprise a transparency channel for carrying a transparency value and a color channel for carrying a color value.
6. A video playing device, comprising:
the receiving module is used for receiving the material video sent by the server; each frame in the material video comprises a first image part and a second image part;
an obtaining module, configured to perform image synthesis on each frame in the material video according to a transparency value carried by each first pixel in the first image portion and a color value carried by each second pixel in the second image portion, so as to obtain a synthesized frame corresponding to each frame in the material video;
and the control module is used for sequentially rendering and displaying the corresponding synthesized frames according to the order of the frames in the material video.
7. The video playing device according to claim 6, wherein the obtaining module comprises:
a first determining unit, configured to determine, for any frame in the material video, a first target pixel point from the first image portion according to a transparency value carried by each first pixel point in the first image portion; the transparency value carried by the first target pixel point represents non-transparency;
a second determining unit, configured to determine a second target pixel point corresponding to the first target pixel point from the second image portion;
the generating unit is used for generating the synthesized frame according to the second target pixel point;
the synthesized frame comprises a plurality of synthesized pixel points, each synthesized pixel point is provided with a corresponding second target pixel point, and the color value carried by the color channel of each synthesized pixel point is determined according to the color value carried by the color channel of the corresponding second target pixel point.
8. The video playing device according to claim 7, wherein the synthesized pixel points correspond to the second target pixel points one to one; and the arrangement position of each synthesized pixel point in the synthesized frame is the same as the arrangement position of the corresponding second target pixel point in the second image portion.
9. The video playing device according to any one of claims 6 to 8, wherein each frame of the material video contains a first image portion and a second image portion of the same picture size; and the first pixel points in the first image portion correspond one to one to the second pixel points in the second image portion.
10. The video playing device of claim 6, wherein the receiving module comprises:
the receiving unit is used for receiving the compressed file of the material video sent by the server;
the acquisition unit is used for decompressing the compressed file to obtain the material video, wherein the pixel points of each frame in the material video comprise a transparency channel for carrying a transparency value and a color channel for carrying a color value.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.
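Taken together, claims 1–5 describe splitting each material-video frame into a transparency ("first") image part and a color ("second") image part, compositing them pixel for pixel into a synthesized frame, and rendering the synthesized frames in order. The compositing step can be sketched as follows; note that the side-by-side frame layout and the use of the first part's red channel as the alpha value are illustrative assumptions, not details fixed by the claims.

```python
import numpy as np

def synthesize_frame(frame: np.ndarray) -> np.ndarray:
    """Composite one material-video frame into an RGBA synthesized frame.

    Assumes (for illustration) that `frame` is an H x 2W x 3 RGB array:
    the left half is the first image part, whose red channel carries the
    per-pixel transparency value, and the right half is the second image
    part, which carries the color values. The claims only require that
    the two parts have the same picture size and correspond pixel for
    pixel; the side-by-side layout is an assumption.
    """
    height, double_width, _ = frame.shape
    width = double_width // 2
    first_part = frame[:, :width]    # transparency values (first pixel points)
    second_part = frame[:, width:]   # color values (second pixel points)

    # Transparency value carried by each first pixel point: a non-zero
    # value represents non-transparency, so the corresponding second
    # pixel point becomes a second target pixel point.
    alpha = first_part[:, :, 0]

    # Each synthesized pixel point takes its color channels from the
    # corresponding second target pixel point and its alpha channel
    # from the first image part, preserving arrangement positions.
    return np.dstack([second_part, alpha])

# Rendering step of claim 1: synthesize and display frames in the order
# they appear in the material video (display_rgba is a hypothetical
# renderer, not part of the patent):
# for frame in material_video_frames:
#     display_rgba(synthesize_frame(frame))
```
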
CN202011522819.6A 2020-12-21 2020-12-21 Video playing method, video playing device, electronic equipment and storage medium Active CN112714357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011522819.6A CN112714357B (en) 2020-12-21 2020-12-21 Video playing method, video playing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011522819.6A CN112714357B (en) 2020-12-21 2020-12-21 Video playing method, video playing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112714357A true CN112714357A (en) 2021-04-27
CN112714357B CN112714357B (en) 2023-10-13

Family

ID=75544955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011522819.6A Active CN112714357B (en) 2020-12-21 2020-12-21 Video playing method, video playing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112714357B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010052906A1 (en) * 2000-02-29 2001-12-20 Yu-Ling Chen Alpha-channel compositing system
EP1556835A1 (en) * 2002-10-30 2005-07-27 Canon Kabushiki Kaisha Method of background colour removal for porter and duff compositing
WO2013108493A1 (en) * 2012-01-17 2013-07-25 本田技研工業株式会社 Image processing device
US20170352171A1 (en) * 2016-06-01 2017-12-07 Adobe Systems Incorporated Coverage based Approach to Image Rendering using Opacity Values
US20180084292A1 (en) * 2016-09-18 2018-03-22 Shanghai Hode Information Technology Co.,Ltd. Web-based live broadcast
CN109272565A (en) * 2017-07-18 2019-01-25 腾讯科技(深圳)有限公司 Animation playing method, device, storage medium and terminal
CN107818548A (en) * 2017-10-27 2018-03-20 上海京颐科技股份有限公司 A kind of image processing method, device, equipment and computer-readable medium
CN108295467A (en) * 2018-02-06 2018-07-20 网易(杭州)网络有限公司 Rendering method, device and the storage medium of image, processor and terminal
CN112070863A (en) * 2019-06-11 2020-12-11 腾讯科技(深圳)有限公司 Animation file processing method and device, computer readable storage medium and computer equipment
CN111968214A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume cloud rendering method and device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114173157A (en) * 2021-12-10 2022-03-11 广州博冠信息科技有限公司 Video stream transmission method and device, electronic equipment and storage medium
CN114679620A (en) * 2022-03-25 2022-06-28 湖南快乐阳光互动娱乐传媒有限公司 Video playing method and device with alpha channel and electronic equipment
CN114915839A (en) * 2022-04-07 2022-08-16 广州方硅信息技术有限公司 Rendering processing method for inserting video support elements, electronic terminal and storage medium
CN114915839B (en) * 2022-04-07 2024-04-16 广州方硅信息技术有限公司 Rendering processing method for inserting video support element, electronic terminal and storage medium

Also Published As

Publication number Publication date
CN112714357B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
US10110936B2 (en) Web-based live broadcast
CN112714357B (en) Video playing method, video playing device, electronic equipment and storage medium
RU2677584C1 (en) Exploiting frame to frame coherency in architecture of image construction with primitives sorting at intermediate stage
US20200007602A1 (en) Remote desktop video streaming alpha-channel
US10440360B2 (en) Video processing system
CN110989878B (en) Animation display method and device in applet, electronic equipment and storage medium
US11882297B2 (en) Image rendering and coding method and related apparatus
CN113946402A (en) Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
US9153201B2 (en) Real-time order-independent transparent rendering
US9721359B2 (en) Apparatus and method of decompressing rendering data and recording medium thereof
CN113655975B (en) Image display method, image display device, electronic apparatus, and medium
CN110782387B (en) Image processing method and device, image processor and electronic equipment
CN111614906B (en) Image preprocessing method and device, electronic equipment and storage medium
CN115988265A (en) Rendering method and device of display picture and terminal equipment
CN112991412B (en) Liquid crystal instrument sequence frame animation performance optimization method and liquid crystal instrument
CN113691835B (en) Video implantation method, device, equipment and computer readable storage medium
CN115861510A (en) Object rendering method, device, electronic equipment, storage medium and program product
CN114760526A (en) Video rendering method and device, electronic equipment and storage medium
CN114882149A (en) Animation rendering method and device, electronic equipment and storage medium
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN112565869A (en) Window fusion method, device and equipment for video redirection
CN113160377B (en) Method, apparatus, device and storage medium for processing image
CN113627363A (en) Video file processing method, device, equipment and storage medium
CN114996491A (en) Method and system for optimizing display performance of all-liquid-crystal instrument
CN113691866A (en) Video processing method, video processing device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant