CN115022713A - Video data processing method and device, storage medium and electronic equipment - Google Patents

Video data processing method and device, storage medium and electronic equipment

Info

Publication number
CN115022713A
Authority
CN
China
Prior art keywords
video
image
played
frame
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210581162.3A
Other languages
Chinese (zh)
Inventor
常川
朱海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202210581162.3A priority Critical patent/CN115022713A/en
Publication of CN115022713A publication Critical patent/CN115022713A/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
        • H04 — ELECTRIC COMMUNICATION TECHNIQUE
            • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
                            • H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                                • H04N21/44016 — involving splicing one content stream with another content stream, e.g. for substituting a video clip
                                • H04N21/4402 — involving reformatting operations of video signals for household redistribution, storage or real-time display
                                    • H04N21/440281 — by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a video data processing method and device, a storage medium and electronic equipment. The method comprises the following steps: acquiring a video to be played; performing frame extraction on the video to be played to obtain a plurality of video images; and, for each video image, acquiring a processed image containing the transparency information of that video image, synthesizing the video image and the processed image to obtain a composite image, and playing the composite image. Frame extraction yields a plurality of video images, and each video image is processed to obtain a processed image containing transparency information; each video image is then synthesized with its corresponding processed image to obtain composite images that carry transparency information. By rendering and displaying these composite images, a video with a transparency effect is played, providing users with a high-quality video playing service and improving the user experience.

Description

Video data processing method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a method and an apparatus for processing video data, a storage medium, and an electronic device.
Background
Video with a transparency effect can be widely applied in scenes such as video conferencing and live streaming, opening up richer possibilities for content presentation in those scenes. Such video greatly improves the flexibility of front-end display and produces a vivid visual effect: background pictures can be replaced at will, special effects can be layered with the video, and multiple videos can be superimposed. After superimposition the background looks very natural, with no sense of incongruity, so striking display effects are easy to achieve.
At present, video encoded with the H.264 or H.265 standard is generally used, and such video has no transparency effect when played, which gives users of H.264 or H.265 video a poor experience.
Disclosure of Invention
In view of this, the present invention provides a video data processing method and apparatus, a storage medium, and an electronic device. After processing a video without a transparency effect, the method provided by the present invention can play the video with a transparency effect and display it to the user, thereby providing better service for the user and improving the user experience.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
the first aspect of the present invention discloses a video data processing method, which includes:
acquiring a video to be played;
performing frame extraction processing on the video to be played to obtain a plurality of video images;
and for each video image, acquiring a processed image containing transparency information of the video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image.
In the above method, preferably, the frame extraction processing on the video to be played to obtain a plurality of video images includes:
determining a frame extraction frequency based on the frame rate of the video to be played;
extracting video frames from the video to be played based on the frame extraction frequency to obtain each extracted video frame;
determining whether the video to be played has the key video frames which are not extracted;
if the video to be played has the key video frames which are not extracted, determining each extracted video frame and each key video frame which is not extracted in the video to be played as a video image;
and if the video to be played does not have the key video frames which are not extracted, determining all the extracted video frames as video images.
In the above method, preferably, the acquiring a processed image including transparency information of the video image includes:
acquiring a copy image of the video image;
extracting RGB information of each pixel point in the copied image;
processing the RGB information of each pixel point in the copied image to obtain the transparency of each pixel point;
and obtaining a processed image of the video image based on the transparency of each pixel point in the copied image.
In the above method, it is preferable that the synthesizing the video image and the processed image to obtain a synthesized image includes:
and for each pixel point in the video image, determining a target pixel point corresponding to the pixel point in the processed image, and adding the transparency of the target pixel point to the RGB information of the pixel point to obtain a synthetic image.
In the above method, preferably, the playing the composite image includes:
rendering the composite image in a preset browser so as to play the composite image.
A second aspect of the present invention discloses a video data processing apparatus, including:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a video to be played;
the frame extraction processing unit is used for carrying out frame extraction processing on the video to be played to obtain a plurality of video images;
and the synthesizing unit is used for acquiring a processed image containing transparency information of the video image for each video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image.
The above apparatus, optionally, the frame extraction processing unit includes:
the first determining subunit is used for determining the frame extraction frequency based on the frame rate of the video to be played;
the extraction subunit is used for extracting video frames from the video to be played based on the frame extraction frequency to obtain each extracted video frame;
the second determining subunit is used for determining whether the video to be played has the key video frames which are not extracted;
a third determining subunit, configured to determine, if there are key video frames that are not extracted in the video to be played, each extracted video frame and each key video frame that is not extracted in the video to be played as a video image;
and the fourth determining subunit is configured to determine, if there is no unextracted key video frame in the video to be played, each extracted video frame as a video image.
The above apparatus, optionally, the synthesis unit includes:
an acquisition subunit configured to acquire a copy image of the video image;
the extraction subunit is used for extracting the RGB information of each pixel point in the copied image;
the first obtaining subunit is used for processing the RGB information of each pixel point in the copied image to obtain the transparency of each pixel point;
and the second obtaining subunit is used for obtaining the processed image of the video image based on the transparency of each pixel point in the copied image.
The above apparatus, optionally, the synthesis unit includes:
and the adding subunit is used for determining a target pixel point corresponding to each pixel point in the processed image for each pixel point in the video image, and adding the transparency of the target pixel point into the RGB information of the pixel point to obtain a synthesized image.
The above apparatus, optionally, the synthesis unit includes:
and the rendering subunit is used for rendering the synthetic image in a preset browser so as to play the synthetic image.
A third aspect of the present invention discloses a storage medium, which includes stored instructions, wherein when the instructions are executed, a device on which the storage medium is located is controlled to execute the video data processing method described above.
In a fourth aspect, the present invention discloses an electronic device comprising a memory, one or more processors, and one or more instructions, wherein the one or more instructions are stored in the memory and configured to be executed by the one or more processors to perform the video data processing method as described above.
Compared with the prior art, the invention has the following advantages:
the invention provides a video data processing method and device, a storage medium and electronic equipment, wherein the method comprises the following steps: acquiring a video to be played; performing frame extraction processing on a video to be played to obtain a plurality of video images; and for each video image, acquiring a processed image containing the transparency information of the video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image. Obtaining a plurality of video images by performing frame extraction on a video to be played, and processing each video image to obtain a processed image containing transparency information; each video image and the corresponding processing image are synthesized to obtain a plurality of synthesized images, the synthesized images are images with transparency information, and the synthesized images are rendered and displayed to play videos with transparent effects, so that high-quality video playing services are provided for users, and user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for performing frame extraction processing on a video to be played to obtain a plurality of video images according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for obtaining a processed image of a video image according to an embodiment of the present invention;
FIG. 4 is an exemplary diagram of a video image provided by an embodiment of the present invention;
FIG. 5 is an exemplary diagram of a processed image of a video image provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Technical terms that may be involved in the present invention are explained as follows:
YUV: the color coding method is to compile the type of the true-color space (color space), where "Y" represents the brightness (Luma or Luma), i.e. the gray scale value, and "U" and "V" represent the Chroma (Chroma or Chroma) for describing the color and saturation of the image, and is used to specify the color of the pixel.
Alpha channel: refers to the transparency and translucency of a picture.
HEVC: the abbreviation of high Efficiency Video Coding, which we often say H265 Coding, No. 26/1/2013, HEVC formally becomes an international standard.
SVAC: a Chinese technical standard for security-surveillance digital video and audio coding and decoding, jointly established by the First Research Institute of the Ministry of Public Security and Vimicro; more than 40 research institutes, universities and security companies have also contributed to the standard.
WebM: an open, free media file format proposed by Google. The WebM format is a new container format developed on the basis of the Matroska (i.e. MKV) container format, and holds VP8 video tracks and Ogg Vorbis audio tracks.
Video tag: a built-in component provided by the browser for video playback.
Canvas tag: a built-in component provided by the browser for dynamically generating pictures under program control.
The invention is operational with numerous general-purpose or special-purpose computing environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multi-processor apparatus, and distributed computing environments that include any of the above devices or equipment. The execution subject of the invention is a server or a processor, which executes the video data processing method provided by the embodiments of the invention.
Referring to fig. 1, a flowchart of a method for processing video data according to an embodiment of the present invention is specifically described as follows:
s101: and acquiring a video to be played.
The video to be played is a video in an encoding format such as H.264, H.265, VP8 or VP9.
When the video to be played is acquired, a user may input a video playing instruction to the browser, so as to acquire the video to be played according to the video playing instruction.
S102: and performing frame extraction processing on the video to be played to obtain a plurality of video images.
The video to be played is composed of a plurality of video frames, and the extracted video frames are used as the video images. Alternatively, a screenshot operation can be performed on the video to be played, and the captured images used as the video images.
To further explain the process of performing frame extraction on a video to be played to obtain a plurality of video images, the present invention provides the method flowchart shown in fig. 2, described in detail as follows:
s201: and determining the frame extraction frequency based on the frame rate of the video to be played.
It should be noted that the frame rates of the videos to be played are different, for example, 30 frames/second, 90 frames/second, or 120 frames/second.
Different frame rates correspond to different frame extraction frequencies; for example, when the frame rate of the video to be played is 30 frames/second, the corresponding frame extraction frequency may be 24 frames/second; for another example, when the frame rate of the video to be played is 90 frames/second, the corresponding frame extraction frequency may be 50 frames/second.
When determining the frame extraction frequency from the frame rate of the video to be played, each preset frame rate in a preset frame rate table may be traversed, and the preset frame extraction frequency corresponding to the preset frame rate that matches the frame rate of the video to be played is used as the frame extraction frequency.
The preset frame rate table comprises a plurality of groups of data, each group comprising a preset frame rate and a preset frame extraction frequency, where the preset frame extraction frequency is less than or equal to the preset frame rate; it should be noted that different preset frame rates may share the same preset frame extraction frequency.
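The table lookup described above can be sketched as follows. The concrete table entries and function names are illustrative assumptions, not taken from the patent; only the 30→24 and 90→50 pairings appear in the text.

```javascript
// Hypothetical preset frame rate table: each group of data maps a preset
// frame rate (fps) to a preset frame extraction frequency (frames/second).
const PRESET_RATE_TABLE = [
  { frameRate: 30, extractionFrequency: 24 },
  { frameRate: 90, extractionFrequency: 50 },
  { frameRate: 120, extractionFrequency: 60 }, // illustrative entry
];

// Traverse the preset table and return the extraction frequency whose
// preset frame rate matches the frame rate of the video to be played.
function lookupExtractionFrequency(videoFrameRate, table = PRESET_RATE_TABLE) {
  const entry = table.find((row) => row.frameRate === videoFrameRate);
  if (!entry) {
    throw new Error(`No preset extraction frequency for ${videoFrameRate} fps`);
  }
  return entry.extractionFrequency;
}
```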
S202: and extracting video frames from the video to be played based on the frame extraction frequency to obtain each extracted video frame.
When extracting video frames from the video to be played based on the frame extraction frequency, the frames may be extracted according to a preset frame extraction rule. For example, when the frame extraction frequency is 24 frames/second, then among the video frames of each second, the first 24 video frames of that second may all be used as extracted video frames, or one frame may be extracted every other frame. The frame extraction rules here are merely exemplary illustrations of the present invention, which includes but is not limited to the specific rules illustrated.
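The two example rules above can be sketched over one second of video, assuming frames are indexed 0..frameRate-1 within that second; both rules are illustrative only, as the patent leaves the concrete rule open.

```javascript
// Rule A: take the first `frequency` frame indices of the second.
function extractFirstN(frameRate, frequency) {
  const indices = [];
  for (let i = 0; i < Math.min(frequency, frameRate); i++) indices.push(i);
  return indices;
}

// Rule B: take every other frame until `frequency` frames are collected.
function extractEveryOther(frameRate, frequency) {
  const indices = [];
  for (let i = 0; i < frameRate && indices.length < frequency; i += 2) {
    indices.push(i);
  }
  return indices;
}
```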
S203: determining whether the video to be played has the key video frames which are not extracted; if the video to be played has the key video frames which are not extracted, executing S204; if the video to be played does not have the key video frames which are not extracted, S205 is executed.
After frames have been extracted from the video to be played based on the frame extraction frequency, it is determined whether the video to be played contains key video frames that were not extracted. The specific process may be: obtain the frame type identifiers of all remaining video frames in the video to be played; if any identifier marks a frame as a key frame, it is determined that the video to be played has un-extracted key video frames; if no identifier marks a frame as a key frame, it is determined that the video to be played has no un-extracted key video frames.
S204: and determining each extracted video frame and each key video frame which is not extracted in the video to be played as a video image.
S205: and determining each extracted video frame as a video image.
In the method provided by the embodiment of the invention, when a plurality of video images are acquired from the video to be played, frame extraction processing is carried out on the video to be played based on the frame extraction frequency to obtain each extracted video frame, and when the key video frames which are not extracted exist in the video to be played, each key video frame which is not extracted and each extracted video frame are determined as the video images, so that the key video frames of the video to be played can be prevented from being omitted, and the subsequent display effect can be improved.
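The key-frame safeguard described above can be sketched as follows. The frame representation `{ index, isKeyFrame }` and the function name are assumptions introduced for illustration; the point is that un-extracted key frames are added back to the set of video images.

```javascript
// After rule-based extraction, add back any key frames that the rule
// skipped, so no key video frame of the video to be played is omitted.
function selectVideoImages(frames, extractedIndices) {
  const extracted = new Set(extractedIndices);
  const missedKeyFrames = frames
    .filter((f) => f.isKeyFrame && !extracted.has(f.index))
    .map((f) => f.index);
  // Extracted frames plus un-extracted key frames, in display order.
  return [...extracted, ...missedKeyFrames].sort((x, y) => x - y);
}
```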
It should be noted that, in the process of obtaining a plurality of video images from the video to be played, a screenshot technique may also be used: images are captured from the video to be played at a preset screenshot frequency, and each captured image is determined as a video image. The screenshot frequency can be set according to actual needs and is usually set to 24 frames/second. Further, when taking screenshots of the video to be played, the browser's native Canvas tag can be used.
S103: and for each video image, acquiring a processed image containing the transparency information of the video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image.
In the method provided by the embodiment of the invention, each acquired video image is processed to obtain a composite image of each video image.
Referring to fig. 3, a flowchart of a method for obtaining a processed image of a video image according to an embodiment of the present invention is specifically described as follows:
s301: a duplicate image of the video image is acquired.
The copied image is an image obtained by copying a video image, and referring to fig. 4, an exemplary diagram of a video image according to an embodiment of the present invention is provided, where each pixel in the diagram includes RGB information.
S302: and extracting the RGB information of each pixel point in the copied image.
The RGB information is the color information of the red, green and blue channels of the pixel point.
S303: and processing the RGB information of each pixel point in the copied image to obtain the transparency of each pixel point.
It should be noted that the transparency of the pixel points may be an Alpha value, and after the RGB information of each pixel point is processed, the Alpha value of each pixel point is obtained.
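The patent does not fix a formula for deriving the Alpha value from a pixel's RGB information. One common choice, used here purely as an illustrative assumption, is to map the pixel's luminance (Rec. 601 luma weights) to an Alpha value in 0-255, so brighter pixels become more opaque.

```javascript
// Derive an Alpha value from one pixel's RGB information.
// The luminance-based rule is an assumption, not the patent's formula.
function alphaFromRgb(r, g, b) {
  return Math.round(0.299 * r + 0.587 * g + 0.114 * b);
}

// Build the "processed image": one Alpha value per pixel of the copy image.
// `rgb` is a flat array [r0, g0, b0, r1, g1, b1, ...].
function processedImageFromRgb(rgb) {
  const alphas = new Uint8ClampedArray(rgb.length / 3);
  for (let i = 0; i < alphas.length; i++) {
    alphas[i] = alphaFromRgb(rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]);
  }
  return alphas;
}
```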
S304: and obtaining a processed image of the video image based on the transparency of each pixel point in the copied image.
Further, the processed image is composed of the transparency of each pixel point in the copied image, and the processed image at this time contains the transparency information of the video image, and preferably, the transparency information is the transparency of each pixel point. Referring to fig. 5, an exemplary diagram of a processed image of a video image according to an embodiment of the present invention is provided, where each pixel point in the diagram corresponds to each pixel point in the video image in a one-to-one manner.
In the method provided by the embodiment of the invention, after the processed image of the video image is obtained, for each pixel point in the video image a target pixel point corresponding to that pixel point is determined in the processed image, and the transparency of the target pixel point is added to the RGB information of the pixel point, thereby obtaining a composite image. It should be noted that each pixel point in the video image determines its target pixel point among the pixel points of the processed image by position: for example, for the pixel point located in the first column and second row of the video image, the target pixel point is the pixel point in the first column and second row of the processed image.
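The compositing step can be sketched as follows: the pixel at the same position in the processed image supplies the Alpha value that is appended to each pixel's RGB information, producing RGBA data of the kind a Canvas `ImageData` buffer expects. The flat-array layout is an assumption made for illustration.

```javascript
// Synthesize the video image (flat RGB array) with its processed image
// (one Alpha value per pixel, same row/column order) into an RGBA buffer.
function composite(rgb, alphas) {
  const pixelCount = rgb.length / 3;
  const rgba = new Uint8ClampedArray(pixelCount * 4);
  for (let i = 0; i < pixelCount; i++) {
    rgba[4 * i] = rgb[3 * i];          // R
    rgba[4 * i + 1] = rgb[3 * i + 1];  // G
    rgba[4 * i + 2] = rgb[3 * i + 2];  // B
    rgba[4 * i + 3] = alphas[i];       // target pixel point's transparency
  }
  return rgba;
}
```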
By adding transparency into each pixel point of the video image, a composite image containing transparency information can be obtained. In the method provided by the embodiment of the invention, after the synthetic image is obtained, the synthetic image can be rendered in a browser to display the synthetic image. The video images are continuously processed, and the obtained composite images are subjected to high-frequency rendering and displaying, so that the video is played, and the played video has a transparent effect.
In the method provided by the embodiment of the invention, a video to be played is obtained; performing frame extraction processing on a video to be played to obtain a plurality of video images; and for each video image, acquiring a processed image containing the transparency information of the video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image. Obtaining a plurality of video images by performing frame extraction on a video to be played, and processing each video image to obtain a processed image containing transparency information; each video image and the corresponding processing image are synthesized to obtain a plurality of synthesized images, the synthesized images are images with transparency information, and the videos with the transparency effect are played by rendering and displaying the synthesized images, so that high-quality video playing service is provided for users, and user experience is improved.
To further illustrate the implementation process of the present invention in practical application, the present invention is illustrated by a scenario example, the overall implementation process of the present invention is not limited to the content of the illustrated example, and the specific content of the example is as follows:
a) Play the specifically encoded video using the browser's native Video tag, with sound playing but without displaying the picture.
b) Video images of Video are intercepted at high frequency using the browser native Canvas tag.
c) Processing the intercepted video image to obtain two images, wherein one image is an original image of the video image, and specifically refer to fig. 4; the other diagram is a diagram containing Alpha information, i.e., a processed image of a video image, and specifically, refer to fig. 5.
d) Aiming at the video image and the processed image, RGB information is extracted from each pixel point in the video image, and the Alpha value of each pixel point in the video image is modified based on the transparency of each pixel point in the processed image, so that a synthetic image for realizing the transparent effect is generated. The content of this section is substantially a combination of the video image and the processed image.
e) The continuously generated composite pictures are drawn into a new Canvas in the browser at high frequency, thereby achieving the video playing effect.
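Steps a) through e) can be sketched as a browser-side loop. This is a minimal sketch under assumptions: the element wiring, the use of `requestAnimationFrame`, and the injected `alphaFromRgbPixel` rule all stand in for details the patent leaves open. The pure pixel step is factored out so it is independent of the DOM.

```javascript
// Pure core of steps c)+d): overwrite the A channel of an RGBA buffer
// using a rule derived from each pixel's RGB values.
function applyAlpha(rgbaData, alphaFromRgbPixel) {
  for (let i = 0; i < rgbaData.length; i += 4) {
    rgbaData[i + 3] = alphaFromRgbPixel(rgbaData[i], rgbaData[i + 1], rgbaData[i + 2]);
  }
  return rgbaData;
}

// Browser wiring for steps a), b) and e): a hidden <video> plays the
// stream, a capture <canvas> grabs frames, and composited frames are
// drawn to an output <canvas>. Runs only in a browser environment.
function playWithTransparency(video, captureCanvas, outputCanvas, alphaFromRgbPixel) {
  const capCtx = captureCanvas.getContext('2d');
  const outCtx = outputCanvas.getContext('2d');
  function drawFrame() {
    // b) capture the current video frame at high frequency
    capCtx.drawImage(video, 0, 0, captureCanvas.width, captureCanvas.height);
    const frame = capCtx.getImageData(0, 0, captureCanvas.width, captureCanvas.height);
    // c)+d) derive Alpha from each pixel's RGB and write it back
    applyAlpha(frame.data, alphaFromRgbPixel);
    // e) draw the composited picture into the output Canvas
    outCtx.putImageData(frame, 0, 0);
    if (!video.paused && !video.ended) requestAnimationFrame(drawFrame);
  }
  video.addEventListener('play', () => requestAnimationFrame(drawFrame));
}
```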
It should be noted that the approach provided by the embodiments of the present invention is applicable to a variety of browsers, for example IE9, IE10, Chrome 21 and Chrome 27, which the present invention does not illustrate one by one. By processing the frames of a video that does not support transparency, the invention extracts RGB information and transparency (Alpha) information from each video frame, resynthesizes a picture that supports transparency, and repeats this process at high speed, thereby realizing a browser-side transparent video playing method free from the constraints of the video coding standard, so that transparent video can be played in the browser under commonly applied mainstream coding standards such as H.264 and H.265.
Corresponding to the method shown in fig. 1, an embodiment of the present invention further provides a video data processing apparatus, which is used to support the implementation of the method shown in fig. 1 in practical applications, and the apparatus may be disposed in an intelligent device.
Referring to fig. 6, a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention is specifically described as follows:
an obtaining unit 601, configured to obtain a video to be played;
a frame extracting processing unit 602, configured to perform frame extracting processing on the video to be played to obtain multiple video images;
a synthesizing unit 603, configured to obtain, for each video image, a processed image including transparency information of the video image, synthesize the video image and the processed image to obtain a synthesized image, and play the synthesized image.
The apparatus provided by the embodiment of the invention obtains a video to be played; performs frame extraction processing on it to obtain a plurality of video images; and, for each video image, obtains a processed image containing the transparency information of that video image, synthesizes the video image and the processed image to obtain a synthesized image, and plays the synthesized image. Because each synthesized image carries transparency information, rendering and displaying the synthesized images plays the video with a transparency effect, thereby providing users with a high-quality video playback service and improving the user experience.
In the apparatus provided in the embodiment of the present invention, the frame extraction processing unit 602 may include:
a first determining subunit, configured to determine a frame extraction frequency based on the frame rate of the video to be played;
an extraction subunit, configured to extract video frames from the video to be played based on the frame extraction frequency, to obtain the extracted video frames;
a second determining subunit, configured to determine whether the video to be played contains key video frames that have not been extracted;
a third determining subunit, configured to, if the video to be played contains key video frames that have not been extracted, determine each extracted video frame and each unextracted key video frame in the video to be played as a video image;
and a fourth determining subunit, configured to, if the video to be played contains no unextracted key video frames, determine each extracted video frame as a video image.
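As a concrete illustration of these subunits, the sketch below assumes one simple policy: the frame extraction frequency is a rounded fraction of the source frame rate, and any key frames missed by the regular stepping are added back afterwards. The patent does not specify an exact formula, so `extractionStep`, `selectFrameIndices`, and the key-frame index list are all hypothetical.

```javascript
// Choose an extraction step so that roughly `targetFps` frames per second
// are sampled from a video running at `sourceFps`.
function extractionStep(sourceFps, targetFps) {
  return Math.max(1, Math.round(sourceFps / targetFps));
}

// Return the indices of frames to use as video images: every `step`-th
// frame, plus any key frames that the stepping would otherwise skip.
function selectFrameIndices(totalFrames, step, keyFrameIndices) {
  const selected = new Set();
  for (let i = 0; i < totalFrames; i += step) selected.add(i);
  for (const k of keyFrameIndices) selected.add(k); // unextracted key frames
  return [...selected].sort((a, b) => a - b);
}
```

For a 30 fps source sampled at 15 fps, every second frame is taken, and a key frame at an odd index would still be included.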
In the apparatus provided in the embodiment of the present invention, the synthesizing unit 603 may include:
an acquisition subunit, configured to acquire a copied image of the video image;
an extraction subunit, configured to extract the RGB information of each pixel point in the copied image;
a first obtaining subunit, configured to process the RGB information of each pixel point in the copied image to obtain the transparency of each pixel point;
and a second obtaining subunit, configured to obtain a processed image of the video image based on the transparency of each pixel point in the copied image.
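The subunits above state that the RGB information of the copied image is processed to obtain per-pixel transparency, but they leave the concrete mapping open. The sketch below assumes, purely for illustration, a luminance-based mapping in which bright pixels become opaque and dark pixels transparent; `deriveAlphaFromRgb` is a hypothetical name, and a real implementation could use any RGB-to-Alpha rule.

```javascript
// Given an RGBA pixel array of the copied image, derive each pixel's
// transparency from its RGB values (here: Rec. 601 luma, an assumption).
function deriveAlphaFromRgb(copyData) {
  const out = new Uint8ClampedArray(copyData.length);
  for (let i = 0; i < copyData.length; i += 4) {
    const luma = 0.299 * copyData[i] + 0.587 * copyData[i + 1] + 0.114 * copyData[i + 2];
    out[i] = copyData[i];
    out[i + 1] = copyData[i + 1];
    out[i + 2] = copyData[i + 2];
    out[i + 3] = Math.round(luma); // transparency derived from RGB
  }
  return out;
}
```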
In the apparatus provided in the embodiment of the present invention, the synthesizing unit 603 may further include:
an adding subunit, configured to, for each pixel point in the video image, determine a target pixel point corresponding to the pixel point in the processed image, and add the transparency of the target pixel point to the RGB information of the pixel point, to obtain a synthesized image.
In the apparatus provided in the embodiment of the present invention, the synthesizing unit 603 may further include:
a rendering subunit, configured to render the synthesized image in a preset browser so as to play the synthesized image.
An embodiment of the present invention further provides a storage medium comprising stored instructions, wherein, when the instructions are executed, the device on which the storage medium resides is controlled to perform the video data processing method described above.
An electronic device according to the present invention is shown in fig. 7. It includes a memory 701 and one or more instructions 702, where the one or more instructions 702 are stored in the memory 701 and configured to be executed by one or more processors 703 to perform the video data processing method described above.
The specific implementation processes of the above embodiments, and derivatives thereof, fall within the scope of the present invention.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments, which are substantially similar to the method embodiments, are described in a relatively simple manner, and reference may be made to some descriptions of the method embodiments for relevant points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing video data, comprising:
acquiring a video to be played;
performing frame extraction processing on the video to be played to obtain a plurality of video images;
and for each video image, acquiring a processed image containing transparency information of the video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image.
2. The method according to claim 1, wherein performing frame extraction processing on the video to be played to obtain a plurality of video images comprises:
determining a frame extraction frequency based on the frame rate of the video to be played;
extracting video frames from the video to be played based on the frame extraction frequency to obtain each extracted video frame;
determining whether the video to be played contains key video frames that have not been extracted;
if the video to be played contains key video frames that have not been extracted, determining each extracted video frame and each unextracted key video frame in the video to be played as a video image;
and if the video to be played contains no unextracted key video frames, determining each extracted video frame as a video image.
3. The method of claim 1, wherein said obtaining a processed image containing transparency information of said video image comprises:
acquiring a copy image of the video image;
extracting RGB information of each pixel point in the copied image;
processing the RGB information of each pixel point in the copied image to obtain the transparency of each pixel point;
and obtaining a processed image of the video image based on the transparency of each pixel point in the copied image.
4. The method of claim 3, wherein said synthesizing the video image and the processed image to obtain a synthesized image comprises:
for each pixel point in the video image, determining a target pixel point corresponding to the pixel point in the processed image, and adding the transparency of the target pixel point to the RGB information of the pixel point, to obtain the synthesized image.
5. The method of claim 1, wherein said playing said composite image comprises:
rendering the composite image in a preset browser so as to play the composite image.
6. A video data processing apparatus, comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a video to be played;
the frame extracting processing unit is used for carrying out frame extracting processing on the video to be played to obtain a plurality of video images;
and the synthesizing unit is used for acquiring a processed image containing the transparency information of the video image for each video image, synthesizing the video image and the processed image to obtain a synthesized image, and playing the synthesized image.
7. The apparatus of claim 6, wherein the frame extraction processing unit comprises:
the first determining subunit is used for determining the frame extraction frequency based on the frame rate of the video to be played;
the extraction subunit is used for extracting video frames from the video to be played based on the frame extraction frequency to obtain each extracted video frame;
a second determining subunit, configured to determine whether the video to be played contains key video frames that have not been extracted;
a third determining subunit, configured to, if the video to be played contains key video frames that have not been extracted, determine each extracted video frame and each unextracted key video frame in the video to be played as a video image;
and a fourth determining subunit, configured to, if the video to be played contains no unextracted key video frames, determine each extracted video frame as a video image.
8. The apparatus of claim 6, wherein the synthesis unit comprises:
an acquisition subunit configured to acquire a copy image of the video image;
the extraction subunit is used for extracting the RGB information of each pixel point in the copied image;
the first obtaining subunit is used for processing the RGB information of each pixel point in the copied image to obtain the transparency of each pixel point;
and the second obtaining subunit is used for obtaining the processed image of the video image based on the transparency of each pixel point in the copied image.
9. A storage medium comprising stored instructions, wherein the instructions, when executed, control a device on which the storage medium resides to perform a video data processing method according to any one of claims 1 to 5.
10. An electronic device, comprising a memory, one or more processors, and one or more instructions, wherein the one or more instructions are stored in the memory and configured to be executed by the one or more processors to perform the video data processing method according to any one of claims 1-5.
CN202210581162.3A 2022-05-26 2022-05-26 Video data processing method and device, storage medium and electronic equipment Pending CN115022713A (en)

Publications (1)

Publication Number Publication Date
CN115022713A (zh) 2022-09-06
Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231834A (en) * 2011-06-27 2011-11-02 深圳市茁壮网络股份有限公司 Animated portable network graphics (APNG) file processing method and device for digital television system
WO2017113600A1 (en) * 2015-12-30 2017-07-06 深圳Tcl数字技术有限公司 Video playing method and device
WO2018184458A1 (en) * 2017-04-08 2018-10-11 腾讯科技(深圳)有限公司 Picture file processing method and device, and storage medium
CN113115097A (en) * 2021-03-30 2021-07-13 北京达佳互联信息技术有限公司 Video playing method and device, electronic equipment and storage medium
CN113423016A (en) * 2021-06-18 2021-09-21 北京爱奇艺科技有限公司 Video playing method, device, terminal and server



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination