CN109922360B - Video processing method, device and storage medium - Google Patents


Info

Publication number
CN109922360B
Authority
CN
China
Prior art keywords
screen buffer
video frame
drawing view
independent
original video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910173535.1A
Other languages
Chinese (zh)
Other versions
CN109922360A (en)
Inventor
夏海雄 (Xia Haixiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910173535.1A
Publication of CN109922360A
Application granted
Publication of CN109922360B
Legal status: Active
Anticipated expiration



Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of the invention disclose a video processing method, a video processing apparatus, and a storage medium. The method includes: rendering a first original video frame of a video to an independent screen buffer, where the first original video frame is obtained by decoding in a hardware decoding mode; determining a first drawing view for video frame presentation and a first screen buffer corresponding to the first drawing view; rendering the first original video frame to the first screen buffer based on the independent screen buffer; and displaying, through the first drawing view, the first original video frame in the rendered first screen buffer.

Description

Video processing method, device and storage medium
Technical Field
The present invention relates to media playing technologies, and in particular, to a video processing method, an apparatus, and a storage medium.
Background
After the video hardware-decoding Application Programming Interface (API) MediaCodec was introduced on mobile devices running the Android system, Android video players have mainly used MediaCodec for decoding. In the related art, however, when a video is played using MediaCodec hardware decoding, switching the playing scene or playing mode (for example, switching from the normal playing mode to a color-blindness playing mode that must be played on a different, linked page) causes a black screen or a pause, and the user experience is poor.
Disclosure of Invention
Embodiments of the present invention provide a video processing method, apparatus, and storage medium that can achieve seamless switching of a video across different playing scenes and playing modes while ensuring smooth video playback.
In a first aspect, an embodiment of the present invention provides a video processing method, where the method includes:
rendering a first original video frame of a video to an independent screen buffer; the first original video frame is obtained by decoding in a hardware decoding mode;
determining a first drawing view for video frame presentation and a first screen buffer corresponding to the first drawing view;
rendering the first original video frame to the first screen buffer based on the independent screen buffer;
displaying the first original video frame in the rendered first screen buffer through the first drawing view.
In another aspect, an embodiment of the present invention provides a video processing apparatus, where the apparatus includes:
a first rendering module to render a first original video frame of a video to an independent screen buffer; the first original video frame is obtained by decoding in a hardware decoding mode;
the device comprises a determining module, a display module and a display module, wherein the determining module is used for determining a first drawing view used for displaying a video frame and a first screen buffer area corresponding to the first drawing view;
a second rendering module to render the first original video frame to the first screen buffer based on the independent screen buffer;
and the display module is used for displaying the rendered first original video frame in the first screen buffer area through the first drawing view.
In another aspect, an embodiment of the present invention provides a video processing apparatus, where the apparatus includes:
a memory configured to hold a program for video processing;
and a processor configured to execute the program, wherein the program executes the video processing method provided by the embodiment of the invention.
In another aspect, an embodiment of the present invention provides a storage medium, which stores an executable program, and when the executable program is executed by a processor, the video processing method provided in the embodiment of the present invention is implemented.
The application of the embodiment of the invention has the following beneficial effects:
By applying the video processing method, apparatus, and storage medium of the embodiments of the invention, two screen buffers exist: the independent screen buffer and the first screen buffer. A video frame is therefore rendered twice before it is displayed by the drawing view. The first rendering is off-screen: after the compressed video frame data is decoded, the first original video frame is rendered to the independent screen buffer. The second rendering is on-screen: after the drawing view currently displaying video frames is determined, the first original video frame is fetched from the independent screen buffer and rendered to the first screen buffer. Because off-screen and on-screen rendering are independent of each other, even if switching of the playing scene or playing mode during playback changes the drawing view used to display video frames and its corresponding first screen buffer, the decoding of compressed video frames is unaffected, and original video frames obtained by hardware decoding continue to be rendered to the independent screen buffer. Seamless switching of the video across different playing scenes and playing modes is thus achieved, playback fluency is ensured, and the user experience is improved.
Drawings
FIG. 1 is a flow chart illustrating video processing in the related art;
FIG. 2 is a block diagram of a video processing system according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a hardware configuration of a video processing apparatus according to an embodiment of the present invention;
FIG. 4 is a first flowchart illustrating a video processing method according to an embodiment of the present invention;
FIG. 5 is a second flowchart illustrating a video processing method according to an embodiment of the present invention;
FIG. 6 is a third flowchart illustrating a video processing method according to an embodiment of the present invention;
FIG. 7 is a fourth flowchart illustrating a video processing method according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the examples provided herein are merely illustrative of the present invention and are not intended to limit the present invention. In addition, the following embodiments are provided as partial embodiments for implementing the present invention, not all embodiments for implementing the present invention, and the technical solutions described in the embodiments of the present invention may be implemented in any combination without conflict.
It should be noted that, in the embodiments of the present invention, the terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other related elements in the method or apparatus that includes it (for example, steps in a method or units in an apparatus, where a unit may be part of a circuit, part of a processor, part of a program or software, and so on).
For example, the video processing method provided by the embodiment of the present invention includes a series of steps, but the video processing method provided by the embodiment of the present invention is not limited to the described steps, and similarly, the video processing apparatus provided by the embodiment of the present invention includes a series of modules, but the apparatus provided by the embodiment of the present invention is not limited to include the explicitly described modules, and may further include a unit that is required to obtain related information or perform processing based on the information.
In the description that follows, the terms "first", "second", and the like are used only to distinguish similar objects and do not indicate a particular order. It should be understood that "first", "second", and the like may be interchanged in specific cases or sequences, so that the embodiments of the invention described herein can be practiced in orders other than those illustrated or described.
Before further detailed description of the embodiments of the present invention, terms and expressions mentioned in the embodiments of the present invention are explained, and the terms and expressions mentioned in the embodiments of the present invention are applied to the following explanations.
1) Surface: corresponds to a screen buffer. Each window corresponds to a Surface, and any view (View) is drawn on the canvas (Canvas) of the Surface. In Android, a Surface can be regarded as the place where graphics or images are drawn, and it is used to manage the data of the display content.
2) SurfaceView: a user interface (UI) control on the Android platform, used to present the data in a Surface and to control the position and size of the view. The Surface is created when the SurfaceView's window becomes visible and is destroyed when the window is hidden.
3) Raw buffer: used to store the pixel data of the current window.
4) MediaCodec: a hardware decoder on the Android platform that can decode complete compressed video frame data, for example in H.264 format.
5) I-frame: a key frame in video coding, i.e., a fully intra-coded frame. It is generated without reference to other pictures and can be decoded independently; a complete image can be reconstructed from the I-frame's data alone.
6) "In response to": indicates the condition or state on which an operation depends. When the condition or state is satisfied, the operation (or operations) may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
In some embodiments, referring to fig. 1, fig. 1 is a schematic flow chart of video processing in the related art. A player on a mobile device includes a video-audio separator (Demuxer) and a decoder (MediaCodec). In practical implementation, the Demuxer separates the compressed video data (e.g., H.264) from the compressed audio data (e.g., AAC) and sends the compressed video frames to MediaCodec. When MediaCodec is initialized, its parameters are configured so as to associate a Surface for rendering original video frames. After MediaCodec receives the compressed video frame data sent by the Demuxer, it performs hardware decoding to obtain the corresponding original video frames, renders them to the associated screen buffer (Surface), and the video is displayed through the drawing view (SurfaceView) to which that screen buffer belongs. With this approach, however, if the playing scene or playing mode is switched during playback, such as switching to a color-blindness mode, a Virtual Reality (VR) mode, an image-enhancement scene, or a super-resolution scene, the Surface used by MediaCodec must be updated during decoding, which restarts the decoder. Since MediaCodec must resume decoding from an I-frame, a black screen or pause occurs during the switch, and the user experience is poor.
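The related-art coupling above can be sketched in platform-free Java. Note this is an illustrative model, not the patent's code: `HardwareDecoder` stands in for MediaCodec, which is configured with a single output surface, and reconfiguring the surface restarts it so that it can only resume from the next I-frame.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the related-art pipeline: one decoder, one surface.
class Frame {
    final int index;
    final boolean keyFrame; // true for an I-frame
    Frame(int index, boolean keyFrame) { this.index = index; this.keyFrame = keyFrame; }
}

class HardwareDecoder { // stand-in for MediaCodec
    private String outputSurface;
    private boolean needsKeyFrame = false;

    void configure(String surface) {
        // Changing the surface restarts the decoder: it must wait for an I-frame.
        if (outputSurface != null && !surface.equals(outputSurface)) {
            needsKeyFrame = true;
        }
        outputSurface = surface;
    }

    /** Returns the surface a frame was rendered to, or null if the frame is dropped. */
    String decode(Frame f) {
        if (needsKeyFrame) {
            if (!f.keyFrame) return null; // dropped frame -> black screen / pause
            needsKeyFrame = false;
        }
        return outputSurface;
    }
}

public class RelatedArtDemo {
    public static List<String> play() {
        HardwareDecoder decoder = new HardwareDecoder();
        decoder.configure("surfaceA");
        List<String> shown = new ArrayList<>();
        // GOP structure: I P P I P P
        Frame[] stream = { new Frame(0, true), new Frame(1, false), new Frame(2, false),
                           new Frame(3, true), new Frame(4, false), new Frame(5, false) };
        for (Frame f : stream) {
            if (f.index == 1) decoder.configure("surfaceB"); // user switches play mode
            String out = decoder.decode(f);
            shown.add(f.index + ":" + (out == null ? "DROPPED" : out));
        }
        return shown;
    }

    public static void main(String[] args) {
        System.out.println(play());
    }
}
```

Running the sketch shows frames 1 and 2 dropped after the switch, until the next I-frame (frame 3) arrives: exactly the black-screen gap the text describes.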
Fig. 2 is a diagram of an alternative architecture of the video processing system 100 according to an embodiment of the present invention. Referring to fig. 2, to support an exemplary application, the terminals (including the terminal 400-1 and the terminal 400-2) are connected to the server 200 through the network 300. The network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links for data transmission.
The server 200 is configured to send a video file to be played to the terminal 400;
the terminal (terminal 400-1 and/or terminal 400-2) is used for decoding the video file after receiving the media file, and rendering a first original video frame of the video to the independent screen buffer area; the first original video frame is obtained by decoding in a hardware decoding mode; determining a first drawing view for video frame presentation and a first screen buffer corresponding to the first drawing view; rendering the first original video frame to a first screen buffer based on the independent screen buffer; and displaying the first original video frame in the rendered first screen buffer area through the first drawing view so as to realize video playing.
In one embodiment, a client (e.g., a video playing client) is disposed on the terminal, and the terminal can receive a media file sent by the server 200 through the client, decode the video file, and render a first original video frame of the video to an independent screen buffer; determining a first drawing view for video frame presentation and a first screen buffer corresponding to the first drawing view; rendering the first original video frame to a first screen buffer based on the independent screen buffer; and displaying the first original video frame in the rendered first screen buffer area through the first drawing view so as to realize video playing.
Next, a video processing apparatus according to an embodiment of the present invention will be described. The video processing apparatus provided by the embodiment of the present invention may be implemented as hardware or a combination of hardware and software, and various exemplary implementations of the apparatus provided by the embodiment of the present invention are described below.
The hardware structure of the video processing apparatus according to the embodiment of the present invention is described in detail below, and it is understood that fig. 3 only shows an exemplary structure of the video processing apparatus and not a whole structure, and a part of or the whole structure shown in fig. 3 may be implemented as necessary.
The video processing apparatus 20 provided in the embodiment of the present invention includes: at least one processor 201, memory 202, user interface 203, and at least one network interface 204. The various components in the video processing device 20 are coupled together by a bus system 205. It will be appreciated that the bus system 205 is used to enable communications among the components. The bus system 205 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 205 in fig. 3.
The user interface 203 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 202 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory.
The memory 202 in the embodiments of the present invention is used to store various types of data to support the operation of the video processing apparatus 20. Examples of such data include any executable instructions for operating on the video processing apparatus 20; the program implementing the video processing method of the embodiments of the present invention may be included in these executable instructions.
The video processing method disclosed by the embodiment of the invention can be applied to the processor 201 or implemented by the processor 201. The processor 201 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the video processing method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 201. The processor 201 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 201 may implement or perform the methods, steps, and logic blocks disclosed by the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed by the embodiment of the invention may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 202; the processor 201 reads the information in the memory 202 and, in combination with its hardware, performs the steps of the video processing method provided by the embodiment of the present invention.
Next, a video processing method according to an embodiment of the present invention will be described. In some embodiments, referring to fig. 4, fig. 4 is a flowchart illustrating a video processing method according to an embodiment of the present invention, in practical implementation, the video processing method may be implemented by a terminal, for example, by the terminal 400-1 in fig. 2, and with reference to fig. 2 and fig. 4, the video processing method according to an embodiment of the present invention includes:
step 301: rendering a first original video frame of a video to an independent screen buffer area by the terminal; the first original video frame is obtained by decoding in a hardware decoding mode.
In practical applications, the terminal runs the Android system and is provided with a video playing client, through which the terminal plays videos; it can play either local video files on the terminal or video files received from a server.
When the terminal plays the video, a decoder (MediaCodec) decodes a first compressed video frame of the video in a hardware decoding mode to obtain a first original video frame, and renders the first original video frame to an independent screen buffer (i.e., an independent Surface).
The independent screen buffer is associated with the decoder and is used for the decoder's off-screen rendering, i.e., for the decoder to render the original video frames of the decoded video.
In one embodiment, before the decoder decodes the video file, it needs to be initialized to configure the relevant parameters and to associate the decoder with the independent screen buffer, so that after the hardware decoder decodes a compressed video frame into the corresponding original video frame, it renders that original video frame to the independent screen buffer, which serves as a relay for original video frames.
In one embodiment, before initializing the decoder, the terminal needs to create the independent screen buffer. Specifically, this can be done as follows: initialize an Open Graphics Library (OpenGL) context, generate a texture identifier (ID), create a drawing texture (SurfaceTexture) instance from the texture ID, and then generate the independent screen buffer (independent Surface) from the drawing texture instance.
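The creation order above can be sketched with plain-Java stand-ins for the Android classes. This is a hedged illustration, not the patent's code: the comments name the real Android calls that each stand-in represents, and all identifiers here are assumptions.

```java
import java.util.concurrent.atomic.AtomicInteger;

class GlContext { // stands in for an initialized EGL/OpenGL context
    private final AtomicInteger nextTex = new AtomicInteger(1);
    int genTexture() { return nextTex.getAndIncrement(); } // real call: GLES20.glGenTextures(...)
}

class DrawingTexture { // stands in for android.graphics.SurfaceTexture
    final int textureId;
    DrawingTexture(int textureId) { this.textureId = textureId; } // real call: new SurfaceTexture(texId)
}

class IndependentSurface { // stands in for android.view.Surface
    final DrawingTexture texture;
    IndependentSurface(DrawingTexture t) { this.texture = t; } // real call: new Surface(surfaceTexture)
}

public class IndependentSurfaceFactory {
    /** Mirrors the order in the text: GL context -> texture ID -> SurfaceTexture -> Surface. */
    public static IndependentSurface create(GlContext ctx) {
        int textureId = ctx.genTexture();
        DrawingTexture drawingTexture = new DrawingTexture(textureId);
        return new IndependentSurface(drawingTexture);
    }

    public static void main(String[] args) {
        IndependentSurface s = create(new GlContext());
        System.out.println("textureId=" + s.texture.textureId);
    }
}
```

On Android, the resulting Surface would then be passed to the decoder during configuration (e.g., `MediaCodec.configure(format, surface, null, 0)`), which is the association step described in the preceding paragraph.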
Here, the texture ID may be generated via EGL. EGL is a layer of interface between a graphics rendering API (e.g., OpenGL ES (OpenGL for Embedded Systems)) and the native platform window system; it ensures the platform independence of OpenGL ES and can be used for creating rendering surfaces, creating graphics contexts, managing rendering configurations, and so on.
In practical implementation, the terminal may further include a video-audio separator (Demuxer) for separating the compressed video data and compressed audio data of the video file and then sending the compressed video data to the decoder for decoding and off-screen rendering.
Step 302: the method comprises the steps of determining a first drawing view for displaying a video frame and a first screen buffer area corresponding to the first drawing view.
In one embodiment, the terminal may determine the first drawing view and the first screen buffer as follows: acquire user interface (UI) layout information of the video, where the UI layout information includes at least the drawing view information corresponding to the video; and, based on the UI layout information, select a corresponding drawing view from a plurality of drawing views (SurfaceView) as the first drawing view and determine the screen buffer corresponding to the first drawing view as the first screen buffer.
Here, the drawing view is used for video frame display, that is, for on-screen display (the video frame content is displayed on the screen). In actual implementation, there may be multiple drawing views for video frame display, each corresponding to a different playing scene or playing mode, such as a default playing scene, a color-blindness scene, a VR scene, or a super-resolution scene. The first drawing view is the drawing view currently used to display the first original video frame and can be determined from the UI layout information of the video: the UI layout information includes the drawing view information corresponding to the current playing scene or mode, from which the drawing view required for the current display and its screen buffer can be determined. In practical applications, the UI layout information may also include other user interface information, such as window information.
In practical applications, the drawing view may need to be switched during playback. For example, if the user switches the playing mode to the color-blindness mode, the drawing view used to display the video must change. The terminal receives a drawing-view switching instruction triggered by the user, which instructs switching from the first drawing view to a second drawing view, and updates the drawing view information in the UI layout information accordingly, so that the next video frame is rendered to the screen buffer of the second drawing view.
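The selection step can be illustrated with a small sketch: UI layout information maps the current playing mode to a drawing view, and each view owns a screen buffer. The mode names and classes here are assumptions for illustration, not the patent's actual data structures.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative: each drawing view (SurfaceView) owns one screen buffer (Surface).
class DrawingView {
    final String name;
    final String screenBuffer;
    DrawingView(String name) { this.name = name; this.screenBuffer = name + "-surface"; }
}

public class UiLayout {
    private final Map<String, DrawingView> viewsByMode = new HashMap<>();
    private String currentMode = "normal";

    public UiLayout() {
        viewsByMode.put("normal", new DrawingView("normalView"));
        viewsByMode.put("colorBlind", new DrawingView("colorBlindView"));
        viewsByMode.put("vr", new DrawingView("vrView"));
    }

    /** A view-switch instruction just updates the layout information. */
    public void switchMode(String mode) { currentMode = mode; }

    /** The renderer asks the layout which view (and thus which buffer) is current. */
    public DrawingView currentView() { return viewsByMode.get(currentMode); }

    public static void main(String[] args) {
        UiLayout layout = new UiLayout();
        System.out.println(layout.currentView().screenBuffer);
        layout.switchMode("colorBlind");
        System.out.println(layout.currentView().screenBuffer);
    }
}
```

The point of this design is that a switch instruction touches only the layout information; the decoder is never consulted, which is what makes the later seamless switch possible.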
Step 303: based on the independent screen buffer, rendering the first original video frame to the first screen buffer.
In one embodiment, the terminal renders the first original video frame to the first screen buffer as follows: the terminal obtains the texture ID corresponding to the independent screen buffer, extracts the first original video frame from the independent screen buffer based on the texture ID, and renders the extracted frame to the first screen buffer.
In practical implementation, before rendering the first original video frame to the first screen buffer, the terminal has already generated the texture ID, created a drawing texture (SurfaceTexture) instance based on it, and created the independent screen buffer based on that instance.
Step 304: and displaying the first original video frame in the rendered first screen buffer area through the first drawing view.
In practical applications, since the first drawing view is used to display the data in the first screen buffer, once the first original video frame has been rendered to the first screen buffer, it can be displayed directly through the first drawing view.
In one embodiment, when the first drawing view is switched to a second drawing view, the terminal obtains a second original video frame from the independent screen buffer, renders it to the second screen buffer corresponding to the second drawing view, and displays it through the second drawing view. Because neither the decoder's decoding nor the rendering to the independent screen buffer is affected while the drawing view is switched, no stutter or black screen occurs during playback; the views are switched seamlessly and playback fluency is ensured.
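The two-stage rendering described above can be modeled in a platform-free sketch: the decoder always renders into one independent buffer (off-screen), and an independent renderer copies each frame from there to whichever drawing view is currently active (on-screen). All names are illustrative stand-ins, not the patent's code.

```java
import java.util.ArrayList;
import java.util.List;

class IndependentBuffer { // stands in for the independent Surface (Surface1)
    private String latestFrame;
    void render(String frame) { latestFrame = frame; } // decoder's off-screen render
    String fetch() { return latestFrame; }             // fetch via the texture ID
}

class Renderer { // stands in for the independent rendering module (OpenGLRender)
    private String targetView; // the currently active SurfaceView's buffer
    final List<String> displayed = new ArrayList<>();
    void setTargetView(String view) { targetView = view; } // the ONLY thing a switch changes
    void onFrameAvailable(IndependentBuffer buf) {
        displayed.add(buf.fetch() + "@" + targetView);     // on-screen render
    }
}

public class DecoupledPipeline {
    public static List<String> play() {
        IndependentBuffer surface1 = new IndependentBuffer();
        Renderer renderer = new Renderer();
        renderer.setTargetView("normalView");
        String[] frames = { "f0", "f1", "f2", "f3" };
        for (String f : frames) {
            if (f.equals("f2")) renderer.setTargetView("colorBlindView"); // mode switch
            surface1.render(f);              // decoder keeps decoding, never restarted
            renderer.onFrameAvailable(surface1);
        }
        return renderer.displayed;
    }

    public static void main(String[] args) {
        System.out.println(play());
    }
}
```

Unlike the related-art model, no frame is dropped across the switch: the decoder's output path never changes, so there is no restart and no wait for an I-frame.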
In an embodiment, referring to fig. 5, fig. 5 is a flowchart illustrating a video processing method according to an embodiment of the present invention, and the video processing method is applied to a terminal of an android operating system including a decoder (MediaCodec) and an independent rendering module (OpenGLRender), and the video processing method according to the embodiment of the present invention includes:
step 401: the independent rendering module generates a texture ID and creates an independent screen buffer according to the texture ID.
In actual implementation, before initializing the decoder, the independent rendering module needs to be initialized. Specifically, the independent rendering module initializes an OpenGL context, generates a texture ID via EGL, creates a drawing texture (SurfaceTexture) instance from the generated texture ID, and generates the independent screen buffer from the drawing texture instance.
Step 402: the decoder is initialized.
Here, during initialization, the decoder's relevant parameters are configured to associate the decoder with the independent screen buffer, so that after decoding a compressed video frame into the corresponding original video frame, the hardware decoder renders it to the independent screen buffer, which serves as a relay for original video frames.
Step 403: the decoder decodes the input first compressed video frame in a hardware decoding mode to obtain a first original video frame.
In practical application, the terminal further comprises a video-audio separator for separating the video compression data and the audio compression data of the video file, and further sending the video compression data to a decoder for decoding and off-screen rendering.
Step 404: the decoder renders the first original video frame to an independent screen buffer and notifies an independent rendering module.
Step 405: the independent rendering module determines a first drawing view for displaying the video frame and a first screen buffer area corresponding to the first drawing view.
Here, the independent rendering module acquires UI layout information of the video, selects a corresponding drawing view from the plurality of drawing views as a first drawing view based on the UI layout information, and determines a screen buffer corresponding to the first drawing view as a first screen buffer.
Step 406: the independent rendering module obtains a first original video frame from an independent screen buffer based on the texture ID.
Step 407: the independent rendering module renders the first original video frame to a first screen buffer, and the first original video frame is displayed through a first drawing view.
Step 408: the independent rendering module obtains a second original video frame from the independent screen buffer based on the texture ID when determining that the first drawing view is switched to the second drawing view.
Step 409: the independent rendering module renders the second original video frame to a second screen buffer of a second drawing view, and the second original video frame is shown through the second drawing view.
In an embodiment, referring to fig. 6, fig. 6 is a schematic diagram of a video processing method provided by an embodiment of the present invention, applied to a terminal including a decoder (MediaCodec) and an independent rendering module (OpenGLRender). When MediaCodec is initialized, an independent screen buffer (Surface1) is created using OpenGLRender. After the video-audio separator (Demuxer) unpacks the file, the compressed video frame data is sent to MediaCodec for hardware decoding; MediaCodec renders the decoded frame to the independent screen buffer and then notifies OpenGLRender. OpenGLRender renders the data in the independent screen buffer to the screen buffer (Surface) corresponding to the drawing view (SurfaceView) currently displaying the video, as the situation requires. That is, OpenGLRender renders independently, and how it renders is irrelevant to the decoder: even if the SurfaceView and Surface are switched during playback, the decoder's normal decoding is unaffected. During playback, OpenGLRender's on-screen rendering is separated from MediaCodec's decoding rendering (off-screen rendering), so no matter how the SurfaceView used for display changes, only the new SurfaceView needs to be updated in the independent rendering module. The decoder is unaware of the change and need not be interrupted; it continues decoding data, and the rendering module simply renders the data to the new Surface, achieving seamless switching of the picture.
Specifically, fig. 7 is a schematic flowchart of a video processing method according to an embodiment of the present invention, and referring to fig. 7, the video processing method according to the embodiment of the present invention includes:
step 501: the independent rendering module (OpenGLRender) generates a texture ID (TextureId).
Here, in actual implementation, OpenGLRender is initialized before the decoder MediaCodec is initialized. Specifically, the OpenGL (EGL) context is initialized to create the rendering environment, and a texture ID (TextureId) is generated.
Step 502: an independent screen buffer is created based on the texture ID.
The OpenGLRender creates a SurfaceTexture instance from the generated texture and, from that instance, generates an independent screen buffer (Surface1) to serve as a relay for decoded data.
Step 503: the decoder is initialized.
The decoder MediaCodec is initialized, and the newly generated Surface1 is set on MediaCodec when configuring its parameters. This associates MediaCodec with Surface1, so that after decoding, MediaCodec performs off-screen rendering of the decoded data to Surface1.
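Steps 501 to 503 impose an initialization order: renderer context first, then the independent buffer, then the decoder bound to that buffer. A minimal sketch of that order (Python; all names are hypothetical stand-ins for the EGL/SurfaceTexture/MediaCodec calls, not real APIs):

```python
def init_renderer():
    # Step 501: initialize the GL context and generate a texture ID
    texture_id = 1  # stand-in for a generated GL texture name
    return texture_id

def create_independent_buffer(texture_id):
    # Step 502: a SurfaceTexture built on texture_id backs the buffer
    return {"texture_id": texture_id, "frame": None}  # stand-in for Surface1

def init_decoder(independent_buffer):
    # Step 503: configure the decoder with Surface1 so decoded output is
    # rendered off-screen to it, rather than to any on-screen view
    return {"output_buffer": independent_buffer}      # stand-in for MediaCodec

tex = init_renderer()
surface1 = create_independent_buffer(tex)
codec = init_decoder(surface1)
```

Because Surface1 exists before the decoder is configured, the decoder can be bound to it once at startup and never needs rebinding when on-screen views change later.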
Step 504: MediaCodec decodes the input compressed data, renders the decoded data to an independent screen buffer, and notifies OpenGLRender.
Here, after initialization of the MediaCodec decoder is complete, when a frame of data arrives from the Demuxer and decoding starts, MediaCodec first renders the decoded data onto Surface1 and then notifies OpenGLRender for processing.
Step 505: selecting a drawing view (SurfaceView) for video display, and determining a corresponding screen buffer area (Surface).
At this point, OpenGLRender is bound only to the off-screen Surface1; a real on-screen Surface still needs to be provided. Therefore, the SurfaceView to be rendered is selected and set from the UI layout, and its Surface is taken out and updated (associated) to OpenGLRender. This update may occur at any time.
Step 506: the OpenGLRender acquires the decoded data and renders the decoded data to a screen buffer area to be displayed through a corresponding drawing view.
Here, in actual implementation, OpenGLRender obtains the decoded data from Surface1 according to the TextureId and renders it directly to the selected Surface, so that it finally reaches the screen.
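Step 506's fetch-by-texture-ID can be sketched as a lookup (Python; the registry and all names are hypothetical — on an actual device this corresponds to sampling the GL texture that backs the SurfaceTexture):

```python
# hypothetical registry mapping texture IDs to off-screen buffers
buffers_by_texture = {}

def register_buffer(texture_id, buffer):
    buffers_by_texture[texture_id] = buffer

def render_on_screen(texture_id, target):
    # fetch the latest decoded frame via the texture ID, then
    # draw it to the currently selected on-screen buffer
    frame = buffers_by_texture[texture_id]["frame"]
    target["frame"] = frame
    return frame

surface1 = {"frame": "decoded-frame-7"}      # off-screen buffer with newest frame
register_buffer(42, surface1)                # 42 is the TextureId from step 501

selected_surface = {"frame": None}           # Surface of the chosen SurfaceView
shown = render_on_screen(42, selected_surface)
```

The texture ID is the only handle the renderer needs, so the same fetch works no matter which on-screen `target` was most recently selected.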
By applying the embodiment of the present invention, the on-screen SurfaceView can be updated independently of the decoder, enabling seamless switching of pictures in scenarios such as playback interruption and page switching, thereby improving the user experience.
Next, a video processing apparatus provided in an embodiment of the present invention is described, where the video processing apparatus provided in the embodiment of the present invention can be implemented in a terminal, and referring to fig. 8, the video processing apparatus provided in the embodiment of the present invention includes:
a first rendering module 81 for rendering a first original video frame of the video to an independent screen buffer; the first original video frame is obtained by decoding in a hardware decoding mode;
a determining module 82, configured to determine a first drawing view for performing video frame presentation and a first screen buffer corresponding to the first drawing view;
a second rendering module 83 for rendering the first original video frame to the first screen buffer based on the independent screen buffer;
a display module 84, configured to display the rendered first original video frame in the first screen buffer through the first drawing view.
Here, in practical implementation, the first rendering module may be implemented by the hardware decoder MediaCodec in the terminal, and the determining module, the second rendering module, and the display module may be implemented by the independent rendering module OpenGLRender.
In some embodiments, the apparatus further comprises an acquisition module;
the acquisition module is used for acquiring a second original video frame based on the independent screen buffer area when the first drawing view is switched to a second drawing view;
the second rendering module is further configured to render the second original video frame to a second screen buffer corresponding to the second drawing view;
the display module is further configured to display the rendered second original video frame in the second screen buffer through the second drawing view.
In some embodiments, the determining module is specifically configured to obtain UI layout information of the video's user interface, where the UI layout information at least includes drawing view information corresponding to the video;
and selecting the first drawing view and a first screen buffer area corresponding to the first drawing view from a plurality of drawing views based on the UI layout information.
In some embodiments, the apparatus further comprises:
an update module, configured to receive a drawing view switching instruction, where the drawing view switching instruction is used to instruct to switch the first drawing view to a second drawing view;
updating the drawing view information in the UI layout information based on the drawing view switching instruction.
In some embodiments, the second rendering module is specifically configured to obtain a texture identifier ID corresponding to the independent screen buffer;
extracting the first original video frame from the independent screen buffer based on the texture ID;
rendering the extracted first original video frame to the first screen buffer area.
In some embodiments, the apparatus further comprises:
a first initialization module to generate the texture ID;
creating a drawing texture instance based on the texture ID;
creating the independent screen buffer based on the drawing texture instance.
In some embodiments, the apparatus further comprises:
a second initialization module for initializing a hardware decoder for decoding an input compressed video frame;
and associating the hardware decoder and the independent screen buffer area, so that the hardware decoder renders the original video frame to the independent screen buffer area after decoding the compressed video frame to obtain the corresponding original video frame.
Applying the above embodiments of the present invention yields the following technical effects:
1. By creating an independent screen buffer, the decoder can render decoded original video frames to the independent screen buffer, thereby realizing off-screen rendering. Even if, during video playback, a switch of playing scene/playing mode causes the drawing view used for displaying video frames and its corresponding first screen buffer to be switched, the decoder's decoding of compressed video frames is unaffected, and decoded original video frames continue to be rendered to the independent screen buffer. This achieves seamless page transitions of the video across different playing scenes/playing modes and ensures playback fluency.
and 2, after the original video frame is obtained from the independent screen buffer area, the original video frame is rendered to the screen buffer area for displaying the video frame, the on-screen rendering is realized, the on-screen rendering is independent from the off-screen rendering, the hard decoding of a decoder is not influenced in the video playing process, and the rendering efficiency is improved.
An embodiment of the present invention further provides a video processing apparatus, where the apparatus includes:
a memory for storing an executable program;
and the processor is used for realizing the video processing method provided by the embodiment of the invention when executing the executable program stored in the memory.
The embodiment of the invention also provides a storage medium which stores an executable program, and the executable program is executed by a processor to realize the video processing method provided by the embodiment of the invention.
Here, it should be noted that the description of the apparatus above is similar to the description of the method, including the same beneficial effects, and is therefore not repeated. For technical details not disclosed in the video processing apparatus of the embodiment of the present invention, refer to the description of the method embodiment of the present invention.
All or part of the steps of the above embodiments may be implemented by program instructions executed on related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable memory device, a Random Access Memory (RAM), a Read-Only Memory (ROM), a magnetic disk, and an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a RAM, a ROM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. A method of video processing, the method comprising:
in the video playing process, decoding each compressed video frame of a video by a decoder in a hardware decoding mode to obtain a corresponding original video frame;
rendering each original video frame of a video obtained by decoding to an independent screen buffer area associated with the decoder, wherein the independent screen buffer area is used for performing off-screen rendering on each original video frame;
acquiring UI layout information of the user interface of the video, wherein the UI layout information at least comprises drawing view information corresponding to a current playing scene;
selecting a drawing view corresponding to a current playing scene from a plurality of drawing views as a first drawing view for displaying a first original video frame based on the UI layout information, wherein each drawing view corresponds to a different playing scene, and
determining a first screen buffer corresponding to the first drawing view based on the UI layout information, wherein the first screen buffer is used for performing on-screen rendering on the first original video frame, and the first screen buffer and the independent screen buffer are independent of each other;
rendering the first raw video frame to the first screen buffer based on the independent screen buffer;
displaying the first original video frame in the rendered first screen buffer through the first drawing view;
when the first drawing view is switched to a second drawing view, a second original video frame is obtained based on the independent screen buffer area, the second original video frame is rendered to a second screen buffer area corresponding to the second drawing view, the second screen buffer area is used for performing on-screen rendering on the second original video frame, the second screen buffer area and the independent screen buffer area are mutually independent, and the second original video frame in the rendered second screen buffer area is displayed through the second drawing view.
2. The method of claim 1, wherein the method further comprises:
receiving a drawing view switching instruction, wherein the drawing view switching instruction is used for instructing to switch the first drawing view to a second drawing view;
updating the drawing view information in the UI layout information based on the drawing view switching instruction.
3. The method of claim 1, wherein said rendering the first raw video frame to the first screen buffer based on the independent screen buffer comprises:
acquiring a texture ID corresponding to the independent screen buffer area;
extracting the first original video frame from the independent screen buffer based on the texture ID;
rendering the extracted first original video frame to the first screen buffer area.
4. The method of claim 3, wherein the method further comprises:
generating the texture ID;
creating a drawing texture instance based on the texture ID;
creating the independent screen buffer based on the drawing texture instance.
5. The method of claim 1, wherein the method further comprises:
initializing the decoder for decoding an input compressed video frame;
and associating the decoder and the independent screen buffer area, so that the decoder renders the original video frame to the independent screen buffer area after decoding the compressed video frame to obtain the corresponding original video frame.
6. A video processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for decoding each compressed video frame of the video in a hardware decoding mode through a decoder to obtain a corresponding original video frame in the video playing process;
the first rendering module is used for rendering each original video frame of a video obtained by decoding to an independent screen buffer area associated with the decoder, and the independent screen buffer area is used for performing off-screen rendering on each original video frame; the determining module is used for acquiring UI layout information of the video, wherein the UI layout information at least comprises drawing view information corresponding to the current playing scene;
the determining module is further configured to select, based on the UI layout information, a drawing view corresponding to a current playing scene from a plurality of drawing views as a first drawing view for displaying a first original video frame, where each drawing view corresponds to a different playing scene, and determine, based on the UI layout information, a first screen buffer corresponding to the first drawing view, where the first screen buffer is used for performing on-screen rendering on the first original video frame, and the first screen buffer and the independent screen buffer are independent of each other;
a second rendering module to render the first original video frame to the first screen buffer based on the independent screen buffer;
a display module, configured to display the rendered first original video frame in the first screen buffer through the first drawing view;
the obtaining module is further configured to obtain a second original video frame based on the independent screen buffer when the first drawing view is switched to a second drawing view;
the second rendering module is further configured to render the second original video frame to a second screen buffer corresponding to the second drawing view, where the second screen buffer is used to perform on-screen rendering on the second original video frame, and the second screen buffer and the independent screen buffer are independent from each other;
the display module is further configured to display the rendered second original video frame in the second screen buffer through the second drawing view.
7. The apparatus of claim 6, wherein the apparatus further comprises:
an update module, configured to receive a drawing view switching instruction, where the drawing view switching instruction is used to instruct to switch the first drawing view to a second drawing view;
updating the drawing view information in the UI layout information based on the drawing view switching instruction.
8. The apparatus of claim 6,
the second rendering module is specifically configured to obtain a texture ID corresponding to the independent screen buffer;
extracting the first original video frame from the independent screen buffer based on the texture ID;
rendering the extracted first original video frame to the first screen buffer area.
9. The apparatus of claim 8, wherein the apparatus further comprises:
a first initialization module to generate the texture ID;
creating a drawing texture instance based on the texture ID;
creating the independent screen buffer based on the drawing texture instance.
10. A video processing apparatus, characterized in that the apparatus comprises:
a memory configured to hold a program for video processing;
a processor configured to execute the program, wherein the program when executed performs the video processing method of any of claims 1 to 5.
11. A storage medium comprising a stored program, characterized in that the program when executed performs the video processing method of any one of claims 1 to 5.
CN201910173535.1A 2019-03-07 2019-03-07 Video processing method, device and storage medium Active CN109922360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910173535.1A CN109922360B (en) 2019-03-07 2019-03-07 Video processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910173535.1A CN109922360B (en) 2019-03-07 2019-03-07 Video processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN109922360A CN109922360A (en) 2019-06-21
CN109922360B true CN109922360B (en) 2022-02-11

Family

ID=66963862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910173535.1A Active CN109922360B (en) 2019-03-07 2019-03-07 Video processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109922360B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602551A (en) * 2019-08-22 2019-12-20 福建星网智慧科技股份有限公司 Media playing method, player, equipment and storage medium of android frame layer
CN113382196B (en) * 2020-02-25 2022-06-03 杭州海康消防科技有限公司 Scene switching method, system and device and video comprehensive processing platform
CN111679738B (en) * 2020-05-29 2023-06-23 阿波罗智联(北京)科技有限公司 Screen switching method and device, electronic equipment and storage medium
CN113411660B (en) * 2021-01-04 2024-02-09 腾讯科技(深圳)有限公司 Video data processing method and device and electronic equipment
CN112929740B (en) * 2021-01-20 2023-06-27 广州虎牙科技有限公司 Method, device, storage medium and equipment for rendering video stream
CN113411661B (en) * 2021-06-11 2023-01-31 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and program product for recording information
CN113436344B (en) * 2021-06-25 2024-08-23 广联达科技股份有限公司 Reference view display method, system and image display device
CN113724355B (en) * 2021-08-03 2024-05-07 北京百度网讯科技有限公司 Chart drawing method and device for video and electronic equipment
CN113946373B (en) * 2021-10-11 2023-06-09 成都中科合迅科技有限公司 Virtual reality multiple video stream rendering method based on load balancing
CN114222185B (en) * 2021-12-10 2024-04-05 洪恩完美(北京)教育科技发展有限公司 Video playing method, terminal equipment and storage medium
CN113923507B (en) * 2021-12-13 2022-07-22 北京蔚领时代科技有限公司 Low-delay video rendering method and device for Android terminal
CN117041668B (en) * 2023-10-08 2023-12-08 海马云(天津)信息技术有限公司 Method and device for optimizing rendering performance of terminal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9014530B2 (en) * 2008-08-12 2015-04-21 2236008 Ontario Inc. System having movie clip object controlling an external native application
CN106600670A (en) * 2016-10-19 2017-04-26 上海斐讯数据通信技术有限公司 Hardware acceleration control method and system in view drafting
CN108093292B (en) * 2016-11-21 2020-09-11 阿里巴巴集团控股有限公司 Method, device and system for managing cache
CN106534880A (en) * 2016-11-28 2017-03-22 深圳Tcl数字技术有限公司 Video synthesis method and device
CN106598514B (en) * 2016-12-01 2020-06-09 惠州Tcl移动通信有限公司 Method and system for switching virtual reality mode in terminal equipment
CN106888169A (en) * 2017-01-06 2017-06-23 腾讯科技(深圳)有限公司 Video broadcasting method and device
CN109168068B (en) * 2018-08-23 2020-06-23 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and computer readable medium
CN109194960B (en) * 2018-11-13 2020-12-18 北京奇艺世纪科技有限公司 Image frame rendering method and device and electronic equipment

Also Published As

Publication number Publication date
CN109922360A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109922360B (en) Video processing method, device and storage medium
CN110166810B (en) Video rendering engine switching method, device and equipment and readable storage medium
CN105979339B (en) Window display method and client
EP2637083A1 (en) Method and device for displaying startup interface of multimedia terminal
US20150046941A1 (en) Video display device, video display method, and program
US20130147787A1 (en) Systems and Methods for Transmitting Visual Content
CN104954848A (en) Intelligent terminal display graphic user interface control method and device
HRP20000488A2 (en) Processing of digital picture data in a decoder
JP2010123081A (en) Image processing apparatus, image processing method and program
JP2018521550A (en) Method, client and computer storage medium for playing video
CN112929740B (en) Method, device, storage medium and equipment for rendering video stream
CN109788212A (en) A kind of processing method of segmenting video, device, terminal and storage medium
US9457275B2 (en) Information processing device
JP2003050694A (en) Presentation system, image display device, its program and recording medium
CN106980503B (en) Page processing method, device and equipment
CN112804578A (en) Atmosphere special effect generation method and device, electronic equipment and storage medium
US20150040157A1 (en) Video display device, video display method, and program
CN110708591A (en) Image processing method and device and electronic equipment
WO2003041405A1 (en) Data reception apparatus
CN112019858B (en) Video playing method and device, computer equipment and storage medium
US20090080802A1 (en) Information processing apparatus and method for generating composite image
US20050021552A1 (en) Video playback image processing
JP2000148134A (en) Image display method and image processing device
AU2011338800B2 (en) Video stream presentation system and protocol
US7443403B2 (en) Navigation control in an image having zoomable areas

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant