CN113099309A - Video processing method and device - Google Patents

Video processing method and device Download PDF

Info

Publication number
CN113099309A
CN113099309A
Authority
CN
China
Prior art keywords
video
terminal
target service
processed
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110338953.9A
Other languages
Chinese (zh)
Inventor
方鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202110338953.9A priority Critical patent/CN113099309A/en
Publication of CN113099309A publication Critical patent/CN113099309A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video processing method and a video processing apparatus. The video processing method comprises the following steps: receiving a video processing request, wherein the video processing request carries a video to be processed, a video rule, and at least one target service corresponding to the video to be processed; requesting allocation of a corresponding rendering area for the video to be processed and for each target service; generating, in the rendering area corresponding to the video to be processed, a video player for that video, and generating, in the rendering area corresponding to each target service, a target service control for that service; and compositing and rendering the video player and each target service control according to the video rule to generate a rendered video. With this video processing method, videos can be flexibly composited and rendered according to different requirements, improving the user experience.

Description

Video processing method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method. The application also relates to a video processing apparatus, a computing device, and a computer-readable storage medium.
Background
With the development of technology, there are more and more application scenarios for screen recording, screen casting, or stream pushing on Android smart terminals; for example, a user who wants to share a particular operation can display the terminal's content on another terminal by screen casting or stream pushing.
However, existing Android smart terminals can generally record only full-screen video; they cannot record a designated or special area of the screen.
Disclosure of Invention
In view of this, the present application provides a video processing method. The application also relates to a video processing apparatus, a computing device, and a computer-readable storage medium, which address the problems in the prior art that, during screen recording, screen casting, stream pushing, and the like, the recorded or pushed area cannot be specified and the interaction is inflexible.
According to a first aspect of the embodiments of the present application, there is provided a video processing method applied to a first terminal, including:
receiving a video processing request, wherein the video processing request carries a video to be processed, a video rule and at least one target service corresponding to the video to be processed;
requesting allocation of a corresponding rendering area for the video to be processed and for each target service;
generating a video player corresponding to the video to be processed in the rendering area corresponding to the video to be processed, and generating a target service control corresponding to each target service in the rendering area corresponding to that target service;
and compositing and rendering the video player and each target service control according to the video rule to generate a rendered video.
According to a second aspect of embodiments of the present application, there is provided a video processing apparatus including:
a receiving module configured to receive a video processing request, wherein the video processing request carries a video to be processed, a video rule, and at least one target service corresponding to the video to be processed;
an allocation module configured to request allocation of a corresponding rendering area for the video to be processed and for each target service;
a generating module configured to generate a video player corresponding to the video to be processed in the rendering area corresponding to the video, and to generate a target service control corresponding to each target service in the rendering area corresponding to that service;
and a composite rendering module configured to composite and render the video player and each target service control according to the video rule to generate a rendered video.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the video processing method when executing the computer instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the video processing method.
According to the video processing method provided by the application, a video processing request is received, carrying a video to be processed, a video rule, and at least one target service corresponding to the video to be processed; allocation of a corresponding rendering area is requested for the video to be processed and for each target service; a video player corresponding to the video is generated in the rendering area corresponding to the video, and a target service control corresponding to each target service is generated in the rendering area corresponding to that service; and the video player and each target service control are composited and rendered according to the video rule to generate a rendered video. In this way, videos can be flexibly composited and rendered according to different requirements, improving the user experience.
Further, during subsequent processing of the rendered video, the content on the first terminal can be stream-pushed or screen-cast to a second terminal as required, so that the content displayed on the second terminal can differ from that displayed on the first terminal. This enables flexible customization of the pushed or cast content and further improves the user experience.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a video processing method according to an embodiment of the present application;
fig. 3 is a processing flow chart of a video processing method applied to a scene where an interactive video of a mobile phone end is played on a television according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present application, a first aspect may be termed a second aspect and, similarly, a second aspect may be termed a first aspect. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, the terms involved in one or more embodiments of the present application are explained.
Composite rendering: compositing sub-views of different levels into one image buffer for output; on Android, views, videos, or other rendering results can be flexibly rendered onto a canvas as required (see the sketch after this list).
OpenGL: a cross-platform three-dimensional graphics API.
OpenGL ES: a subset of the OpenGL three-dimensional graphics API, designed for embedded devices such as mobile phones, tablets, and game consoles. Taking the bullet screen as an example, the bullet screen on an Android terminal can be displayed using either a view or OpenGL ES.
FBO: FrameBuffer Object, an off-screen image buffer.
Bullet screen: comments that fly across the screen while a video is being watched.
Panoramic video: a video format covering 360 degrees, in which the viewer can adjust the viewing angle or viewing position.
Stream pusher: pushes a video stream to a specified address.
Synthesizer: combines the audio track and the video track to output a video.
Interactive video: a video that supports real-time interaction in conjunction with the video content.
In the present application, a video processing method is provided, and the present application relates to a video processing apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present application, where the video processing method is applied to a first terminal, and specifically includes the following steps:
step 102: receiving a video processing request, wherein the video processing request carries a video to be processed, a video rule and at least one target service corresponding to the video to be processed.
The first terminal is a smart terminal device used by the user and runs the Android system, for example an Android mobile phone, an Android tablet, a smart TV configured with the Android system, or another Android smart device.
In the application, the user can interact with a video player, a bullet screen, or a designated view to generate a corresponding video processing request. For example, the user can click an interaction control in the video player to generate a corresponding animation in the video, or send a bullet-screen comment in the bullet-screen control to be displayed over the video.
The video processing request carries a video to be processed, a video rule, and at least one target service corresponding to the video to be processed. The video to be processed is the video addressed by the request; the video rule specifies how the video is to be processed; and the at least one target service is a service related to the video to be processed, such as the bullet screen or video interaction.
In a specific embodiment provided by the application, taking a user sending a bullet-screen comment as an example, a video processing request is received that carries a video V to be processed, a video rule, and a bullet-screen service corresponding to the video to be processed.
Step 104: requesting allocation of a corresponding rendering area for the video to be processed and for each target service.
In practical application, a LocalSurface encapsulates a drawing surface texture (SurfaceTexture) and a drawing surface (Surface), where Surface is the drawing surface of the Android system. A LocalSurface is an area, generated internally or injected externally, that can be used to render images directly or to create a rendering environment. In practice, the video to be processed and each target service each correspond to one LocalSurface.
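A minimal Kotlin sketch of one plausible LocalSurface shape follows; the class name and the two wrapped objects come from the description above, while the constructor parameters and sizing logic are illustrative assumptions:

```kotlin
import android.graphics.SurfaceTexture
import android.view.Surface

class LocalSurface(glTextureId: Int, width: Int, height: Int) {
    // Wraps a GL texture; frames drawn into the Surface become available
    // as an OES texture in the rendering thread's GL context.
    val surfaceTexture: SurfaceTexture = SurfaceTexture(glTextureId).apply {
        setDefaultBufferSize(width, height)
    }

    // The Android drawing surface handed to a producer such as a video
    // player or a bullet-screen control.
    val surface: Surface = Surface(surfaceTexture)

    fun release() {
        surface.release()
        surfaceTexture.release()
    }
}
```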
Specifically, requesting allocation of a corresponding rendering area for the video to be processed and for each target service includes:
starting a rendering thread in response to the video processing request;
requesting, from the rendering thread, a rendering area corresponding to the video to be processed and to each target service according to the video to be processed and each target service;
and allocating, by the rendering thread, a corresponding rendering area for the video to be processed and each target service.
A rendering thread is a computer thread used to render and generate images; a thread is the smallest unit that an operating system can schedule, and here the rendering thread generates the rendered image corresponding to the image rendering request.
After the video to be processed and each target service are obtained, a LocalSurface corresponding to the video to be processed and to each target service is requested from the rendering thread according to the video and the target services; the rendering thread then allocates the LocalSurfaces according to the request, that is, it allocates a corresponding memory area for the video to be processed and for each target service.
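The following sketch shows one way such a rendering thread could hand out LocalSurfaces; the HandlerThread-based design and the string keys are assumptions, not the patent's prescribed implementation:

```kotlin
import android.opengl.GLES20
import android.os.Handler
import android.os.HandlerThread

class RenderThread : HandlerThread("render-thread") {

    private val areas = mutableMapOf<String, LocalSurface>()

    override fun onLooperPrepared() {
        // A real implementation would create an EGL context here and make it
        // current, so the GL calls below run against it.
    }

    // One key per producer, e.g. "video", "danmaku"; start() must have been
    // called before requesting areas.
    fun requestArea(key: String, width: Int, height: Int,
                    onAllocated: (LocalSurface) -> Unit) {
        Handler(looper).post { // getLooper() blocks until the Looper is ready
            val area = areas.getOrPut(key) {
                LocalSurface(genTexture(), width, height)
            }
            onAllocated(area)
        }
    }

    // Assumes this thread's GL context is current.
    private fun genTexture(): Int {
        val ids = IntArray(1)
        GLES20.glGenTextures(1, ids, 0)
        return ids[0]
    }
}
```

A caller would do `val rt = RenderThread().apply { start() }` and then `rt.requestArea("danmaku", 1920, 1080) { area -> ... }`, receiving the allocated area on the rendering thread.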
Optionally, the step of allocating, by the rendering thread, a corresponding rendering area to the video to be processed and each target service includes:
allocating a corresponding drawing surface texture for the video to be processed and each target service;
creating, on each drawing surface texture, a drawing surface corresponding to the video to be processed and each target service;
and allocating the corresponding drawing surface to the video to be processed and each target service.
In practical application, when the rendering thread receives an allocation request for a rendering area, it allocates a corresponding drawing surface texture (SurfaceTexture) in memory according to the attribute information of the video to be processed and of each target service. SurfaceTexture is the surface image buffer entity of the Android system and can be used to create a drawing surface (Surface); a Surface is the drawing surface of the Android system, and wherever a Surface exists its SurfaceTexture necessarily exists. Together, the drawing surface (Surface) and the drawing surface texture (SurfaceTexture) form the rendering area (LocalSurface).
In a specific embodiment provided by the present application, continuing the above example, a corresponding rendering area Q1 is allocated to the video V to be processed, and a corresponding rendering area Q2 is allocated to the bullet screen.
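The three allocation steps can be sketched in Kotlin as follows; a current GL context on the rendering thread is assumed, and error handling is omitted:

```kotlin
import android.graphics.SurfaceTexture
import android.opengl.GLES11Ext
import android.opengl.GLES20
import android.view.Surface

fun allocateRenderingArea(width: Int, height: Int): Pair<SurfaceTexture, Surface> {
    // Step 1: allocate an OES texture to back the drawing surface texture.
    val ids = IntArray(1)
    GLES20.glGenTextures(1, ids, 0)
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, ids[0])

    // Step 2: create the drawing surface texture (SurfaceTexture) on it.
    val surfaceTexture = SurfaceTexture(ids[0]).apply {
        setDefaultBufferSize(width, height)
    }

    // Step 3: create the drawing surface (Surface) that a producer draws into.
    val surface = Surface(surfaceTexture)
    return surfaceTexture to surface
}
```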
Step 106: generating a video player corresponding to the video to be processed in the rendering area corresponding to the video, and generating a target service control corresponding to each target service in the rendering area corresponding to that service.
The rendering areas (LocalSurfaces) are used to generate the video player corresponding to the video to be processed and the target service control corresponding to each target service. For example, the rendering area corresponding to the video to be processed is used to generate a video player that plays the video; the rendering area corresponding to the bullet screen is used to generate a bullet-screen control that displays bullet-screen comments; the rendering area corresponding to subtitles is used to generate a subtitle control that displays subtitles; the rendering area corresponding to video interaction is used to generate a video interaction control for interacting with the user; and so on.
In a specific embodiment provided by the present application, a video player P corresponding to the video V to be processed is generated in the rendering area Q1, and a bullet-screen control C for displaying the bullet screen is generated in the rendering area Q2.
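For the running example, this step might look roughly like the following Kotlin sketch, where LocalSurface is the wrapper sketched earlier and the bullet-screen drawing is reduced to a single illustrative text draw:

```kotlin
import android.graphics.Color
import android.graphics.Paint
import android.graphics.PorterDuff
import android.media.MediaPlayer

// The player P renders the video V into rendering area Q1, while the
// bullet-screen control C draws comments onto Q2 with a software Canvas.
fun generateProducers(q1: LocalSurface, q2: LocalSurface): MediaPlayer {
    // Video player for the video to be processed.
    val player = MediaPlayer().apply {
        setSurface(q1.surface)
        // setDataSource(...), prepareAsync(), start() would follow here.
    }

    // Bullet-screen control: draw one frame of comments onto its own area.
    val canvas = q2.surface.lockCanvas(null)
    try {
        canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
        val paint = Paint().apply { color = Color.WHITE; textSize = 42f }
        canvas.drawText("a flying comment", 0f, 48f, paint)
    } finally {
        q2.surface.unlockCanvasAndPost(canvas)
    }
    return player
}
```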
Step 108: compositing and rendering the video player and each target service control according to the video rule to generate a rendered video.
The video rule includes the combination rule for the video player and each target service control, for example whether a given target service control is displayed, the positional relationship between the target service control and the video player, the style of the target service control, and so on. The video player and the target service controls are combined according to the video rule; the video to be processed is displayed in the video player, and the corresponding target service is displayed in its target service control.
The rendered video is the video generated after rendering is completed; it is not specifically limited and may be an ordinary video, an interactive video, a panoramic video, or the like.
Specifically, compositing and rendering the video player and each target service control according to the video rule to generate a rendered video includes:
determining, among the target service controls, the target service controls to be composited according to the video rule;
and encoding the video player and the target service controls to be composited to generate the rendered video.
A target service control to be composited is a target service control that, according to the video rule, needs to be merged with the video player. For example, if a user wants to watch the bullet screen and selects to display it, the bullet-screen control is a target service control to be composited; if the user does not want to watch subtitles and selects to close them, the subtitle control is not a control to be composited.
After the target service controls to be composited with the video player are determined, the video player and those controls are combined; the video player plays the video to be processed while each target service is displayed in its control, and the result is then encoded to generate the corresponding rendered video.
In a specific embodiment provided by the application, continuing the above example, the bullet-screen control C is determined, according to the video rule, to be in the display state, that is, it is a target service control to be composited. The video player P and the bullet-screen control C are combined according to the video rule, with the control C floating above the player P; the video V to be processed plays in the player P while its bullet-screen information is displayed in the control C. After encoding, a rendered video containing both the video V and the bullet-screen information is generated.
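The encode step could look roughly like the following, using MediaCodec's Surface input so the composited frames drawn by the rendering thread are encoded directly; the bitrate and frame-rate values are illustrative assumptions:

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.view.Surface

// An H.264 encoder exposes an input Surface; the rendering thread composites
// the player texture and each to-be-composited control texture onto that
// Surface with OpenGL ES, and the encoder output is muxed into the rendered video.
fun createEncoder(width: Int, height: Int): Pair<MediaCodec, Surface> {
    val format = MediaFormat.createVideoFormat(
        MediaFormat.MIMETYPE_VIDEO_AVC, width, height
    ).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000)
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    val inputSurface = codec.createInputSurface() // composited frames are drawn here
    codec.start()
    return codec to inputSurface
}
```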
Optionally, the video rule includes a video output location;
after compositing and rendering the video player and each target service control according to the video rule to generate a rendered video, the method further comprises:
outputting the rendered video to the video output location.
Wherein outputting the rendered video to the video output location comprises:
outputting the rendered video to the first terminal; or
outputting the rendered video to a second terminal.
In practical application, the output location of the generated rendered video differs according to the user's viewing requirements. For example, a user who wants to save the video locally for later viewing can store the rendered video in the storage device of the first terminal; a user may instead wish to cast the video to another terminal device for display, such as a television or another user's terminal. The output location is contained in the video rule, that is, the video rule carries the video output location of the rendered video, and after the rendered video is generated it is output to the set video output location.
Specifically, the video output location may be the first terminal or a second terminal. When the output location is the first terminal, the video is displayed on the screen of the first terminal or stored in its storage device; when the output location is the second terminal, the rendered video is output to the second terminal by screen casting, stream pushing, or the like, as sketched below. The second terminal may be a television, a tablet, a computer, or another smart terminal device.
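One way to model the output-location branching is sketched below; the sealed-class variants are assumptions, since the description only distinguishes first-terminal and second-terminal outputs:

```kotlin
sealed class VideoOutputLocation {
    object FirstTerminalScreen : VideoOutputLocation()
    data class FirstTerminalStorage(val filePath: String) : VideoOutputLocation()
    data class SecondTerminalPush(val linkAddress: String) : VideoOutputLocation()
    data class SecondTerminalCast(val host: String, val port: Int) : VideoOutputLocation()
}

fun route(location: VideoOutputLocation) = when (location) {
    is VideoOutputLocation.FirstTerminalScreen -> Unit  // render to the local display
    is VideoOutputLocation.FirstTerminalStorage -> Unit // mux to local storage
    is VideoOutputLocation.SecondTerminalPush -> Unit   // stream-push to linkAddress
    is VideoOutputLocation.SecondTerminalCast -> Unit   // send over the connection
}
```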
In a specific embodiment provided by the present application, outputting the rendered video to a second terminal includes:
acquiring a link address of the second terminal;
and outputting the rendered video to the second terminal through the link address.
In the embodiment provided by the application, the rendered video is pushed to the second terminal in a stream pushing mode, specifically, the link address of the second terminal is obtained, the second terminal at the moment can be a server, the rendered video is taken as a live video as an example, the first terminal can be a mobile phone used by a main broadcast, the second terminal can be a live server, the main broadcast uses the mobile phone to generate the live video, the link address of the live server is obtained, the live video is pushed to the live server through the link address, and the live server sends the live video to the audience client for watching the live broadcast.
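A sketch of the push flow follows. The patent names no concrete protocol or library, so StreamPusher is a hypothetical abstraction (an RTMP client would be a typical concrete choice), and the URL is illustrative:

```kotlin
interface StreamPusher {
    fun connect(url: String)
    fun push(packet: ByteArray, presentationTimeUs: Long)
    fun disconnect()
}

// Push encoded packets of the rendered video to the second terminal (here, a
// live server) through its link address.
fun pushToServer(
    pusher: StreamPusher,
    linkAddress: String, // e.g. "rtmp://live.example.com/app/streamKey" (illustrative)
    packets: Sequence<Pair<ByteArray, Long>>
) {
    pusher.connect(linkAddress)
    try {
        for ((data, ptsUs) in packets) pusher.push(data, ptsUs)
    } finally {
        pusher.disconnect()
    }
}
```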
In another embodiment provided herein, outputting the rendered video to a second terminal includes:
establishing a communication connection with the second terminal;
outputting the rendered video to the second terminal over the communication connection.
In this embodiment, the rendered video can be delivered to the second terminal by screen casting. Specifically, the first terminal establishes a communication connection with the second terminal, which may be a socket connection, a local-area-network connection, or the like, and the rendered video is output to the second terminal over that connection.
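The socket variant might be sketched as follows; the length-prefixed framing is an assumption, and real casting protocols (DLNA, Miracast, and the like) are considerably more involved:

```kotlin
import java.io.DataOutputStream
import java.net.Socket

// Encoded frames of the rendered video are length-prefixed and written over a
// LAN socket to the second terminal.
fun castOverSocket(host: String, port: Int, frames: Sequence<ByteArray>) {
    Socket(host, port).use { socket ->
        DataOutputStream(socket.getOutputStream()).use { out ->
            for (frame in frames) {
                out.writeInt(frame.size) // length prefix so the receiver can re-frame
                out.write(frame)
            }
            out.flush()
        }
    }
}
```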
Optionally, outputting the rendered video to a second terminal includes:
receiving an output rendering instruction for the second terminal;
determining, among the target service controls, second-terminal target service controls and first-terminal target service controls according to the output rendering instruction;
compositing a second-terminal rendered video from the second-terminal target service controls, and compositing a first-terminal rendered video from the first-terminal target service controls;
and outputting the second-terminal rendered video to the second terminal and the first-terminal rendered video to the first terminal.
When the rendered video is played on the second terminal, the content sent to the second terminal can be flexibly customized according to the user's requirements. The rendered video is split into a first-terminal rendered video and a second-terminal rendered video: the former is sent to and displayed on the first terminal, the latter on the second terminal, so that the content displayed on the two terminals can differ.
The target service controls include, but are not limited to, controls independent of the video, such as an audio control, a bullet-screen control, or an information display control (for example, for showing the video's introduction), as well as controls that process the video, such as a mirror-flip control or a variable-speed playback control.
For the same rendered video, the first-terminal and second-terminal target service controls may be the same or different. If they are the same, rendering can be performed only once and the result output to both terminals simultaneously; if they differ, rendering is performed separately for each set of controls and each result is output to its terminal, as sketched below.
For example, a user can cast the video and bullet screen from a mobile phone to a television, enter new bullet-screen comments on the phone, and have them displayed on the television after sending. As another example, after casting a video from the phone to the television, the user can click an interaction control in the phone's video, and the generated interaction animation is displayed only on the television screen rather than on the phone.
It should be noted that when the rendered video, or the second-terminal rendered video, is output to the second terminal, it can be flexibly customized according to the user's requirements, including but not limited to mirror flipping, variable-speed playback, screen rotation, and so on. For example, the uncustomized rendered video can be displayed on the first terminal while the copy sent to the second terminal is mirror-flipped, or the second-terminal rendered video can be played at 1.5x speed after being sent, which improves the user experience.
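A sketch of the per-terminal split follows; the data shapes and names are assumptions, and the render-once optimization described above appears only as a comment:

```kotlin
// The output rendering instruction selects which target service controls go
// into each terminal's composition.
data class Composition(val playerLayer: Int, val controlLayers: List<Int>)

fun splitCompositions(
    controlLayers: Map<String, Int>,    // control name -> its layer texture
    firstTerminalControls: Set<String>,
    secondTerminalControls: Set<String>,
    playerLayer: Int
): Pair<Composition, Composition> {
    val first = Composition(
        playerLayer,
        controlLayers.filterKeys { it in firstTerminalControls }.values.toList()
    )
    val second = Composition(
        playerLayer,
        controlLayers.filterKeys { it in secondTerminalControls }.values.toList()
    )
    // If the two control sets are equal, one rendering pass can serve both
    // terminals; otherwise each composition is rendered separately.
    return first to second
}
```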
According to the video processing method provided by the application, a video processing request is received, carrying a video to be processed, a video rule, and at least one target service corresponding to the video to be processed; allocation of a corresponding rendering area is requested for the video to be processed and for each target service; a video player corresponding to the video is generated in the rendering area corresponding to the video, and a target service control corresponding to each target service is generated in the rendering area corresponding to that service; and the video player and each target service control are composited and rendered according to the video rule to generate a rendered video. In this way, videos can be flexibly composited and rendered according to different requirements, improving the user experience.
Further, during subsequent processing of the rendered video, the content on the first terminal can be stream-pushed or screen-cast to the second terminal as required, so that the content displayed on the second terminal can differ from that displayed on the first terminal; this enables flexible customization of the pushed or cast content and further improves the user experience.
The video processing method provided by the present application is further explained with reference to fig. 2, which shows a schematic structural diagram of the method.
As shown in fig. 2, during video processing a user can interact with the player, the bullet screen, view controls, and the like through the first terminal: for example, clicking play, fast forward, rewind, or pause in the player; sending or liking bullet-screen comments; or operating a video interaction control, such as long-pressing the like button, which triggers the triple action of like, coin, and favorite when the press exceeds a preset threshold.
After the user interacts with the player, the bullet screen, and/or the view controls, the corresponding video content is generated and composited and rendered according to the video rules to produce a rendered video. The rendered video is then processed by encoding, synthesis, stream pushing, and the like to generate the final video, which can be stored on the first terminal or transmitted to the second terminal by stream pushing or screen casting.
The video processing method is described further below with reference to fig. 3, taking as an example a scenario in which an interactive video on a mobile phone is played on a television. Fig. 3 shows the processing flow of the video processing method applied to this scenario according to an embodiment of the present application; applied to the mobile phone, the method specifically includes the following steps:
step 302: receiving a video processing request, wherein the video processing request carries a video V to be processed, a video rule and a praise attention interaction corresponding to the video to be processed.
In a specific embodiment provided by the application, a user casts a video V on a television through a mobile phone, and in the process of watching the video V, if the video V is considered to be very wonderful, a follow-up attention operation is executed at the mobile phone end, and a corresponding animation is generated after the follow-up attention operation, at this time, a video processing request for the video V is carried with the video V to be processed, a video rule and follow-up attention interaction for the video to be processed.
Step 304: a rendering thread L is started in response to the video processing request.
In a specific embodiment provided by the present application, following the above example, in response to the video processing request, the rendering thread L is started, and the rendering thread L includes a rendering loop.
Step 306: requesting, from the rendering thread L, rendering areas corresponding to the video V to be processed and to the like-and-follow interaction.
In a specific embodiment provided by the application, following the above example, rendering areas corresponding to the video V to be processed and to the like-and-follow interaction are requested from the rendering thread L: the rendering area Q1 for the video V and the rendering area Q2 for the like-and-follow interaction.
Step 308: the rendering thread allocates corresponding rendering areas for the video to be processed and for the like-and-follow interaction.
In a specific embodiment provided by the application, following the above example, the rendering thread allocates the rendering area Q1 to the video V to be processed and the rendering area Q2 to the like-and-follow interaction.
Step 310: generating, in the rendering area corresponding to the video to be processed, a video player corresponding to the video, and generating, in the rendering area corresponding to the like-and-follow interaction, a like-and-follow interaction control.
In a specific embodiment provided by the application, following the above example, a video player corresponding to the video to be processed is generated in the rendering area Q1, and a like-and-follow interaction control is generated in the rendering area Q2; the video player plays the video V, and the animation corresponding to the like-and-follow operation is generated in the interaction control.
Step 312: compositing and rendering the video player and the like-and-follow interaction control according to the video rule to generate a rendered video.
In a specific embodiment provided by the application, following the above example, the video player and the like-and-follow interaction control are composited and rendered according to the video rule; the composited region contains the picture corresponding to the video V and the like-and-follow animation, and is then encoded to generate the composited rendered video.
Step 314: casting the rendered video to a television that has a communication connection with the mobile phone, for playback.
In a specific embodiment provided by the application, the composited rendered video is cast to a television that has established a communication connection with the mobile phone, so that the user performs the like-and-follow operation on the phone while the corresponding animation plays on the television. This improves the user's interactive experience.
According to the video processing method provided by the application, a video processing request is received, carrying a video to be processed, a video rule, and at least one target service corresponding to the video to be processed; allocation of a corresponding rendering area is requested for the video to be processed and for each target service; a video player corresponding to the video is generated in the rendering area corresponding to the video, and a target service control corresponding to each target service is generated in the rendering area corresponding to that service; and the video player and each target service control are composited and rendered according to the video rule to generate a rendered video. In this way, videos can be flexibly composited and rendered according to different requirements, improving the user experience.
Further, during subsequent processing of the rendered video, the content on the first terminal can be stream-pushed or screen-cast to the second terminal as required, so that the content displayed on the second terminal can differ from that displayed on the first terminal; this enables flexible customization of the pushed or cast content and further improves the user experience.
Corresponding to the above video processing method embodiment, the present application further provides an embodiment of a video processing apparatus, and fig. 4 shows a schematic structural diagram of a video processing apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus is applied to a first terminal, and includes:
a receiving module 402 configured to receive a video processing request, where the video processing request carries a video to be processed, a video rule, and at least one target service corresponding to the video to be processed;
an allocating module 404 configured to request allocation of a corresponding rendering area for the video to be processed and for each target service;
a generating module 406 configured to generate a video player corresponding to the video to be processed in the rendering area corresponding to the video, and to generate a target service control corresponding to each target service in the rendering area corresponding to that service;
and a composite rendering module 408 configured to composite and render the video player and each target service control according to the video rules to generate a rendered video.
Optionally, the allocating module 404 is further configured to:
start a rendering thread in response to the video processing request;
request, from the rendering thread, a rendering area corresponding to the video to be processed and to each target service according to the video to be processed and each target service;
and allocate, by the rendering thread, a corresponding rendering area for the video to be processed and each target service.
Optionally, the allocating module 404 is further configured to:
allocate a corresponding drawing surface texture for the video to be processed and each target service;
create, on each drawing surface texture, a drawing surface corresponding to the video to be processed and each target service;
and allocate the corresponding drawing surface to the video to be processed and each target service.
Optionally, the composite rendering module 408 is further configured to:
determine, among the target service controls, the target service controls to be composited according to the video rule;
and encode the video player and the target service controls to be composited to generate a rendered video.
Optionally, the video rule includes a video output location;
the apparatus further comprises:
an output module configured to output the rendered video to the video output location.
Optionally, the output module is further configured to:
output the rendered video to the first terminal; or
output the rendered video to a second terminal.
Optionally, the output module is further configured to:
acquire a link address of the second terminal;
and output the rendered video to the second terminal through the link address.
Optionally, the output module is further configured to:
establish a communication connection with the second terminal;
and output the rendered video to the second terminal over the communication connection.
Optionally, the output module is further configured to:
receive an output rendering instruction for the second terminal;
determine, among the target service controls, second-terminal target service controls and first-terminal target service controls according to the output rendering instruction;
composite a second-terminal rendered video from the second-terminal target service controls, and composite a first-terminal rendered video from the first-terminal target service controls;
and output the second-terminal rendered video to the second terminal and the first-terminal rendered video to the first terminal.
The video processing apparatus receives a video processing request carrying a video to be processed, a video rule, and at least one target service corresponding to the video to be processed; requests allocation of a corresponding rendering area for the video to be processed and for each target service; generates a video player corresponding to the video in the rendering area corresponding to the video, and a target service control corresponding to each target service in the rendering area corresponding to that service; and composites and renders the video player and each target service control according to the video rule to generate a rendered video. In this way, videos can be flexibly composited and rendered according to different requirements, improving the user experience.
Further, during subsequent processing of the rendered video, the content on the first terminal can be stream-pushed or screen-cast to the second terminal as required, so that the content displayed on the second terminal can differ from that displayed on the first terminal; this enables flexible customization of the pushed or cast content and further improves the user experience.
The above is an illustrative scheme of the video processing apparatus of this embodiment. It should be noted that the technical solution of the video processing apparatus belongs to the same concept as the technical solution of the video processing method; for details not described in the apparatus solution, reference may be made to the description of the method solution.
Fig. 5 illustrates a block diagram of a computing device 500 provided according to an embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530, and database 550 is used to store data.
Computing device 500 also includes an access device 540 that enables computing device 500 to communicate via one or more networks 560. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 540 may include one or more of any type of network interface, wired or wireless, e.g., a network interface card (NIC), such as an IEEE 802.11 wireless local area network (WLAN) interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 500 and other components not shown in FIG. 5 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 5 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 500 may also be a mobile or stationary server.
Wherein the steps of the video processing method are implemented by processor 520 when executing the computer instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video processing method.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the video processing method described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the above-mentioned video processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the above-mentioned video processing method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content of the computer-readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above method embodiments are described as a series of combinations of actions, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in explaining the application. The embodiments do not describe all details exhaustively, nor do they limit the invention to the specific implementations described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, thereby enabling others skilled in the art to understand and use the application. The application is limited only by the claims and their full scope and equivalents.

Claims (12)

1. A video processing method applied to a first terminal, comprising:
receiving a video processing request, wherein the video processing request carries a video to be processed, a video rule and at least one target service corresponding to the video to be processed;
requesting allocation of a corresponding rendering area for the video to be processed and for each target service;
generating a video player corresponding to the video to be processed in the rendering area corresponding to the video to be processed, and generating a target service control corresponding to each target service in the rendering area corresponding to that target service;
and compositing and rendering the video player and each target service control according to the video rule to generate a rendered video.
2. The video processing method of claim 1, wherein requesting allocation of a corresponding rendering area for the video to be processed and each of the target services comprises:
starting a rendering thread in response to the video processing request;
requesting, from the rendering thread, a rendering area corresponding to the video to be processed and to each target service according to the video to be processed and each target service;
and allocating, by the rendering thread, a corresponding rendering area for the video to be processed and each target service.
3. The video processing method of claim 2, wherein the rendering thread allocates a corresponding rendering area for the video to be processed and each of the target services, comprising:
allocating a corresponding drawing surface texture for the video to be processed and each target service;
creating, on each drawing surface texture, a drawing surface corresponding to the video to be processed and each target service;
and allocating the corresponding drawing surface to the video to be processed and each target service.
4. The video processing method of claim 1, wherein compositing and rendering the video player and each of the target service controls according to the video rule to generate a rendered video comprises:
determining, among the target service controls, the target service controls to be composited according to the video rule;
and encoding the video player and the target service controls to be composited to generate a rendered video.
5. The video processing method of claim 1, wherein the video rule includes a video output location;
after compositing and rendering the video player and each of the target service controls according to the video rule to generate a rendered video, the method further comprises:
outputting the rendered video to the video output location.
6. The video processing method of claim 5, wherein outputting the rendered video to the video output location comprises:
outputting the rendered video to the first terminal; or
outputting the rendered video to a second terminal.
7. The video processing method of claim 6, wherein outputting the rendered video to a second terminal comprises:
acquiring a link address of the second terminal;
and outputting the rendered video to the second terminal through the link address.
8. The video processing method of claim 6, wherein outputting the rendered video to a second terminal comprises:
establishing a communication connection with the second terminal;
outputting the rendered video to the second terminal over the communication connection.
9. The video processing method of claim 6, wherein outputting the rendered video to a second terminal comprises:
receiving an output rendering instruction for the second terminal;
determining, among the target service controls, second-terminal target service controls and first-terminal target service controls according to the output rendering instruction;
compositing a second-terminal rendered video from the second-terminal target service controls, and compositing a first-terminal rendered video from the first-terminal target service controls;
and outputting the second-terminal rendered video to the second terminal and the first-terminal rendered video to the first terminal.
10. A video processing apparatus applied to a first terminal, comprising:
a receiving module configured to receive a video processing request, wherein the video processing request carries a video to be processed, a video rule, and at least one target service corresponding to the video to be processed;
an allocation module configured to apply for allocation of corresponding rendering areas for the video to be processed and each target service respectively;
a generation module configured to generate a video player corresponding to the video to be processed in the rendering area corresponding to the video to be processed, and to generate a target service control corresponding to each target service in the rendering area corresponding to that target service;
and a synthesis and rendering module configured to synthesize and render the video player and each target service control according to the video rule to generate a rendered video.
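A structural sketch of the claim 10 apparatus as Kotlin interfaces; the module names mirror the claims, while VideoProcessingRequest and the other types are the hypothetical shapes used in the earlier sketches.

```kotlin
data class VideoProcessingRequest(
    val videoId: String,
    val targetServices: List<String>,
    val videoRule: (List<ServiceControl>) -> List<ServiceControl>
)

interface ReceivingModule {
    fun receive(): VideoProcessingRequest
}

interface AllocationModule {
    fun allocate(request: VideoProcessingRequest): Map<String, RenderArea>
}

interface GenerationModule {
    fun generate(
        request: VideoProcessingRequest,
        areas: Map<String, RenderArea>
    ): Pair<VideoPlayer, List<ServiceControl>>
}

interface SynthesisRenderingModule {
    fun synthesizeAndRender(player: VideoPlayer, controls: List<ServiceControl>): ByteArray
}
```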
11. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer instructions.
12. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 9.
CN202110338953.9A 2021-03-30 2021-03-30 Video processing method and device Pending CN113099309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110338953.9A CN113099309A (en) 2021-03-30 2021-03-30 Video processing method and device


Publications (1)

Publication Number Publication Date
CN113099309A (en) 2021-07-09

Family

ID=76671117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110338953.9A Pending CN113099309A (en) 2021-03-30 2021-03-30 Video processing method and device

Country Status (1)

Country Link
CN (1) CN113099309A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449334A (en) * 2022-01-29 2022-05-06 上海哔哩哔哩科技有限公司 Video recording method, video recording device, electronic equipment and computer storage medium
WO2023030306A1 (en) * 2021-08-31 2023-03-09 维沃移动通信有限公司 Method and apparatus for video editing, and electronic device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101911125A (en) * 2008-01-18 2010-12-08 高通股份有限公司 Multi-buffer support for off-screen surfaces in a graphics processing system
CN103247068A (en) * 2013-04-03 2013-08-14 上海晨思电子科技有限公司 Rendering method and device
CN106406710A (en) * 2016-09-30 2017-02-15 维沃移动通信有限公司 Screen recording method and mobile terminal
CN106550277A (en) * 2015-09-23 2017-03-29 天津三星电子有限公司 A kind of method and display device of loading barrage
US10021458B1 (en) * 2015-06-26 2018-07-10 Amazon Technologies, Inc. Electronic commerce functionality in video overlays
CN108521584A (en) * 2018-04-20 2018-09-11 广州虎牙信息科技有限公司 Interactive information processing method, device, main broadcaster's side apparatus and medium
CN109963166A (en) * 2017-12-22 2019-07-02 上海全土豆文化传播有限公司 Online Video edit methods and device
CN110418155A (en) * 2019-08-08 2019-11-05 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus, computer readable storage medium and computer equipment
CN112363791A (en) * 2020-11-17 2021-02-12 深圳康佳电子科技有限公司 Screen recording method and device, storage medium and terminal equipment
CN112433689A (en) * 2020-11-11 2021-03-02 深圳市先智物联科技有限公司 Data transmission method and device for same-screen device, same-screen device and medium
CN112492401A (en) * 2019-09-11 2021-03-12 腾讯科技(深圳)有限公司 Video-based interaction method and device, computer-readable medium and electronic equipment


Similar Documents

Publication Title
US11037600B2 (en) Video processing method and apparatus, terminal and medium
WO2022048097A1 (en) Single-frame picture real-time rendering method based on multiple graphics cards
CN111970532B (en) Video playing method, device and equipment
CN115830190A (en) Animation processing method and device
CN111491174A (en) Virtual gift acquisition and display method, device, equipment and storage medium
CN112019907A (en) Live broadcast picture distribution method, computer equipment and readable storage medium
CN112637622A (en) Live broadcasting singing method, device, equipment and medium
CN112637670B (en) Video generation method and device
CN107040808B (en) Method and device for processing popup picture in video playing
WO2023104102A1 (en) Live broadcasting comment presentation method and apparatus, and device, program product and medium
US20210368214A1 (en) Method and mobile terminal for processing data
CN111899322A (en) Video processing method, animation rendering SDK, device and computer storage medium
CN113099309A (en) Video processing method and device
CN110784674B (en) Video processing method, device, terminal and storage medium
CN112950757B (en) Image rendering method and device
CN112291590A (en) Video processing method and device
KR20180038256A (en) Method, and system for compensating delay of virtural reality stream
KR20210095160A (en) A technology configured to provide a user interface through the representation of two-dimensional content through three-dimensional display objects rendered in a navigable virtual space
CN113141537A (en) Video frame insertion method, device, storage medium and terminal
CN110012336A (en) Picture configuration method, terminal and the device at interface is broadcast live
CN112019906A (en) Live broadcast method, computer equipment and readable storage medium
CN114422816A (en) Live video processing method and device, electronic equipment and storage medium
WO2024104333A1 (en) Cast picture processing method and apparatus, electronic device, and storage medium
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination