CN112153408A - Live broadcast rendering method and device, electronic equipment and storage medium - Google Patents
Info
- Publication number
- CN112153408A (application CN202011045066.4A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- transparent grid
- grid area
- video frame
- transparent
- Prior art date: 2020-09-28
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/503—Blending, e.g. for anti-aliasing
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/62—Semi-transparency
Abstract
The application provides a live broadcast rendering method, a live broadcast rendering device, an electronic device, and a storage medium, relating to the field of Internet technology. A transparent grid area is determined in a live video frame according to the transparency parameter corresponding to the frame, and a rendering order for each basic constituent unit is determined according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area. In this way, when the live video frame is rendered, all pixel points in the transparent grid area can be rendered in sequence according to the rendering order determined by the depth space parameters, so that no pixel points are discarded and the rendering effect is improved.
Description
Technical Field
The present application relates to the field of Internet technology, and in particular to a live broadcast rendering method and device, electronic equipment, and a storage medium.
Background
In scenarios such as web live broadcasting, tools such as OpenGL (Open Graphics Library) can be used to render special effects into live video frames, enriching the live content and improving the live broadcast effect.
However, when a rendered live video frame contains objects with transparency, abnormal regions may appear in the rendered frame, resulting in a poor rendering effect.
Disclosure of Invention
An object of the present application is to provide a live broadcast rendering method, apparatus, electronic device, and storage medium, which can improve rendering effect.
To achieve the above object, the technical solutions adopted by the present application are as follows:
in a first aspect, the present application provides a live rendering method, including:
determining a transparent grid (mesh) area in a live video frame according to a transparency parameter corresponding to the live video frame;
determining a rendering order corresponding to each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area;
and rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
In a second aspect, the present application provides a live broadcast rendering apparatus, the apparatus comprising:
the processing module is used for determining a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame;
the processing module is further configured to determine, according to the depth space parameters corresponding to all the basic constituent units in the transparent grid region, a rendering order corresponding to each of the basic constituent units;
and the rendering module is used for rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
In a third aspect, the present application provides an electronic device, including a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU);
the CPU is used for determining a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame;
the CPU is further used for determining a rendering order corresponding to each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area;
and the GPU is used for rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
In a fourth aspect, the present application provides an electronic device comprising a memory for storing one or more programs; a processor; the one or more programs, when executed by the processor, implement the live rendering method described above.
In a fifth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the live rendering method described above.
According to the live broadcast rendering method and device, the electronic equipment, and the storage medium provided by the application, a transparent grid area is determined in a live video frame according to the transparency parameter corresponding to the frame, and a rendering order for each basic constituent unit is determined according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area. Therefore, when rendering the live video frame, all pixel points in the transparent grid area can be rendered in sequence according to the rendering order determined by the depth space parameters, no pixel points are discarded, and the rendering effect is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered as limiting the scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic view illustrating an interactive scene of a live broadcast system provided in the present application;
FIG. 2 is a schematic view of a rendered video frame;
FIG. 3 is a schematic structural block diagram of an electronic device provided in the present application;
FIG. 4 illustrates an exemplary flow chart of a live rendering method provided herein;
FIG. 5 is a schematic diagram illustrating a video frame rendered by the live rendering method provided in the present application;
fig. 6 shows a schematic structural block diagram of a live broadcast rendering apparatus provided in the present application.
In the figure: 100-an electronic device; 101-a memory; 102-a processor; 103-a communication interface; 300-a live rendering device; 301-a processing module; 302-rendering module.
Detailed Description
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the present application will be clearly and completely described below with reference to the accompanying drawings in some embodiments of the present application, and it is obvious that the described embodiments are some, but not all embodiments of the present application. The components of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on a part of the embodiments in the present application without any creative effort belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic view of an interactive scene of a live broadcast system provided in the present application, which in some embodiments may be an Internet live broadcast platform. The live broadcast system may comprise a server, a live broadcast initiating terminal, and a live broadcast receiving terminal; the server can communicate with the live broadcast receiving terminal and the live broadcast initiating terminal respectively and can provide live broadcast services for both. For example, the anchor may provide a live stream online in real time to viewers through the live initiating terminal and transmit the live stream to the server, and the live receiving terminal may pull the live stream from the server for online viewing or playback.
In some implementations, the live receiver and the live initiator may be used interchangeably. For example, an anchor at the live initiator may use it to provide live video services to viewers, or act as a viewer to watch live video provided by other anchors. Likewise, a viewer at the live receiver may use it to watch live video provided by an anchor they follow, or act as an anchor to provide live video services to other viewers.
In some embodiments, the live receiver and the live initiator may include, but are not limited to, a mobile device, a tablet computer, a laptop computer, or any combination of two or more thereof. In some embodiments, the mobile device may include, but is not limited to, a wearable device, a smart mobile device, an augmented reality device, and the like, or any combination thereof. In some embodiments, the smart mobile device may include, but is not limited to, a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, or a point of sale (POS) device, or the like, or any combination thereof.
In addition, in some possible embodiments, there may be zero, one, or more live receivers and live initiators accessing the server; only one of each is shown in fig. 1. The live receiver and the live initiator may be installed with Internet products for providing Internet live broadcast services, for example, applications (APPs), Web pages, applets, and the like that are used in a computer or smartphone and are related to Internet live broadcast services.
In some embodiments, the server may be a single physical server or a server group consisting of a plurality of physical servers for performing different data processing functions. The set of servers can be centralized or distributed (e.g., the servers can be a distributed system). In some possible embodiments, such as where the server employs a single physical server, the physical server may be assigned different logical server components based on different live service functions.
It will be appreciated that the live system shown in fig. 1 is only one possible example, and that in other possible embodiments of the present application, the live system may also include only some of the components shown in fig. 1 or may also include other components.
In a live video scene such as that shown in fig. 1, special effects may be rendered into the live video frames captured by the live initiating end; for example, bubble or balloon effects may be added to the live video frame to enrich the live content.
In a scene where, for example, a bubble effect is rendered into a live video frame, objects with transparency may be present in the frame, such as the balloon rendered in fig. 2.
However, when a tool such as OpenGL is used to render an object with transparency into a live video frame, a plurality of fragments may fall on the same pixel point; during OpenGL rendering, each pixel point generally retains only the fragment with the smallest depth value and discards the others, so an abnormal area as shown in fig. 2 may be produced when rendering the area where the object with transparency is located, resulting in a poor rendering effect.
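To make the cause of this artifact concrete, the following minimal sketch shows the default OpenGL state under which it arises; this illustrates standard OpenGL behavior and is not code taken from the patent:

```cpp
// Minimal sketch (assumes an OpenGL context and loader are already set up).
// With this default state, only the nearest fragment at each pixel survives.
glEnable(GL_DEPTH_TEST);   // enable per-pixel depth comparison
glDepthFunc(GL_LESS);      // keep only the fragment with the smallest depth
glDepthMask(GL_TRUE);      // transparent geometry also writes depth values
// If a transparent object is drawn before the content behind it, that
// content fails the GL_LESS test at the shared pixel points and is
// discarded instead of showing through, producing the abnormal area.
```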
Therefore, to address these defects of the above rendering scheme, a possible implementation provided by the present application is as follows: a transparent grid area is determined in a live video frame according to the transparency parameter corresponding to the frame, and a rendering order for each basic constituent unit is determined according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area. In this way, when rendering the live video frame, all pixel points in the transparent grid area can be rendered in sequence according to the rendering order determined by the depth space parameters, improving the rendering effect.
Referring to fig. 3, fig. 3 shows a schematic block diagram of an electronic device 100 provided in the present application, and in some embodiments, the electronic device 100 may include a memory 101, a processor 102, and a communication interface 103, and the memory 101, the processor 102, and the communication interface 103 are electrically connected to each other directly or indirectly to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 101 may be configured to store software programs and modules, such as program instructions/modules corresponding to the live broadcast rendering apparatus provided in the present application, and the processor 102 executes the software programs and modules stored in the memory 101 to execute various functional applications and data processing, thereby executing the steps of the live broadcast rendering method provided in the present application. The communication interface 103 may be used for communicating signaling or data with other node devices.
The Memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in fig. 3 is merely illustrative and that electronic device 100 may include more or fewer components than shown in fig. 3 or have a different configuration than shown in fig. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof.
In addition, in some embodiments, the electronic device 100 shown in fig. 3 may serve as the live broadcast initiating terminal in fig. 1: for example, the anchor at the live broadcast initiating terminal selects a rendering effect, and the initiating terminal renders the captured live video frames and sends them to the server. In other embodiments, the electronic device 100 may serve as the server in fig. 1: for example, the anchor at a live broadcast initiating end or the viewers at a live broadcast receiving end send a rendering request to the server by operating their respective terminals, and the server responds to the received rendering request by executing the live broadcast rendering method provided by the present application and rendering the live video frames in the live stream. Moreover, the electronic device 100 may also serve as the live broadcast receiving end in fig. 1, which responds to a rendering request input by a viewer by executing the live broadcast rendering method provided by the present application, rendering the live video frames in the live stream, and displaying the rendered frames.
Taking the electronic device 100 shown in fig. 3 as an exemplary execution subject, the live rendering method provided by the present application is described below.
Referring to fig. 4, fig. 4 shows an exemplary flowchart of a live rendering method provided in the present application, and as a possible implementation, the live rendering method may include the following steps:
Step 201: determining a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame.
Step 203: determining a rendering order corresponding to each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area.
And step 205: rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
In some embodiments, each pixel point of a live video frame may include parameters for four channels, RGBA, namely the Red, Green, Blue, and Alpha image processing channels.
The parameter of the alpha channel can be used to represent the transparency parameter of each pixel point in the live video frame. For example, in some embodiments, the alpha channel parameter indicates the opacity of the corresponding pixel point: when the alpha parameter is 0%, the pixel point is fully transparent; when the alpha parameter is 100%, the pixel point is opaque.
On this basis, when executing the live broadcast rendering method provided by the present application, the electronic device can determine a transparent grid area in the live video frame according to the transparency parameter corresponding to each pixel point. For example, according to the alpha channel parameter of each pixel point, the electronic device may determine all pixel points whose alpha parameter is 100% as the non-transparent grid region, and all pixel points whose alpha parameter is less than 100% as transparent grid areas.
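A minimal sketch of this classification step is given below; the `Pixel` struct and the function name are illustrative assumptions (not from the patent), and an 8-bit alpha value of 255 is taken to mean 100% opacity:

```cpp
#include <cstdint>
#include <vector>

// Illustrative RGBA pixel point; alpha 255 corresponds to 100% opacity.
struct Pixel { std::uint8_t r, g, b, a; };

// Mark every pixel point whose alpha parameter is below 100% as belonging
// to a transparent grid area; fully opaque pixel points form the
// non-transparent grid region.
std::vector<bool> markTransparent(const std::vector<Pixel>& frame) {
    std::vector<bool> transparent(frame.size());
    for (std::size_t i = 0; i < frame.size(); ++i)
        transparent[i] = frame[i].a < 255;
    return transparent;
}
```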
Of course, it can be understood that, in order to improve the granularity of this division, the electronic device may determine a plurality of transparent grid areas based on the alpha channel parameters of the individual pixel points, such that within each transparent grid area the alpha parameters of all pixel points are the same or fall within the same set parameter interval.
After the electronic device determines a transparent mesh region in the live video frame, it may determine a rendering order for each basic constituent unit of the region (i.e., each triangle area making up the transparent mesh region) according to the unit's depth space parameter in a depth coordinate system; for example, the depth space parameter may be the depth value of each basic constituent unit in the depth coordinate system.
It is understood that, in the live video frame, the depth value of each pixel point can be used to indicate the distance between the pixel point and the camera in the depth coordinate system.
Then, when rendering the live video frame, the electronic device may first render the non-transparent grid region and then render the transparent grid area. When rendering the transparent grid area, all the basic constituent units in it may be rendered in sequence according to the rendering order corresponding to each basic constituent unit.
It should be noted that, in some embodiments, the electronic device provided in the present application may be configured with a CPU and a GPU. Steps 201 and 203 may be executed by the CPU; after the CPU obtains the rendering order corresponding to each basic constituent unit in the transparent grid area, it sends that rendering order to the GPU. The GPU then performs step 205 according to the received rendering order, combined with the blending mode selected by the user (such as normal, screen, or overlay), and renders the non-transparent grid region and all the basic constituent units in the transparent grid area in sequence, thereby obtaining a rendered video frame as illustrated in fig. 5. Referring to fig. 5, a video frame rendered according to the live broadcast rendering method provided by the present application overcomes the rendering anomaly shown in fig. 2.
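As a hedged illustration of the GPU-side pass, the sketch below draws the pre-sorted transparent units with blending enabled and depth writes disabled; `BlendMode` and `drawTransparentPass` are assumed names, while the `gl*` calls are standard OpenGL (a context and function loader are assumed to be available):

```cpp
// Assumed helper types/names; only the gl* calls are real OpenGL API.
enum class BlendMode { Normal, Screen };

void drawTransparentPass(BlendMode mode, GLsizei indexCount) {
    glEnable(GL_BLEND);
    glDepthMask(GL_FALSE);  // test against opaque depth, but never let one
                            // transparent layer's depth discard another's
    if (mode == BlendMode::Normal)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    else  // a common "screen" blend: src + dst - src * dst
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
    // The index buffer was reordered back-to-front by the CPU beforehand.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```

Disabling depth writes while keeping the depth test is what allows every sorted transparent unit to be blended instead of discarded, which is the behavior the rendering order computed in steps 201 and 203 relies on.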
Based on the above design, the live broadcast rendering method provided by the present application determines a transparent grid area in a live video frame according to the transparency parameter corresponding to the frame, and determines a rendering order for each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area. Therefore, when rendering the live video frame, all pixel points in the transparent grid area can be rendered in sequence according to the rendering order determined by the depth space parameters, no pixel points are discarded, and the rendering effect is improved.
In some embodiments, in order to prevent pixel points that are farther away in the depth space from being blocked, after rendering, by pixel points that are closer, the electronic device may, while performing step 203, calculate a reference depth value for each basic constituent unit in the transparent mesh region; the reference depth value indicates how far the corresponding unit lies in the depth space. For example, following the example above, the reference depth value of each basic constituent unit may indicate the distance between that unit and the camera in the depth coordinate system.
In some embodiments, taking any one of all basic constituent units included in the transparent mesh area as an example of a target basic constituent unit, for the target basic constituent unit, the electronic device may calculate an average depth value of all vertices included in the target basic constituent unit, and use the average depth value as a reference depth value corresponding to the target basic constituent unit.
For example, continuing the example in which the basic constituent unit is a triangle area, the electronic device may average the depth values corresponding to the three vertices of the triangle area and use this average as the reference depth value corresponding to the triangle area.
Of course, calculating the average depth value of all vertices of the target basic constituent unit is only one way to obtain its reference depth value; in some other possible embodiments of the present application, the electronic device may calculate it in other ways. For example, the electronic device may traverse the depth values corresponding to all pixel points contained in the target basic constituent unit and use their average as the reference depth value. The present application does not limit the manner of calculating the reference depth value of the target basic constituent unit, as long as it can be calculated.
Then, based on the calculated reference depth value of each basic constituent unit, the electronic device sorts all the basic constituent units of the transparent grid area by their reference depth values and determines the rendering order of each unit in descending order of reference depth value.
That is to say, in some embodiments, when rendering the transparent grid area, the electronic device may render the basic constituent units with larger reference depth values before those with smaller reference depth values, thereby preventing pixel points that are farther away in the depth space from being blocked by pixel points that are closer.
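The per-unit ordering described above can be sketched as follows; `Vec3`, `Triangle`, and the function names are illustrative assumptions, and the z coordinate is assumed to hold the depth value in the depth coordinate system:

```cpp
#include <algorithm>
#include <vector>

// Illustrative types: a vertex position and a triangular basic constituent
// unit; z is assumed to be the depth value in the depth coordinate system.
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };

// Reference depth of one unit: the average depth of its three vertices.
float referenceDepth(const Triangle& t) {
    return (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f;
}

// Order the units of one transparent grid area far-to-near (descending
// reference depth); this is the rendering order handed to the GPU.
void sortBackToFront(std::vector<Triangle>& units) {
    std::sort(units.begin(), units.end(),
              [](const Triangle& a, const Triangle& b) {
                  return referenceDepth(a) > referenceDepth(b);
              });
}
```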
In addition, with reference to the above example, in order to improve the granularity with which transparent grid areas are divided, after the electronic device performs step 201 there may be a plurality of transparent grid areas in the live video frame; that is, by performing step 201 the electronic device may divide the live video frame into a plurality of transparent mesh regions as shown in fig. 5.
In some embodiments, following the exemplary manner above, the electronic device may divide the plurality of transparent grid regions by placing pixel points with the same alpha channel parameter into the same transparent grid region, or by placing pixel points whose alpha channel parameters fall within the same set parameter interval into the same transparent grid region; the present application does not limit how a given transparent grid area is divided.
On this basis, for the plurality of transparent grid areas divided by the electronic device, the electronic device may determine the rendering order of each transparent grid area according to the depth space parameter corresponding to each area; then, when rendering all the transparent grid areas in the live video frame, it may render each area in turn according to its rendering order. In this way, even if different transparent grid areas overlap, the pixel points in the overlapping parts are not discarded unrendered, and the rendering effect is improved.
In some embodiments, in order to avoid mutual occlusion between transparent grid regions that have overlapping areas, the electronic device may determine the rendering order of each transparent grid region based on its depth distance in the depth space, and then render the regions sequentially in that order.
For example, the electronic device may determine the rendering order of each transparent grid region according to the size of the bounding box corresponding to that region in the depth space.
In the above-mentioned example, where any one of the plurality of transparent grid regions is taken as the target transparent grid region, the bounding box of the target transparent grid region in the depth space may be the smallest box (cuboid) that can enclose the target transparent grid region in the depth space, and the size (e.g., the volume) of this box may serve as the size of the bounding box corresponding to the target transparent grid region in the depth space.
Exemplarily, when determining the rendering order of each transparent grid region according to the size of its bounding box, the electronic device may determine the order of the regions in descending order of bounding-box size in the depth space. In this way, when rendering according to the rendering order corresponding to each transparent grid area, the electronic device renders the transparent grid areas with farther depth values before those with closer depth values, which prevents deeper transparent grid areas from being blocked by nearer ones and improves the rendering effect.
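Under the assumption that the "size" of a bounding box means its volume, the region-level ordering might be sketched as below, reusing the illustrative `Vec3` and `Triangle` types from the previous sketch together with an assumed `Region` type:

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

// Assumed container for one transparent grid area.
struct Region { std::vector<Triangle> units; };

// Volume of the axis-aligned bounding box enclosing a region's vertices.
float bboxVolume(const Region& r) {
    float lo[3] = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    float hi[3] = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (const Triangle& t : r.units)
        for (const Vec3& p : t.v) {
            const float c[3] = { p.x, p.y, p.z };
            for (int i = 0; i < 3; ++i) {
                lo[i] = std::min(lo[i], c[i]);
                hi[i] = std::max(hi[i], c[i]);
            }
        }
    return (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2]);
}

// Render transparent grid areas with larger bounding boxes first.
void sortRegionsByBBox(std::vector<Region>& regions) {
    std::sort(regions.begin(), regions.end(),
              [](const Region& a, const Region& b) {
                  return bboxVolume(a) > bboxVolume(b);
              });
}
```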
It should be noted that, for a plurality of transparent grid regions contained in a live video frame, while executing the live rendering method provided by the present application, the CPU of the electronic device may first determine the rendering order of all the transparent grid regions in the frame, then determine the rendering order of all the basic constituent units within each region, and send all of these orders to the GPU. When rendering all the transparent grid areas in the live video frame, the GPU may render each transparent grid area in turn according to its rendering order; and when rendering any one transparent grid area, it may render all the basic constituent units in that area in sequence according to the rendering order of each unit.
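Combining the two sorts, the CPU-side orchestration described in this paragraph might look like the following sketch (reusing the assumed helpers from the earlier sketches):

```cpp
// Assumed CPU-side step: compute the complete rendering order before the
// draw lists are handed to the GPU.
void buildRenderingOrder(std::vector<Region>& regions) {
    sortRegionsByBBox(regions);      // region-level order (bounding-box size)
    for (Region& r : regions)
        sortBackToFront(r.units);    // unit-level order (reference depth)
    // The GPU then draws the non-transparent grid region first, followed by
    // each transparent grid area and its units in exactly this order.
}
```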
In addition, based on the same inventive concept as the live broadcast rendering method provided in the present application, please refer to fig. 6, and fig. 6 shows a schematic structural block diagram of a live broadcast rendering apparatus 300 provided in the present application, where the live broadcast rendering apparatus 300 may include a processing module 301 and a rendering module 302.
The processing module 301 is configured to determine a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame;
the processing module 301 is further configured to determine, according to the depth space parameters corresponding to all the basic constituent units in the transparent grid region, a rendering order corresponding to each of the basic constituent units;
and a rendering module 302, configured to render all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
Optionally, as a possible implementation manner, when determining, according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area, the rendering order corresponding to each basic constituent unit, the processing module 301 is specifically configured to:
calculating a reference depth value corresponding to each basic constituent unit in the transparent grid area;
and determining the rendering order corresponding to each basic constituent unit in descending order of the reference depth values corresponding to all the basic constituent units.
Optionally, as a possible implementation manner, when calculating the reference depth value corresponding to each basic constituent unit in the transparent mesh area, the processing module 301 is specifically configured to:
for the target basic constituent unit in the transparent grid area, calculate the average depth value of all vertices included in the target basic constituent unit, and take the average depth value as the reference depth value corresponding to the target basic constituent unit; wherein the target basic constituent unit is any one of all the basic constituent units contained in the transparent grid area.
Optionally, as a possible implementation, there are a plurality of transparent grid areas;
after determining the transparent grid area in the live video frame according to the transparency parameter corresponding to the live video frame, the processing module 301 is further configured to:
determining the rendering order of each transparent grid area according to the depth space parameter corresponding to each transparent grid area;
the rendering module 302 is further configured to sequentially render each transparent grid area according to the respective rendering order of each transparent grid area.
Optionally, as a possible implementation manner, when determining the rendering order of each transparent grid region according to the depth space parameter corresponding to each transparent grid region, the processing module 301 is specifically configured to:
and determining the rendering sequence of each transparent grid area according to the size of the bounding box corresponding to each transparent grid area in the depth space.
Optionally, as a possible implementation manner, when determining the rendering order of each transparent grid region according to the size of the bounding box corresponding to each transparent grid region in the depth space, the processing module 301 is specifically configured to:
and determining the rendering order of each transparent grid area in descending order of the size of the bounding box corresponding to each transparent grid area in the depth space.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to some embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in some embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to some embodiments of the present application. And the aforementioned storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic or optical disk, etc. for storing program codes.
The above description is only a few examples of the present application and is not intended to limit the present application, and those skilled in the art will appreciate that various modifications and variations can be made in the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (10)
1. A live rendering method, the method comprising:
determining a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame;
determining a rendering order corresponding to each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area;
and rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
2. The method of claim 1, wherein the determining a rendering order corresponding to each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area comprises:
calculating a reference depth value corresponding to each basic constituent unit in the transparent grid area;
and determining the rendering order corresponding to each basic constituent unit in descending order of the reference depth values corresponding to all the basic constituent units.
3. The method of claim 2, wherein the calculating a reference depth value corresponding to each basic constituent unit in the transparent mesh area comprises:
for a target basic constituent unit in the transparent grid area, calculating an average depth value of all vertices included in the target basic constituent unit, and taking the average depth value as a reference depth value corresponding to the target basic constituent unit; wherein the target basic constituent unit is any one of all the basic constituent units included in the transparent grid area.
4. The method of any one of claims 1-3, wherein there are a plurality of the transparent grid areas;
after the transparent grid area is determined in the live video frame according to the transparency parameter corresponding to the live video frame, the method further includes:
determining the rendering order of each transparent grid area according to the depth space parameter corresponding to each transparent grid area; and rendering each transparent grid area in turn according to the rendering order of each transparent grid area.
5. The method of claim 4, wherein determining the rendering order of each transparent grid region according to the depth space parameter corresponding to each transparent grid region comprises:
and determining the rendering sequence of each transparent grid area according to the size of the bounding box corresponding to each transparent grid area in the depth space.
6. The method of claim 5, wherein determining the rendering order of each transparent grid region according to the bounding box size of each transparent grid region in the depth space comprises:
and determining the rendering order of each transparent grid area in descending order of the size of the bounding box corresponding to each transparent grid area in the depth space.
7. A live rendering apparatus, the apparatus comprising:
the processing module is used for determining a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame;
the processing module is further configured to determine, according to the depth space parameters corresponding to all the basic constituent units in the transparent grid region, a rendering order corresponding to each of the basic constituent units;
and the rendering module is used for rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
8. An electronic device, comprising a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU);
the CPU is used for determining a transparent grid area in a live video frame according to a transparency parameter corresponding to the live video frame;
the CPU is further used for determining a rendering order corresponding to each basic constituent unit according to the depth space parameters corresponding to all the basic constituent units in the transparent grid area;
and the GPU is used for rendering all the basic constituent units in the transparent grid area in sequence according to the rendering order corresponding to each basic constituent unit in the transparent grid area.
9. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
the one or more programs, when executed by the processor, implement the method of any of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011045066.4A CN112153408B (en) | 2020-09-28 | 2020-09-28 | Live broadcast rendering method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011045066.4A CN112153408B (en) | 2020-09-28 | 2020-09-28 | Live broadcast rendering method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112153408A true CN112153408A (en) | 2020-12-29 |
CN112153408B CN112153408B (en) | 2022-07-08 |
Family
ID=73894987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011045066.4A Active CN112153408B (en) | 2020-09-28 | 2020-09-28 | Live broadcast rendering method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112153408B (en) |
- 2020-09-28: Application CN202011045066.4A filed in China; patent CN112153408B granted (Active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101472090A (en) * | 2007-12-27 | 2009-07-01 | 新奥特(北京)视频技术有限公司 | Method for monitoring beforehand hardware self-adapting multi-channel multi-mode video of video server |
CN103581571A (en) * | 2013-11-22 | 2014-02-12 | 北京中科大洋科技发展股份有限公司 | Video image matting method based on three elements of color |
CN109272565A (en) * | 2017-07-18 | 2019-01-25 | 腾讯科技(深圳)有限公司 | Animation playing method, device, storage medium and terminal |
CN108615254A (en) * | 2018-03-28 | 2018-10-02 | 广州市本真网络科技有限公司 | Point cloud rendering intent, system and device based on the quantization of tree lattice vector |
CN111145330A (en) * | 2019-12-31 | 2020-05-12 | 广州华多网络科技有限公司 | Human body model rendering method and device, electronic equipment and storage medium |
CN111462278A (en) * | 2020-03-17 | 2020-07-28 | 稿定(厦门)科技有限公司 | Depth-based material sorting rendering method, medium, equipment and device |
CN111489429A (en) * | 2020-04-16 | 2020-08-04 | 诚迈科技(南京)股份有限公司 | Image rendering control method, terminal device and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113052951A (en) * | 2021-06-01 | 2021-06-29 | 腾讯科技(深圳)有限公司 | Object rendering method and device, computer equipment and storage medium |
CN113052951B (en) * | 2021-06-01 | 2021-08-03 | 腾讯科技(深圳)有限公司 | Object rendering method and device, computer equipment and storage medium |
CN115641399A (en) * | 2022-09-08 | 2023-01-24 | 杭州新迪数字工程系统有限公司 | Image-based multi-layer grid picking method and system |
CN115641399B (en) * | 2022-09-08 | 2024-05-17 | 上海新迪数字技术有限公司 | Multi-layer grid pickup method and system based on image |
Also Published As
Publication number | Publication date |
---|---|
CN112153408B (en) | 2022-07-08 |
Similar Documents
Publication | Title
---|---
CN112153408B (en) | Live broadcast rendering method and device, electronic equipment and storage medium
KR102463304B1 (en) | Video processing method and device, electronic device, computer-readable storage medium and computer program
CN112218108B (en) | Live broadcast rendering method and device, electronic equipment and storage medium
CN112215936A (en) | Image rendering method and device, electronic equipment and storage medium
US10935788B2 (en) | Hybrid virtual 3D rendering approach to stereovision
WO2017107447A1 (en) | Method and device for displaying chat messages in live-broadcast applications
CN112929678A (en) | Live broadcast method, device, server and computer readable storage medium
CN110856005B (en) | Live stream display method and device, electronic equipment and readable storage medium
US11076140B2 (en) | Information processing apparatus and method of controlling the same
CN108898678B (en) | Augmented reality method and apparatus
CN111970527B (en) | Live broadcast data processing method and device
CN111145358A (en) | Image processing method, device and hardware device
CN114527980A (en) | Display rendering method and device, electronic equipment and readable storage medium
CN108429905B (en) | Naked eye 3D display method and device, electronic equipment and storage medium
CN111080781A (en) | Three-dimensional map display method and mobile terminal
CN108031117B (en) | Regional fog effect implementation method and device
CN112165630B (en) | Image rendering method and device, electronic equipment and storage medium
CN112153409B (en) | Live broadcast method and device, live broadcast receiving end and storage medium
CN116630516A (en) | 3D characteristic-based 2D rendering ordering method, device, equipment and medium
CN110597432A (en) | Interface control method and device, computer readable medium and electronic equipment
CN113313807B (en) | Picture rendering method and device, storage medium and electronic device
CN111340931A (en) | Scene processing method and device, user side and storage medium
CN109544664B (en) | Animation data processing method and device, electronic equipment and readable storage medium
CN112967369A (en) | Light ray display method and device
CN112235634A(en) | Object rendering method and device, electronic equipment and storage medium
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant