CN115761090A - Special effect rendering method, device, equipment, computer readable storage medium and product - Google Patents

Info

Publication number
CN115761090A
Authority
CN
China
Prior art keywords
image frame
rendering
rendered
special effect
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211441168.7A
Other languages
Chinese (zh)
Inventor
周红文
先久零
张世阳
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211441168.7A priority Critical patent/CN115761090A/en
Publication of CN115761090A publication Critical patent/CN115761090A/en
Pending legal-status Critical Current

Abstract

The embodiments of the disclosure provide a special effect rendering method, device, equipment, computer-readable storage medium and product. The method includes: acquiring an image frame to be rendered and identification information of a target special effect corresponding to a virtual reality live broadcast; determining, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination for rendering the image frame to be rendered; performing a rendering operation on the image frame to be rendered according to the rendering data through the graphics processor combination to obtain a rendered target image frame; and sending the target image frame to a display device for playing. As a result, the display device does not need high-level hardware, and performing the special effect rendering operation with a graphics processor combination effectively improves the content quality of the rendered target image frame, which in turn improves the experience of virtual reality live broadcast viewers.

Description

Special effect rendering method, device, equipment, computer readable storage medium and product
Technical Field
The embodiments of the disclosure relate to the technical field of image processing, and in particular to a special effect rendering method, device, equipment, computer-readable storage medium and product.
Background
With the development of science and technology, Virtual Reality (VR) technology has gradually entered users' lives. Through VR technology, users can watch 3D VR live broadcasts. In VR live broadcasting, there is often a need for special effect rendering.
In a VR live broadcast scenario, local special effects are usually rendered at the streaming end, the original live stream is pushed to the viewing end, and network special effects are rendered at the viewing end. However, VR live broadcasts are generally shot in real time with binocular cameras and use ultra-high-definition video frames of relatively high resolution. Performing the special effect rendering at the viewing end therefore places high performance requirements on it, puts heavy pressure on its resource occupation and energy consumption, limits the live broadcast frame rate, and degrades the viewing experience of users at the viewing end.
Disclosure of Invention
The embodiments of the disclosure provide a special effect rendering method, device, equipment, computer-readable storage medium and computer program product, which solve the technical problems that existing special effect rendering methods place high hardware requirements on the viewing end and put great pressure on the viewing end's resource occupation, energy consumption and the like.
In a first aspect, an embodiment of the present disclosure provides a special effect rendering method, including:
acquiring an image frame to be rendered and identification information of a target special effect corresponding to virtual reality live broadcast;
determining rendering data corresponding to the target special effect and a graphic processor combination used for rendering the image frame to be rendered according to the identification information of the target special effect;
performing rendering operation on the image frame to be rendered according to the rendering data through the graphic processor combination to obtain a rendered target image frame;
and sending the target image frame to a display device for playing.
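Purely as an illustrative sketch of the four claimed steps (every class and function name below is hypothetical, not taken from the disclosure), the method can be arranged as a simple pipeline:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the entities named in the claims.

@dataclass
class RenderingData:
    effect_type: str   # special effect type
    scene_type: str    # scene type
    algorithm: str     # calling algorithm information

class GpuCombination:
    """Stands in for a combination of graphics processors in the cloud server."""
    def render(self, frame, data):
        # A real implementation would decode, run the effect algorithm,
        # composite the effect, and re-encode on the GPUs; here we only
        # tag the frame so the data flow stays visible.
        return f"{frame}+{data.effect_type}"

class GpuPool:
    def select(self, data):
        # The choice of combination depends on the rendering data.
        return GpuCombination()

class Display:
    def __init__(self):
        self.played = []
    def play(self, frame):
        self.played.append(frame)

def render_pipeline(frame, effect_id, effect_table, gpu_pool, display):
    """The four claimed steps, end to end."""
    rendering_data = effect_table[effect_id]          # step 2a: rendering data
    gpu_combo = gpu_pool.select(rendering_data)       # step 2b: GPU combination
    target = gpu_combo.render(frame, rendering_data)  # step 3: render
    display.play(target)                              # step 4: send to display
    return target
```

The sketch only fixes the order of the steps; the later embodiments fill in how the combination is chosen and how the rendering itself is split across GPUs.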
In a second aspect, an embodiment of the present disclosure provides a special effect rendering apparatus, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring an image frame to be rendered and identification information of a target special effect corresponding to virtual reality live broadcast;
the determining module is used for determining rendering data corresponding to the target special effect and a graphic processor combination used for rendering the image frame to be rendered according to the identification information of the target special effect;
the rendering module is used for performing rendering operation on the image frame to be rendered according to the rendering data through the graphics processor combination to obtain a rendered target image frame;
and the sending module is used for sending the target image frame to display equipment for playing.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer execution instructions;
the processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the special effects rendering method as described above in the first aspect and various possible designs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the special effects rendering method according to the first aspect and various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the special effects rendering method as described in the first aspect above and in various possible designs of the first aspect.
According to the special effect rendering method, device, equipment, computer-readable storage medium and product provided by this embodiment, after the image frame to be rendered corresponding to the virtual reality live broadcast and the identification information of the target special effect are obtained, the special effect rendering operation is performed on the image frame to be rendered by a graphics processor combination in a cloud server, so the viewing end does not need high-level hardware. In addition, performing the special effect rendering operation with a graphics processor combination effectively improves the content quality of the rendered target image frame, which in turn improves the experience of virtual reality live broadcast viewers.
Drawings
To more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of a system architecture upon which the present disclosure is based;
fig. 2 is a schematic flowchart of a special effect rendering method according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a special effect rendering method according to another embodiment of the disclosure;
fig. 4 is a flowchart illustrating a special effect rendering method according to another embodiment of the disclosure;
fig. 5 is a schematic view of special effect rendering provided by an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a special effect rendering method according to another embodiment of the disclosure;
fig. 7 is a schematic structural diagram of a special effect rendering apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without inventive step, are intended to be within the scope of the present disclosure.
In order to solve the technical problems that the existing special effect rendering method has high requirements on the hardware of the broadcasting end and causes great pressure on the resource occupation, energy consumption and the like of the broadcasting end, the disclosure provides a special effect rendering method, a special effect rendering device, special effect rendering equipment, a computer readable storage medium and a product.
It should be noted that the special effect rendering method, apparatus, device, computer-readable storage medium, and product provided by the present disclosure may be applied to any application scenario of special effect rendering.
The current special effect rendering method generally sends the image frame to be rendered corresponding to the virtual reality live broadcast to the viewing end, which performs the special effect rendering operation itself. This places high hardware requirements on the viewing end, and the excessive data processing pressure affects its performance.
In the process of solving this technical problem, the inventors found through research that, to avoid the adverse effects of the special effect rendering operation on the viewing end and to improve the virtual reality live broadcast, the special effect rendering operation can be performed by a cloud server on the image frame to be rendered corresponding to the virtual reality live broadcast, and the rendered target image frame can then be sent to the viewing end for playing.
In addition, to guarantee the virtual reality live broadcast effect, a plurality of high-performance graphics processors can be provided in the cloud server. During special effect rendering, a graphics processor combination composed of at least one graphics processor can then be selected to perform the special effect rendering operation, so that various special effects can be fully presented without being limited by device performance.
Fig. 1 is a diagram of a system architecture on which the present disclosure is based. As shown in Fig. 1, the system architecture includes at least a server 11, a binocular image capturing device 12, a display device 13, and a special effects server 14. The server 11 is provided with a special effect rendering apparatus, which can be written in languages such as C/C++, Java, Shell or Python. The display device 13 may be any device capable of playing live content, such as VR glasses, a mobile phone or a tablet.
Based on the above system architecture, the server 11 may obtain the image frame to be rendered corresponding to the virtual reality live broadcast captured by the binocular image capturing device 12 and the identification information of the target special effect sent by the special effects server 14. It then performs the special effect rendering operation with a graphics processor combination according to the image frame to be rendered and the identification information of the target special effect to obtain the target image frame, and sends the target image frame to the display device 13 for playing.
Fig. 2 is a schematic flowchart of a special effect rendering method provided in an embodiment of the present disclosure, and as shown in fig. 2, the method includes:
step 201, obtaining an image frame to be rendered corresponding to virtual reality live broadcast and identification information of a target special effect.
The execution subject of this embodiment is a special effect rendering apparatus, which can be coupled to a cloud server. The cloud server can be communicatively connected to the binocular image capturing device and the display device, respectively, and can then perform the special effect rendering operation based on the image frame to be rendered corresponding to the virtual reality live broadcast captured by the binocular image capturing device.
In this embodiment, to avoid placing computational pressure on the display device at the viewing end while ensuring the special effect rendering quality, the special effect rendering operation can be performed in the cloud server.
Optionally, the special effect rendering apparatus may obtain the image frame to be rendered corresponding to the virtual reality live broadcast and the identification information of the target special effect. The image frame to be rendered can be captured by the binocular image capturing device, and the identification information of the target special effect can be sent by a preset server based on a preset trigger operation of a live viewer or the streamer.
It should be noted that the server may specifically be a data server for storing special effect data corresponding to a special effect.
Step 202, determining rendering data corresponding to the target special effect and a graphics processor combination for rendering the image frame to be rendered according to the identification information of the target special effect.
In this embodiment, relatively high-resolution video frames are generally used in virtual reality live broadcasting. Therefore, a plurality of high-performance graphics processors may be provided in advance to guarantee the quality of special effect processing. When processing a special effect, multiple graphics processors can be flexibly configured according to the needs of the actual application to achieve a high-quality special effect rendering operation.
After the identification information of the target special effect is acquired, the rendering data corresponding to the target special effect can be obtained according to it, and the graphics processor combination required for rendering the image frame to be rendered can be determined according to the rendering data, where the graphics processor combination may include at least two graphics processors.
And 203, performing rendering operation on the image frame to be rendered according to the rendering data through the graphics processor combination to obtain a rendered target image frame.
In this embodiment, after the graphics processor combination required for rendering the image frame to be rendered is determined, the graphics processor combination may perform the rendering operation on the image frame to be rendered based on the rendering data to obtain the rendered target image frame. By performing the special effect rendering operation with a graphics processor combination, the effects of the various required special effects can be fully presented without being limited by device performance, and the quality of the virtual reality live broadcast is improved while computational pressure on the viewing end is effectively avoided.
And step 204, sending the target image frame to a display device for playing.
In this embodiment, after the rendering operation of the image frame to be rendered is completed in the cloud server, the target image frame may be sent to the display device for playing. The display device includes, but is not limited to, a virtual reality device, a mobile phone with a display interface, a tablet, a television, and the like.
In this way, the display device only needs to play the target image frame to present high-quality live content, without performing any special effect rendering operation itself.
As an implementation manner, the display device may specifically be a virtual reality device. In order to save the computing power of the virtual reality equipment and ensure a high-quality live broadcast effect, the virtual reality equipment can be in communication connection with the cloud server. And the cloud server selects a graphic processor combination to perform rendering operation on live content corresponding to virtual reality live broadcast. And sending the rendered target video stream to virtual reality equipment for playing.
Further, on the basis of any of the above embodiments, step 201 includes:
the method comprises the steps of obtaining an image frame to be rendered corresponding to virtual reality live broadcast collected by binocular image collecting equipment, and obtaining a target special effect sent by a server, wherein the target special effect is determined when a user triggers virtual resource transfer operation on the server.
In this embodiment, when watching the live content at the viewing end, a user can trigger a virtual resource transfer operation according to actual needs. For example, the user can send a virtual gift in the live broadcast through a preset trigger operation, and different virtual resources may correspond to different special effects.
The cloud server can be communicatively connected to a preset server. When the server detects that a user has triggered the virtual resource transfer operation, it can send the identification information of the target special effect to the cloud server. Correspondingly, the cloud server can obtain the image frame to be rendered corresponding to the virtual reality live broadcast captured by the binocular image capturing device, and obtain the target special effect sent by the server.
In the special effect rendering method provided by this embodiment, after the image frame to be rendered corresponding to the virtual reality live broadcast and the identification information of the target special effect are obtained, the special effect rendering operation is performed on the image frame to be rendered by the graphics processor combination in the cloud server, so the viewing end does not need high-level hardware, and the experience of virtual reality live broadcast viewers can be improved.
Further, on the basis of any of the above embodiments, step 202 includes:
and acquiring rendering data corresponding to the target special effect according to the identification information of the target special effect, wherein the rendering data comprises a special effect type, a scene type and calling algorithm information corresponding to the target special effect.
Determining a graphics processor combination for special effect rendering according to the rendering data.
In this embodiment, after the identification information of the target special effect is obtained, the rendering data corresponding to the target special effect may be obtained according to the identification information of the target special effect, where the rendering data includes a special effect type, a scene type, and calling algorithm information corresponding to the target special effect.
Based on the rendering data, the graphics processor combination corresponding to the target special effect can be determined according to the mapping relation between the preset rendering data and the graphics processors.
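The mapping relation described above can be sketched as a simple lookup keyed on fields of the rendering data (the key fields, effect names and combination sizes below are illustrative assumptions, not specified by the disclosure):

```python
# Hypothetical preset mapping from rendering-data keys to GPU combinations.
PRESET_COMBINATIONS = {
    ("particle", "concert"): ["gpu0", "gpu1", "gpu2"],  # heavy effect: 3 GPUs
    ("sticker", "studio"):   ["gpu0", "gpu1"],          # light effect: 2 GPUs
}

def select_gpu_combination(rendering_data, default=("gpu0", "gpu1")):
    """Pick the preset GPU combination for the effect/scene pair,
    falling back to a default combination for unmapped effects."""
    key = (rendering_data["effect_type"], rendering_data["scene_type"])
    return list(PRESET_COMBINATIONS.get(key, default))
```

A fallback default keeps the pipeline running even for effects that have no preset entry.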
Optionally, the user may also perform a customized selection operation on multiple graphics processors according to actual needs, which is not limited by this disclosure. For example, a user may select at least two graphics processors among the plurality of graphics processors as the graphics processor combination.
According to the special effect rendering method provided by this embodiment, the graphics processor combination used for special effect rendering is determined according to the rendering data corresponding to the target special effect, so the special effect rendering operation on the image frame to be rendered can be performed by a high-performance graphics processor combination. This improves the rendering effect, optimizes the virtual reality live broadcast, and improves user experience.
Further, in any of the above embodiments, the graphics processor combination includes at least two graphics processors. Step 203 comprises:
and carrying out decoding operation, algorithm identification operation and special effect rendering operation on the image frame to be rendered through any graphics processor in the graphics processor combination to obtain the preprocessed image frame to be rendered.
And performing transcoding algorithm processing and encoding operation on the preprocessed image frame to be rendered through other at least one graphics processor in the graphics processor combination.
In this embodiment, when at least two graphics processors are used for special effect processing, data transfer operations between the graphics processors are inevitable. Since the image frames to be rendered corresponding to the virtual reality live broadcast have a high resolution, excessive data transfer may reduce the efficiency of special effect processing.
Therefore, to improve the efficiency of special effect processing while ensuring its quality, the same graphics processor may be used for the decoding and special effect operations, and at least one other graphics processor for the remaining operations.
Optionally, the preprocessed image frame to be rendered may be obtained by performing the decoding operation, the algorithm recognition operation and the special effect rendering operation through any one graphics processor in the graphics processor combination, and the transcoding algorithm processing and encoding operation may be performed on the preprocessed image frame through at least one other graphics processor in the combination.
For example, when the graphics processor combination includes two graphics processors, one graphics processor may perform decoding + special effect algorithm + special effect rendering, and the other may perform encoding. When the combination includes three graphics processors, one may perform decoding + special effect algorithm + special effect rendering, one may run the transcoding algorithm, and one may perform the encoding operation.
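The stage assignment in these examples can be sketched as follows (a hedged illustration of the policy described above; GPU names and stage labels are hypothetical):

```python
def assign_pipeline_stages(gpu_combo):
    """Keep decode + effect algorithm + effect rendering on one GPU to
    minimise cross-card data transfer; spread transcoding and encoding
    over the remaining GPUs in the combination."""
    assert len(gpu_combo) >= 2, "a combination has at least two GPUs here"
    stages = {gpu_combo[0]: ["decode", "effect_algorithm", "effect_render"]}
    rest = gpu_combo[1:]
    if len(rest) == 1:
        stages[rest[0]] = ["encode"]            # two-GPU combination
    else:
        stages[rest[0]] = ["transcode"]         # three-GPU combination
        stages[rest[1]] = ["encode"]
    return stages
```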
According to the special effect rendering method provided by the embodiment, the same graphics processor is used for decoding operation and special effect processing operation, and other graphics processors are used for other operations, so that cross-card data transmission can be reduced as much as possible on the basis of improving the special effect rendering effect, and the special effect rendering efficiency is improved.
Fig. 3 is a flowchart illustrating a method for rendering a target special effect according to another embodiment of the present disclosure, where on the basis of any one of the above embodiments, the number of the target special effects is at least one. As shown in fig. 3, step 203 comprises:
step 301, for each target special effect, determining a rendering time range corresponding to the target special effect according to rendering data corresponding to the target special effect.
Step 302, performing rendering operation on the target special effects according to the rendering time range corresponding to each target special effect.
In this embodiment, in an actual live broadcasting process, the number of target special effects corresponding to the virtual reality live broadcasting may be at least one. Different target special effects respectively correspond to different rendering time ranges.
Therefore, to render the target special effects accurately, for each target special effect, the rendering time range corresponding to it is determined according to its rendering data, and the image frame to be rendered is rendered according to that rendering data within the rendering time range.
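The per-frame check implied here, namely which effects cover a given timestamp, can be sketched minimally (the effect dictionary fields are illustrative assumptions):

```python
def effects_active_at(timestamp, effects):
    """Return the ids of all target effects whose rendering time range
    covers `timestamp` (half-open [start, end) ranges assumed)."""
    return [e["id"] for e in effects if e["start"] <= timestamp < e["end"]]
```

A frame is then rendered only with the effects this filter returns, which is what makes overlapping gift effects compose correctly.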
According to the special effect rendering method provided by the embodiment, the rendering time range of each target special effect is determined respectively, and the target special effect is rendered within the rendering time range corresponding to the target special effect, so that the special effect rendering operation can be accurately realized, and the rendering effect of the special effect rendering operation is further improved.
Fig. 4 is a schematic flowchart of a special effect rendering method according to yet another embodiment of the present disclosure, where on the basis of any one of the foregoing embodiments, the image frame to be rendered is an encoded image frame; as shown in fig. 4, step 203 comprises:
step 401, performing decoding operation on the image frame to be rendered through the graphics processor combination to obtain a decoded image frame to be rendered.
Step 402, identifying rendering information corresponding to the decoded image frame to be rendered through the graphics processor, wherein the rendering information includes a target rendering area and depth information.
And 403, performing rendering operation on the image frame to be rendered according to the rendering information and the rendering data through the graphics processor combination to obtain a rendered target image frame.
In this embodiment, the image frame to be rendered may specifically be an encoded image frame. Therefore, after the image frame to be rendered is acquired, the image frame to be rendered may be decoded first to obtain a decoded image frame to be rendered.
In order to implement an accurate rendering operation of a target special effect, after obtaining a decoded image frame to be rendered, rendering information corresponding to the decoded image frame to be rendered may be further identified by a graphics processor combination, where the rendering information includes a target rendering area and depth information. Therefore, the position, the depth and the like of the special effect needing to be rendered can be determined based on the rendering information. Therefore, after the rendering information is identified, the image frame to be rendered may be rendered by the graphics processor in combination with the rendering information and the rendering data to obtain a rendered target image frame.
Optionally, the rendering information corresponding to the decoded image frame to be rendered may be specifically identified through a preset depth identification model and a preset region segmentation model.
Alternatively, any one of the area recognition and the depth recognition may be adopted to realize the recognition of the rendering information, which is not limited by the present disclosure.
Further, on the basis of any of the above embodiments, step 402 includes:
and carrying out fisheye image processing operation on the decoded image frame to be rendered to obtain the image frame to be rendered with fisheye effect.
And identifying rendering information corresponding to the image frame to be rendered of the fisheye effect.
In this embodiment, in order to improve the accuracy of the rendering information identification, a fisheye image processing operation may be performed on the decoded image frame to be rendered first, so as to obtain the image frame to be rendered with a fisheye effect. And identifying rendering information corresponding to the image frame to be rendered with the fisheye effect through the graphics processor.
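The disclosure does not specify the fisheye algorithm; as a hedged, generic sketch, a fisheye-style view can be produced with a simple polar remap in NumPy (each output pixel's angle selects a source column and its radius a source row; this is not the patented method):

```python
import numpy as np

def to_fisheye(frame, size=None):
    """Simple polar remap producing a circular, fisheye-style image
    from a 2-D frame; pixels outside the circle are set to zero."""
    h, w = frame.shape[:2]
    size = size or min(h, w)
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - c, ys - c
    r = np.sqrt(dx * dx + dy * dy) / c                   # 0 centre, 1 edge
    theta = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)   # 0..1 around circle
    src_x = np.clip((theta * (w - 1)).astype(int), 0, w - 1)
    src_y = np.clip((r * (h - 1)).astype(int), 0, h - 1)
    out = frame[src_y, src_x]                            # fancy-index copy
    out[r > 1.0] = 0                                     # mask outside circle
    return out
```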
Further, on the basis of any of the above embodiments, after the step 402, the method further includes:
and generating a depth texture map corresponding to the image frame to be rendered according to rendering information corresponding to the image frame to be rendered.
Correspondingly, performing the rendering operation on the image frame to be rendered according to the rendering information and the rendering data through the graphics processor combination to obtain the rendered target image frame includes:
and rendering the depth texture map according to the rendering data to obtain a rendered image frame.
In this embodiment, after the rendering information corresponding to the decoded image frame to be rendered is identified, the depth texture map corresponding to the image frame to be rendered may be generated based on the rendering information. The depth texture map can be a visual texture map, and the contents of shading change, target rendering area and the like in the image frame to be rendered can be determined based on the depth texture map. And rendering the depth texture map according to the rendering data to obtain a rendered image frame.
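A minimal sketch of combining the two pieces of rendering information, depth values and the target rendering area, into a single sampleable texture (normalisation to [0, 1] is an assumption; the disclosure only requires that the map encode depth and region):

```python
import numpy as np

def build_depth_texture(depth, region_mask):
    """Normalise recognised depth values into [0, 1] and zero out
    everything outside the target rendering area, yielding a
    single-channel depth texture the effect renderer can sample."""
    d = depth.astype(float)
    lo, hi = d.min(), d.max()
    norm = (d - lo) / (hi - lo) if hi > lo else np.zeros_like(d)
    return norm * region_mask
```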
Fig. 5 is a schematic view of special effect rendering provided by the embodiment of the present disclosure, and as shown in fig. 5, after the image frame 51 to be rendered is acquired, a decoding operation may be performed on the image frame 51 to be rendered to acquire a decoded image frame 52 to be rendered. And performing fisheye image processing operation on the decoded image frame 52 to be rendered to obtain an image frame 53 to be rendered with fisheye effect. For each image frame 54 in the image frame 53 to be rendered with the fisheye effect, the rendering information corresponding to the image frame is identified, and the depth texture map 54 corresponding to the image frame is generated according to the rendering information corresponding to the image frame. Rendering the depth texture map 54 according to rendering data to obtain a rendered image frame 55.
According to the special effect rendering method provided by this embodiment, after the image frame to be rendered is obtained, the depth information and the target rendering area in the image frame are identified, and the depth texture map is constructed from them, so that an accurate special effect rendering operation can be performed based on the depth texture map, improving the special effect rendering result.
Fig. 6 is a flowchart illustrating a special effect rendering method according to yet another embodiment of the present disclosure. On the basis of any of the above embodiments, there are multiple display devices for playing the target image frame. As shown in Fig. 6, step 203 includes:
step 601, determining the resolution corresponding to each display device.
Step 602, performing a decoding operation, an algorithm identification operation, and a special effect rendering operation on the image frame to be rendered by any graphics processor in the graphics processor combination, to obtain a preprocessed image frame to be rendered.
Step 603, for each display device, performing transcoding algorithm processing and an encoding operation on the preprocessed image frame to be rendered using the graphics processor corresponding to that display device's resolution.
In this embodiment, there are multiple display devices for playing the target image frame corresponding to the virtual reality live broadcast, and the resolution varies from one display device to another. For example, if the display device is a mobile phone, the resolution is generally 1080P, whereas if the display device is a virtual reality device, the resolution is generally 8K. To adapt the target image frame after special effect rendering to different display devices, different graphics processors can be used to perform the transcoding algorithm processing and encoding operations for the different devices.
Optionally, the resolution corresponding to each display device may be determined. A decoding operation, an algorithm identification operation, and a special effect rendering operation are performed on the image frame to be rendered by any graphics processor in the graphics processor combination to obtain the preprocessed image frame to be rendered. Then, for each display device, the graphics processor corresponding to that device's resolution performs the transcoding algorithm processing and encoding operation on the preprocessed image frame.
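A hedged sketch of this per-resolution dispatch follows; the resolution-to-GPU table, the GPU identifiers, and the dictionary-based stand-ins for GPU work are all assumptions for illustration:

```python
# Illustrative only: one GPU preprocesses; a GPU chosen per display
# resolution then transcodes/encodes. The table below is hypothetical.
GPU_BY_RESOLUTION = {"1080p": "gpu-1", "8k": "gpu-2"}

def preprocess(frame, gpu="gpu-0"):
    # decode + algorithm identification + special effect rendering on one GPU
    return {"frame": frame, "stage": "preprocessed", "gpu": gpu}

def transcode(pre, resolution):
    # transcode + encode on the GPU matched to the display's resolution
    gpu = GPU_BY_RESOLUTION[resolution]
    return {"frame": pre["frame"], "resolution": resolution, "encoded_on": gpu}

# one preprocessing pass feeds every display's transcode path
displays = {"phone": "1080p", "headset": "8k"}
pre = preprocess("frame-0")
outputs = {name: transcode(pre, res) for name, res in displays.items()}
```

Preprocessing runs once and its result fans out to one transcode/encode path per display, which is what allows multi-stream output at different resolutions.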
According to the special effect rendering method provided by this embodiment, different graphics processors perform the processing according to the resolutions of the different display devices, so that multiple streams of live content at different resolutions can be output, broadening the application scenarios of virtual reality live broadcast.
Fig. 7 is a schematic structural diagram of a special effect rendering apparatus provided in an embodiment of the present disclosure. As shown in Fig. 7, the apparatus includes: an acquisition module 71, a determination module 72, a rendering module 73 and a sending module 74. The acquisition module 71 is configured to acquire an image frame to be rendered corresponding to a virtual reality live broadcast and identification information of a target special effect. The determination module 72 is configured to determine, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination used for rendering the image frame to be rendered. The rendering module 73 is configured to perform, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame. The sending module 74 is configured to send the target image frame to a display device for playing.
Further, on the basis of any of the above embodiments, the acquisition module is configured to: acquire an image frame to be rendered corresponding to the virtual reality live broadcast and collected by a binocular image collection device, and acquire a target special effect sent by a server, where the target special effect is determined when a user triggers a virtual resource transfer operation on the server.
Further, on the basis of any of the above embodiments, the determination module is configured to: acquire rendering data corresponding to the target special effect according to the identification information of the target special effect, where the rendering data includes a special effect type, a scene type and calling algorithm information corresponding to the target special effect; and determine the graphics processor combination for special effect rendering according to the rendering data.
Further, on the basis of any of the above embodiments, there is at least one target special effect. The rendering module is configured to: for each target special effect, determine a rendering time range corresponding to the target special effect according to the rendering data corresponding to that special effect; and render each target special effect according to its rendering time range.
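The per-effect time-range scheduling can be illustrated with a minimal sketch; the effect records, field names, and time values below are hypothetical:

```python
# Illustrative only: each target special effect carries a rendering time
# range derived from its rendering data (field names are hypothetical).
effects = [
    {"id": "confetti", "start": 0.0, "end": 2.0},
    {"id": "glow", "start": 1.5, "end": 4.0},
]

def active_effects(effects, timestamp):
    # an effect is rendered only while the frame timestamp
    # falls within that effect's rendering time range
    return [e["id"] for e in effects if e["start"] <= timestamp < e["end"]]
```

For a frame at 1.8 s both effects would be rendered; at 3.0 s only the second remains active.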
Further, on the basis of any of the above embodiments, the image frame to be rendered is an encoded image frame. The rendering module is configured to: decode the image frame to be rendered through the graphics processor combination to obtain a decoded image frame to be rendered; identify, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered, where the rendering information includes a target rendering area and depth information; and perform, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering information and the rendering data to obtain the rendered target image frame.
Further, on the basis of any of the above embodiments, the rendering module is configured to: perform a fisheye image processing operation on the decoded image frame to be rendered to obtain an image frame to be rendered with the fisheye effect; and identify rendering information corresponding to the fisheye-effect image frame to be rendered.
Further, on the basis of any of the above embodiments, the rendering module is configured to: generate a depth texture map corresponding to the image frame to be rendered according to the rendering information corresponding to the image frame to be rendered; and render the depth texture map according to the rendering data to obtain a rendered image frame.
Further, on the basis of any of the above embodiments, the rendering module is configured to: identify the rendering information corresponding to the decoded image frame to be rendered through a preset depth recognition model and a preset region segmentation model.
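A toy sketch of deriving the rendering information from the two preset models follows. The brightness and threshold rules here are trivial stand-ins for the actual depth recognition and region segmentation models, which the disclosure does not detail:

```python
# Illustrative stand-ins only, not the disclosed models.
def depth_model(pixels):
    # stand-in for the preset depth recognition model: brighter = closer
    return [p / 255 for p in pixels]

def region_model(pixels, threshold=32):
    # stand-in for the preset region segmentation model:
    # a threshold marks pixels belonging to the target rendering area
    return [p > threshold for p in pixels]

def rendering_info(pixels):
    # the rendering information combines both models' outputs
    return {"depth": depth_model(pixels), "region": region_model(pixels)}
```

The combined dictionary plays the role of the rendering information (depth information plus target rendering area) consumed by the later rendering step.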
Further, on the basis of any of the above embodiments, the graphics processor combination includes at least two graphics processors. The rendering module is configured to: perform, by any one graphics processor in the combination, a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered to obtain a preprocessed image frame to be rendered; and perform transcoding algorithm processing and an encoding operation on the preprocessed image frame to be rendered through at least one other graphics processor in the combination.
Further, on the basis of any of the above embodiments, there are multiple display devices for playing the target image frame. The rendering module is configured to: determine the resolution corresponding to each display device; perform, by any one graphics processor in the graphics processor combination, a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered to obtain a preprocessed image frame to be rendered; and, for each display device, perform transcoding algorithm processing and an encoding operation on the preprocessed image frame using the graphics processor corresponding to that device's resolution.
Further, on the basis of any one of the above embodiments, the display device includes at least a virtual reality device.
The apparatus provided in this embodiment may be used to implement the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
In order to implement the above embodiments, an embodiment of the present disclosure further provides an electronic device, including: a processor and a memory.
The memory stores computer execution instructions.
The processor executes the computer-executable instructions stored in the memory, so that the processor executes the special effect rendering method according to any one of the embodiments.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure. The electronic device 800 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and an in-vehicle terminal (e.g., a car navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in Fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the electronic device 800 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage apparatus 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 807 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer executing instruction is stored, and when a processor executes the computer executing instruction, the special effect rendering method according to any one of the above embodiments is implemented.
Embodiments of the present disclosure also provide a computer program product, including a computer program, where the computer program, when executed by a processor, implements the method for rendering special effects according to any of the embodiments described above.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first obtaining unit may also be described as a "unit obtaining at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a special effects rendering method, including:
acquiring an image frame to be rendered corresponding to virtual reality live broadcast and identification information of a target special effect;
determining, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination used for rendering the image frame to be rendered;
performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame;
and sending the target image frame to a display device for playing.
According to one or more embodiments of the present disclosure, the acquiring an image frame to be rendered corresponding to virtual reality live broadcast and identification information of a target special effect includes:
acquiring an image frame to be rendered corresponding to the virtual reality live broadcast and collected by a binocular image collection device, and acquiring a target special effect sent by a server, wherein the target special effect is determined when a user triggers a virtual resource transfer operation on the server.
According to one or more embodiments of the present disclosure, the determining, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination for special effect rendering includes:
obtaining rendering data corresponding to the target special effect according to the identification information of the target special effect, wherein the rendering data comprises a special effect type, a scene type and calling algorithm information corresponding to the target special effect;
and determining a graphics processor combination for special effect rendering according to the rendering data.
According to one or more embodiments of the present disclosure, there is at least one target special effect; the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame includes:
aiming at each target special effect, determining a rendering time range corresponding to the target special effect according to rendering data corresponding to the target special effect;
and rendering the target special effects according to the rendering time range corresponding to each target special effect.
According to one or more embodiments of the present disclosure, the image frame to be rendered is an encoded image frame; the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame includes:
decoding the image frame to be rendered through the graphics processor combination to obtain a decoded image frame to be rendered;
identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered, wherein the rendering information includes a target rendering area and depth information;
and performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering information and the rendering data to obtain a rendered target image frame.
According to one or more embodiments of the present disclosure, the identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered includes:
carrying out fisheye image processing operation on the decoded image frame to be rendered to obtain the image frame to be rendered with fisheye effect;
and identifying rendering information corresponding to the image frame to be rendered of the fisheye effect.
According to one or more embodiments of the present disclosure, after the identifying, through the graphics processor combination, the rendering information corresponding to the decoded image frame to be rendered, the method further includes:
generating a depth texture map corresponding to the image frame to be rendered according to rendering information corresponding to the image frame to be rendered;
the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering information and the rendering data to obtain a rendered target image frame includes:
rendering the depth texture map according to the rendering data to obtain a rendered image frame.
According to one or more embodiments of the present disclosure, the identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered includes:
and identifying rendering information corresponding to the decoded image frame to be rendered through a preset depth identification model and a preset region segmentation model.
According to one or more embodiments of the present disclosure, the graphics processor combination includes at least two graphics processors;
the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame includes:
performing, by any one graphics processor in the graphics processor combination, a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered to obtain a preprocessed image frame to be rendered;
and performing transcoding algorithm processing and an encoding operation on the preprocessed image frame to be rendered through at least one other graphics processor in the graphics processor combination.
According to one or more embodiments of the present disclosure, there are multiple display devices for playing the target image frame;
the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame includes:
determining the resolution corresponding to each display device;
performing, by any one graphics processor in the graphics processor combination, a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered to obtain a preprocessed image frame to be rendered;
and, for each display device, performing transcoding algorithm processing and an encoding operation on the preprocessed image frame to be rendered using the graphics processor corresponding to that display device's resolution.
Further, on the basis of any of the above embodiments, the display device includes at least a virtual reality device.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a special effect rendering apparatus including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring an image frame to be rendered and identification information of a target special effect corresponding to virtual reality live broadcast;
the determining module is used for determining rendering data corresponding to the target special effect and a graphic processor combination used for rendering the image frame to be rendered according to the identification information of the target special effect;
the rendering module is used for performing rendering operation on the image frame to be rendered according to the rendering data through the graphics processor combination to obtain a rendered target image frame;
and the sending module is used for sending the target image frame to display equipment for playing.
According to one or more embodiments of the present disclosure, the acquisition module is configured to:
acquire an image frame to be rendered corresponding to the virtual reality live broadcast and collected by a binocular image collection device, and acquire a target special effect sent by a server, wherein the target special effect is determined when a user triggers a virtual resource transfer operation on the server.
According to one or more embodiments of the present disclosure, the determining module is configured to:
acquiring rendering data corresponding to the target special effect according to the identification information of the target special effect, wherein the rendering data comprises a special effect type, a scene type and calling algorithm information corresponding to the target special effect;
determining a graphics processor combination for special effect rendering according to the rendering data.
According to one or more embodiments of the present disclosure, there is at least one target special effect; the rendering module is configured to:
aiming at each target special effect, determining a rendering time range corresponding to the target special effect according to rendering data corresponding to the target special effect;
and rendering the target special effects according to the rendering time range corresponding to each target special effect.
According to one or more embodiments of the present disclosure, the image frame to be rendered is an encoded image frame; the rendering module is configured to:
decoding the image frame to be rendered through the graphics processor combination to obtain a decoded image frame to be rendered;
identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered, wherein the rendering information includes a target rendering area and depth information;
and performing rendering operation on the image frame to be rendered according to the rendering information and the rendering data through the graphics processor combination to obtain a rendered target image frame.
According to one or more embodiments of the present disclosure, the rendering module is configured to:
carrying out fisheye image processing operation on the decoded image frame to be rendered to obtain the image frame to be rendered with fisheye effect;
and identifying rendering information corresponding to the image frame to be rendered of the fisheye effect.
According to one or more embodiments of the present disclosure, the rendering module is configured to:
generating a depth texture map corresponding to the image frame to be rendered according to rendering information corresponding to the image frame to be rendered;
and rendering the depth texture map according to the rendering data to obtain a rendered image frame.
According to one or more embodiments of the present disclosure, the rendering module is configured to:
and identifying rendering information corresponding to the decoded image frame to be rendered through a preset depth identification model and a preset region segmentation model.
According to one or more embodiments of the present disclosure, the graphics processor combination includes at least two graphics processors;
the rendering module is configured to:
performing, by any one graphics processor in the graphics processor combination, a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered to obtain a preprocessed image frame to be rendered;
and performing transcoding algorithm processing and encoding operation on the preprocessed image frame to be rendered through at least one other graphics processor in the graphics processor combination.
According to one or more embodiments of the present disclosure, the number of display devices for playing a target image frame is plural;
the rendering module is configured to:
determining the resolution corresponding to each display device;
performing, by any one graphics processor in the graphics processor combination, a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered to obtain a preprocessed image frame to be rendered;
and respectively adopting a graphic processor corresponding to the resolution corresponding to the display equipment to perform transcoding algorithm processing and encoding operation on the preprocessed image frame to be rendered.
Further, on the basis of any of the above embodiments, the display device includes at least a virtual reality device.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the special effect rendering method according to the first aspect and the various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the special effect rendering method according to the first aspect and various possible designs of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the special effect rendering method according to the first aspect and the various possible designs of the first aspect as described above.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, a technical solution may be formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (15)

1. A special effects rendering method, comprising:
acquiring an image frame to be rendered and identification information of a target special effect corresponding to virtual reality live broadcast;
determining, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination for rendering the image frame to be rendered;
performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame;
and sending the target image frame to a display device for playing.
2. The method according to claim 1, wherein the acquiring an image frame to be rendered and identification information of a target special effect corresponding to virtual reality live broadcast comprises:
acquiring the image frame to be rendered corresponding to the virtual reality live broadcast collected by a binocular image collection device, and acquiring the target special effect sent by a server, wherein the target special effect is determined when a user triggers a virtual resource transfer operation on the server.
3. The method of claim 1, wherein the determining, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination for rendering the image frame to be rendered comprises:
acquiring rendering data corresponding to the target special effect according to the identification information of the target special effect, wherein the rendering data comprises a special effect type, a scene type and calling algorithm information corresponding to the target special effect;
determining, according to the rendering data, the graphics processor combination for special effect rendering.
4. The method according to claim 1, wherein the number of target special effects is at least one; and the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame comprises:
for each target special effect, determining a rendering time range corresponding to the target special effect according to the rendering data corresponding to the target special effect;
and rendering the target special effects according to the rendering time range corresponding to each target special effect.
5. The method according to any one of claims 1-4, wherein the image frame to be rendered is an encoded image frame; and the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame comprises:
decoding the image frame to be rendered through the graphics processor combination to obtain a decoded image frame to be rendered;
identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered, wherein the rendering information comprises a target rendering area and depth information;
and performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering information and the rendering data to obtain a rendered target image frame.
6. The method of claim 5, wherein the identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered comprises:
performing a fisheye image processing operation on the decoded image frame to be rendered to obtain an image frame to be rendered with a fisheye effect;
and identifying rendering information corresponding to the image frame to be rendered with the fisheye effect.
7. The method of claim 5, wherein after the identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered, the method further comprises:
generating a depth texture map corresponding to the image frame to be rendered according to the rendering information corresponding to the image frame to be rendered;
wherein the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering information and the rendering data to obtain a rendered target image frame comprises:
and rendering the depth texture map according to the rendering data to obtain the rendered target image frame.
8. The method of claim 5, wherein the identifying, through the graphics processor combination, rendering information corresponding to the decoded image frame to be rendered comprises:
and identifying rendering information corresponding to the decoded image frame to be rendered through a preset depth identification model and a preset region segmentation model.
9. The method of any of claims 1-4, wherein the graphics processor combination comprises at least two graphics processors;
wherein the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame comprises:
performing a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered through any one graphics processor in the graphics processor combination to obtain a preprocessed image frame to be rendered;
and performing transcoding algorithm processing and an encoding operation on the preprocessed image frame to be rendered through at least one other graphics processor in the graphics processor combination.
10. The method according to any one of claims 1 to 4, wherein there are a plurality of display devices for playing the target image frame;
wherein the performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame comprises:
determining the resolution corresponding to each display device;
performing a decoding operation, an algorithm identification operation and a special effect rendering operation on the image frame to be rendered through any one graphics processor in the graphics processor combination to obtain a preprocessed image frame to be rendered;
and performing transcoding algorithm processing and an encoding operation on the preprocessed image frame to be rendered by respectively adopting the graphics processor corresponding to the resolution of each display device.
11. The method of any of claims 1-4, wherein the display device comprises at least a virtual reality device.
12. A special effect rendering apparatus, comprising:
an acquisition module, used for acquiring an image frame to be rendered and identification information of a target special effect corresponding to virtual reality live broadcast;
a determining module, used for determining, according to the identification information of the target special effect, rendering data corresponding to the target special effect and a graphics processor combination for rendering the image frame to be rendered;
a rendering module, used for performing, through the graphics processor combination, a rendering operation on the image frame to be rendered according to the rendering data to obtain a rendered target image frame;
and a sending module, used for sending the target image frame to a display device for playing.
13. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the special effect rendering method of any one of claims 1 to 11.
14. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the special effects rendering method of any of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the special effect rendering method according to any one of claims 1 to 11.
CN202211441168.7A 2022-11-17 2022-11-17 Special effect rendering method, device, equipment, computer readable storage medium and product Pending CN115761090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211441168.7A CN115761090A (en) 2022-11-17 2022-11-17 Special effect rendering method, device, equipment, computer readable storage medium and product


Publications (1)

Publication Number Publication Date
CN115761090A 2023-03-07

Family

ID=85372758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211441168.7A Pending CN115761090A (en) 2022-11-17 2022-11-17 Special effect rendering method, device, equipment, computer readable storage medium and product

Country Status (1)

Country Link
CN (1) CN115761090A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597062A (en) * 2023-07-10 2023-08-15 北京麟卓信息科技有限公司 Compressed texture rendering optimization method based on dynamic adaptive decoding
CN116597062B (en) * 2023-07-10 2024-02-09 北京麟卓信息科技有限公司 Compressed texture rendering optimization method based on dynamic adaptive decoding
CN117041628A (en) * 2023-10-09 2023-11-10 腾讯科技(深圳)有限公司 Live picture rendering method, system, device, equipment and medium
CN117041628B (en) * 2023-10-09 2024-02-02 腾讯科技(深圳)有限公司 Live picture rendering method, system, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination