CN116206046A - Rendering processing method and device, electronic equipment and storage medium - Google Patents

Rendering processing method and device, electronic equipment and storage medium

Info

Publication number
CN116206046A
CN116206046A (application CN202211604133.0A)
Authority
CN
China
Prior art keywords
rendering
effect
graph
rendering effect
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211604133.0A
Other languages
Chinese (zh)
Other versions
CN116206046B (en)
Inventor
张岩
王治铭
赵晨
孙昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211604133.0A priority Critical patent/CN116206046B/en
Publication of CN116206046A publication Critical patent/CN116206046A/en
Application granted granted Critical
Publication of CN116206046B publication Critical patent/CN116206046B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides a rendering processing method and device, electronic equipment, and a storage medium. The disclosure relates to the technical field of artificial intelligence, in particular to the fields of augmented reality, virtual reality, computer vision, and deep learning, and can be applied to scenarios such as the metaverse and virtual digital humans. The specific implementation scheme is as follows: acquiring a first rendering effect expansion map of a rendering object based on rendering data; performing rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map; and performing rendering processing based on the rendering data and the second rendering effect expansion map. The disclosed technology can effectively improve the efficiency of three-dimensional real-time rendering on the mobile terminal side.

Description

Rendering processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computers, in particular to the field of artificial intelligence, including the fields of augmented reality, virtual reality, computer vision, and deep learning. It can be applied to scenarios such as the metaverse and virtual digital humans, and specifically relates to a rendering processing method and device, electronic equipment, and a storage medium.
Background
Three-dimensional real-time rendering is a very important rendering technology that allows rendered images to be displayed more truly and realistically.
Three-dimensional real-time rendering places high demands on compute power. At present, real-time rendering on devices with high-end graphics cards can support high-precision real-time shading computation, enabling real-time shading of high-precision digital humans and complex scenes. Terminal devices such as mobile phones and in-vehicle units, constrained by memory, bandwidth, and Graphics Processing Unit (GPU) compute power, cannot simulate complex lighting and shading effects, and can achieve three-dimensional real-time rendering only by relying on costly cloud graphics-card compute power.
Disclosure of Invention
The disclosure provides a rendering processing method, a rendering processing device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a rendering processing method including:
acquiring a first rendering effect expansion map of a rendering object based on rendering data;
performing rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map;
and performing rendering processing based on the rendering data and the second rendering effect expansion map.
According to another aspect of the present disclosure, there is provided a rendering processing apparatus including:
an acquisition module, configured to acquire a first rendering effect expansion map of a rendering object based on rendering data;
an upgrade module, configured to perform rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map;
and a rendering processing module, configured to perform rendering processing based on the rendering data and the second rendering effect expansion map.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the aspects and possible implementations described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the aspects and possible implementations described above.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the aspects and any one of the possible implementations described above.
The disclosed technology can effectively improve the efficiency of three-dimensional real-time rendering on the mobile terminal side.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are provided for a better understanding of the present solution and do not limit the present disclosure. In the drawings:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram of first rendering effect expansion maps of multiple frames of images provided in this embodiment;
FIG. 4 is a schematic diagram of a second rendering effect expansion map obtained by upgrading the rendering effect of the first rendering effect expansion maps shown in FIG. 3;
FIG. 5 is a schematic diagram of a general pipeline process flow;
FIG. 6 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing the methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art based on the embodiments of this disclosure without inventive effort fall within the scope of this disclosure.
It should be noted that the terminal devices in the embodiments of the present disclosure may include, but are not limited to, smart devices such as mobile phones, personal digital assistants (PDAs), wireless handheld devices, and tablet computers; the display devices may include, but are not limited to, personal computers, televisions, and other devices having a display function.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In the prior art, the rendering schemes of terminal devices such as mobile phones and in-vehicle units are based on the modern rendering pipeline and adopt forward rendering. The modern rendering pipeline mainly refers to the programmable graphics pipeline, which changes how the graphics card renders objects through vertex shaders and fragment shaders. Forward rendering is the standard, commonly used rendering technique in most engines. The graphics card is supplied with geometry, which is projected and decomposed into vertices, then transformed and split into fragments or pixels that undergo final rendering processing before being passed to the screen. The process is linear: each piece of geometry is passed down the pipeline once to produce the final image.
In the normal forward rendering flow, the triangles formed by the model's vertices must be rasterized, converted into the image pixels they cover, and then shaded. That is, lighting calculations are performed for each vertex and each fragment in the visible scene. To simulate lighting effects as realistically as possible, as many light sources and light-source calculations as possible are pushed into the fragment shader, so the complexity of forward shading can be roughly estimated as O(FRAGMENT_NUM × LIGHT_NUM); the higher the rendering requirements, the higher the resolution and the lighting complexity. However, existing mobile terminal devices are limited by compute power, cannot simulate complex lighting and shading effects, and can only perform real-time rendering at lower precision. To achieve high-precision three-dimensional real-time rendering locally on ordinary terminal devices such as mobile phones and in-vehicle units, rendering must rely on cloud graphics-card compute power, which is expensive, depends on the network link between the terminal and the cloud, and yields low rendering efficiency.
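To make that cost model concrete, the following is a minimal sketch (an illustration, not code from the patent) of per-fragment forward shading, where the work grows with both the fragment count and the light count:

```python
import numpy as np

def forward_shade(fragments, lights):
    """Toy forward-shading loop: cost is O(len(fragments) * len(lights)).

    fragments: (F, 6) array of per-fragment position (xyz) and unit normal (xyz);
    lights: (L, 6) array of light position (xyz) and RGB intensity.
    Returns an (F, 3) array of colors.
    """
    colors = np.zeros((len(fragments), 3))
    for i, frag in enumerate(fragments):      # every covered pixel...
        pos, normal = frag[:3], frag[3:]
        for light in lights:                  # ...shaded by every light
            to_light = light[:3] - pos
            to_light /= np.linalg.norm(to_light)
            ndotl = max(np.dot(normal, to_light), 0.0)
            colors[i] += ndotl * light[3:]    # simple Lambertian term
    return np.clip(colors, 0.0, 1.0)
```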
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in FIG. 1, this embodiment provides a rendering processing method, applied to a mobile terminal, for implementing high-precision rendering locally on the mobile terminal without depending on the cloud. The method specifically includes the following steps:
s101, acquiring a first rendering effect expansion diagram of a rendering object based on rendering data;
the rendering data of the present embodiment may specifically include all data for realizing rendering of the rendering object. Based on the rendering data, a first rendering effect expansion map of the rendering object may be acquired. The first rendering effect expansion graph may refer to a whole graph obtained by expanding and splicing rendering effect graphs of each display surface of a rendering object.
S102, performing rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map;
The rendering effect upgrade model of this embodiment is a pre-trained neural network model, for example a Pix2Pix model. The first rendering effect expansion map is input into the rendering effect upgrade model, which upgrades the rendering effect of the input image and outputs the upgraded image, i.e., the second rendering effect expansion map.
S103, performing rendering processing based on the rendering data and the second rendering effect expansion map.
The rendering processing method of this embodiment can be applied to the rendering of arbitrary images. For example, in a digital human rendering scene in the metaverse, each frame of a digital human animation video can be rendered according to the method of this embodiment to obtain a high-precision digital human animation video.
The rendering processing method of this embodiment can be applied locally on terminals without a high-end graphics card, achieving high-precision rendering without depending on the cloud.
In the rendering processing method of this embodiment, the rendering effect upgrade model upgrades the rendering effect to obtain the upgraded second rendering effect expansion map, and rendering is then performed in combination with the rendering data, so high-precision rendering can be achieved. Compared with the prior art, this solution does not depend on the cloud: it can effectively reduce rendering cost on the terminal side, achieve high-precision rendering locally on the terminal, avoid the influence of network communication, greatly reduce the computation cost of rendering, and effectively improve the efficiency of three-dimensional real-time rendering on the mobile terminal side, making it very flexible and convenient to use.
In one embodiment of the present disclosure, the rendering precision represented by the rendering parameters of the second rendering effect expansion map is greater than that of the first rendering effect expansion map: the second can be regarded as a high-precision rendering effect expansion map, while the first corresponds to a low-precision one. Through the rendering effect upgrade model, the low-precision first rendering effect expansion map can be upgraded to the high-precision second rendering effect expansion map, effectively improving the rendering effect.
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure. As shown in FIG. 2, the rendering processing method of this embodiment builds on the embodiment shown in FIG. 1 and describes the technical solution of the present disclosure in more detail. It may specifically include the following steps:
s201, acquiring a first rendering effect graph of each display surface after a rendering object is driven based on rendering data;
s202, splicing the first rendering effect graphs of each display surface of the rendering object to obtain a first rendering effect unfolding graph of the rendering object;
steps S201-S202 are one implementation of step S101 in the embodiment shown in fig. 1. Specifically, in the general pipeline rendering process of the rendering engine, the rendering engine may acquire the rendering data first, and for example, the rendering data may include a three-dimensional model of the rendering object, texture map data of the rendering object, and driving parameters for driving the three-dimensional model at the time of rendering. At this time, the first rendering effect map of each display surface after the rendering object is driven may be obtained based on the three-dimensional model of the rendering object in the rendering data, the texture map data, and the driving parameters. The rendering engine can transmit rendering data to the GPU, the GPU can drive the three-dimensional model of the rendering object to rotate based on the driving parameters, a display angle of the rendering object is obtained, and then each display surface of the rendering object under the display angle can be obtained. And mapping each display surface based on the texture mapping data of the rendering object to obtain a first rendering effect map of each display surface. And then the first rendering effect graphs of all the display surfaces of the rendering object are spliced together after being unfolded, so that the first rendering effect unfolded graph of the rendering object is obtained. For example, fig. 3 is a schematic diagram of a first rendering effect development diagram of a multi-frame image provided in the present embodiment.
S203, using the pre-trained rendering effect upgrade model to improve the realism and resolution of the first rendering effect expansion map to obtain the second rendering effect expansion map;
when the method is specifically used, the first rendering effect expansion diagram is input into the rendering effect upgrading model, and the rendering effect upgrading model can output a second rendering effect expansion diagram. The second rendering effect expansion map not only has higher solidity but also has higher resolution than the first rendering effect expansion map. The realism may be obtained by analysing the rendered image and artwork. The higher the degree of solidity, the higher the rendering accuracy of the rendered image, whereas the lower the degree of solidity, the lower the rendering accuracy of the rendered image. In practical applications, the rendering accuracy may also be characterized based on other rendering parameters. For example, the rendering accuracy of the rendered image may be obtained directly based on the rendered image and the artwork.
The rendering effect upgrade model in this embodiment can improve not only the realism of the rendered image but also its resolution, further improving the rendering effect of the resulting second rendering effect expansion map.
For example, FIG. 4 is a schematic diagram of the second rendering effect expansion map obtained by upgrading the first rendering effect expansion map shown in FIG. 3. In the normal rendering flow of the general rendering pipeline, only the 128×128-resolution first rendering effect expansion map shown in FIG. 3 can be processed; after the rendering effect upgrade model of this embodiment upgrades it, the high-precision 1024×1024 second rendering effect expansion map shown in FIG. 4 can be output.
Of course, in practical application, the rendering effect upgrade model may only improve the rendering precision of the first rendering effect expansion map, that is, the resolution of the obtained second rendering effect expansion map may be the same as that of the first rendering effect expansion map.
The rendering effect upgrade model of this embodiment may be a Pix2Pix neural network model. Before use, a certain amount of training data must be collected and the model trained so that it acquires the ability to upgrade the rendering of images. Each piece of training data may comprise an original image and the corresponding effect image after rendering upgrade.
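A hedged sketch of such paired training follows (the dummy data, tiny generator, and loss are assumptions; standard Pix2Pix combines an adversarial loss with an L1 reconstruction term, and only the L1 term is shown here):

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Dummy pairs standing in for (original image, upgraded effect image) training
# data; a real dataset would hold low-precision expansion maps and their
# high-precision counterparts.
lows, highs = torch.rand(16, 3, 128, 128), torch.rand(16, 3, 128, 128)
pairs = DataLoader(TensorDataset(lows, highs), batch_size=4)

generator = torch.nn.Sequential(          # stand-in for the Pix2Pix generator
    torch.nn.Conv2d(3, 64, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1), torch.nn.Tanh(),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

for low, high in pairs:
    upgraded = generator(low)
    # L1 reconstruction only; full Pix2Pix training would add an adversarial
    # term from a PatchGAN discriminator.
    loss = F.l1_loss(upgraded, high)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```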
S204, acquiring each display surface of the driven rendering object based on the rendering data;
S205, acquiring a second rendering effect map of each display surface of the driven rendering object based on the second rendering effect expansion map;
S206, attaching the second rendering effect map of each display surface of the rendering object to the corresponding display surface of the driven rendering object to realize the rendering processing;
S207, displaying the rendering effect map of the rendering object obtained by the rendering processing.
Similar to the process of obtaining the first rendering effect maps of the rendering object, each display surface of the driven rendering object must be obtained based on the rendering data, specifically, for example, based on the driving parameters in the rendering data and the three-dimensional model of the rendering object. The second rendering effect map of each display surface is then obtained from the second rendering effect expansion map and pasted back onto the corresponding display surface of the rendering object to obtain the final rendering effect map, which can then be displayed on screen. Rendering in this manner can effectively improve the rendering precision of the rendering object, and can effectively improve the efficiency of three-dimensional real-time rendering of the rendering object on the terminal side without relying on the cloud.
Steps S204-S206 are a specific implementation of step S103 in the embodiment shown in FIG. 1: the second rendering effect maps obtained after the rendering upgrade are attached back onto the display surfaces of the rendering object, yielding a higher-precision three-dimensional rendering effect for the rendering object and improving the accuracy of three-dimensional real-time rendering.
The rendering processing method of this embodiment does not depend on the cloud: it can effectively reduce rendering cost on the terminal side, achieve high-precision rendering locally on the terminal, avoid the influence of network communication, and avoid huge computation cost, thereby effectively improving the efficiency of three-dimensional real-time rendering on the terminal side while remaining flexible and convenient to use. In addition, the rendering effect upgrade model of this embodiment can also improve the resolution of the first rendering effect expansion map, further improving the rendering effect.
In addition, in this embodiment, the first rendering effect map of each display surface of the driven rendering object is obtained based on the three-dimensional model of the rendering object, the texture map data, and the driving parameters in the rendering data, and the first rendering effect expansion map of the rendering object is then obtained from these maps, which effectively ensures the accuracy of the resulting first rendering effect expansion map. Likewise, the second rendering effect map of each display surface of the driven rendering object is obtained based on the second rendering effect expansion map and attached to the corresponding display surface of the driven rendering object, which effectively ensures that the second rendering effect maps are accurately pasted back onto the display surfaces, guaranteeing the rendering effect of the rendering object.
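As the counterpart of the stitching sketch above (same assumed row-major layout, which the patent does not specify), slicing the upgraded expansion map back into per-surface maps for the paste-back step might look like:

```python
import numpy as np

def split_expansion_map(atlas, num_surfaces, cols=4):
    """Inverse of stitch_expansion_map: cut the upgraded expansion map back
    into per-surface second rendering effect maps, in surface order."""
    rows = (num_surfaces + cols - 1) // cols
    h, w = atlas.shape[0] // rows, atlas.shape[1] // cols
    tiles = []
    for idx in range(num_surfaces):
        r, c = divmod(idx, cols)
        tiles.append(atlas[r * h:(r + 1) * h, c * w:(c + 1) * w])
    return tiles  # each tile is then mapped onto its display surface
```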
To show the effects of the embodiments of the present disclosure more clearly, the technical solution of the present disclosure is compared with the existing general pipeline processing flow. FIG. 5 is a schematic diagram of the general pipeline processing flow. As shown in FIG. 5, after the rendering engine obtains the rendering data, it provides the data to the GPU, which obtains the rendering effect map of the rendering object through blend shape (BS) driving, skeleton driving, Physically Based Rendering (PBR) shading, and so on, and displays it on screen. For the specific processing procedure, reference may be made to descriptions of the related technology, which are not repeated here.
FIG. 6 is a schematic diagram according to a third embodiment of the present disclosure, and also a schematic flow chart of the rendering processing method provided by the present disclosure. As shown in FIG. 6, after the rendering engine obtains the rendering data and provides it to the GPU, the GPU performs BS driving and skeleton driving and then, instead of PBR shading, performs unwrap shading: the first rendering effect maps of the display surfaces of the rendering object are unwrapped and stitched together to obtain the first rendering effect expansion map. Referring to the description of the embodiment shown in FIG. 2, the first rendering effect expansion map is equivalent to unwrapping and stitching together the per-surface rendering effect maps produced by the PBR shading of the embodiment shown in FIG. 5. Compared with the flow shown in FIG. 5, the first rendering effect maps of the display surfaces of the rendering object must be unwrapped to obtain the first rendering effect expansion map, and no PBR processing is needed for on-screen display.
As shown in FIG. 6, the present disclosure then uses the Pix2Pix model to upgrade the rendering effect of the first rendering effect expansion map, obtaining the second rendering effect expansion map. In a rendering application for an animated video of a digital human in the metaverse, the first rendering effect expansion map of each frame may resemble the schematic diagram shown in FIG. 3, and the second rendering effect expansion map after upgrading by the Pix2Pix model may resemble the schematic diagram shown in FIG. 4. As shown in FIG. 6, the GPU then continues with the rendering flow of the general pipeline, obtaining each display surface of the rendering object through BS driving and skeleton driving, and pastes the second rendering effect map of each display surface in the second rendering effect expansion map back onto the corresponding display surface of the three-dimensional model of the rendering object. This paste-back process can also be regarded as PBR plus unlit shading, i.e., shading without lighting calculation, and yields the final rendering effect map for on-screen display.
From the above it can be seen that, in the rendering processing method of the present disclosure, the rendering of one frame of image is divided into two parts. The face and vertex data of the model are still submitted to the GPU and processed only once, unchanged; what changes is the normal forward rendering flow: a low-poly texture, such as a gray-model image at 128×128 resolution, is first rendered in unwrapped form, i.e., the first rendering effect expansion map of the above embodiments. The GPU then passes the first rendering effect expansion map to the Pix2Pix network model to obtain a high-precision digital human texture map, i.e., the second rendering effect expansion map. Finally, normal forward rendering is performed; no lighting information input is needed at this point, and rendering is done in unlit mode. The overall rendering complexity changes from O(FRAGMENT_NUM × LIGHT_NUM) to O(FRAGMENT_NUM), so shader code complexity is greatly reduced, rendering computation cost is effectively reduced, and rendering efficiency is improved.
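Putting the stages together, here is a hedged end-to-end sketch of the per-frame flow just described; the driving, shading, generator, and unlit-display functions are placeholders for steps the patent describes only at the pipeline level, and the stitch/split helpers are the illustrative ones sketched earlier:

```python
import numpy as np

# Placeholder stages (hypothetical; the patent describes them only at the
# pipeline level): BS/skeleton driving, low-precision per-surface shading,
# the Pix2Pix upgrade, and the final unlit on-screen pass.
def drive_model(rendering_data):
    return list(range(rendering_data["num_surfaces"]))   # driven display surfaces

def shade_surface(surface, rendering_data):
    return np.full((32, 32, 3), 0.5)                     # low-precision effect map

def run_generator(atlas):
    return atlas                                         # identity stand-in for Pix2Pix

def display_unlit(surfaces, tiles):
    return dict(zip(surfaces, tiles))                    # "paste back" + unlit display

def render_frame(rendering_data, cols=4):
    surfaces = drive_model(rendering_data)
    low_maps = [shade_surface(s, rendering_data) for s in surfaces]
    atlas = stitch_expansion_map(low_maps, cols)         # first expansion map (sketch above)
    upgraded = run_generator(atlas)                      # second expansion map
    tiles = split_expansion_map(upgraded, len(surfaces), cols)
    return display_unlit(surfaces, tiles)

frame = render_frame({"num_surfaces": 6})
```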
According to the rendering processing method of the present disclosure, high-precision rendering can be achieved on the terminal side without complex computation: the rendering effect is upgraded simply through the pre-trained rendering effect upgrade model, which effectively improves the efficiency of three-dimensional real-time rendering on the terminal side. The technique can be applied to all rendered scenes in the metaverse, such as digital human rendering. Limited by terminal compute power, terminals currently cannot achieve high-fidelity virtual human effects and often require costly cloud graphics-card compute power; by applying the disclosed technology, a rendering effect of the same quality can be achieved locally on the terminal, greatly reducing computation cost and improving the efficiency of the terminal's three-dimensional real-time rendering.
FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure. As shown in FIG. 7, the rendering processing apparatus 700 of this embodiment may be applied in a mobile terminal to implement rendering processing, and includes:
an acquisition module 701, configured to acquire a first rendering effect expansion map of a rendering object based on rendering data;
an upgrade module 702, configured to perform rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map;
and a rendering processing module 703, configured to perform rendering processing based on the rendering data and the second rendering effect expansion map.
The implementation principle and technical effect of the rendering processing device 700 of this embodiment, using the above modules, are the same as those of the related method embodiments above; see their detailed descriptions, which are not repeated here.
Further optionally, in an embodiment of the present disclosure, the acquisition module 701 is configured to:
acquire a first rendering effect map of each display surface of the driven rendering object based on the rendering data;
and stitch the first rendering effect maps of the display surfaces of the rendering object to obtain the first rendering effect expansion map of the rendering object.
Further optionally, in an embodiment of the present disclosure, the acquisition module 701 is configured to:
acquire the first rendering effect map of each display surface of the driven rendering object based on the three-dimensional model of the rendering object, the texture map data, and the driving parameters in the rendering data.
Further optionally, in an embodiment of the present disclosure, the rendering precision represented by the rendering parameters of the second rendering effect expansion map is greater than that represented by the rendering parameters of the first rendering effect expansion map.
Further optionally, in an embodiment of the present disclosure, the upgrade module 702 is configured to:
use the pre-trained rendering effect upgrade model to improve the realism of the first rendering effect expansion map to obtain the second rendering effect expansion map.
Further optionally, in an embodiment of the present disclosure, the upgrade module 702 is further configured to:
use the pre-trained rendering effect upgrade model to improve the resolution of the first rendering effect expansion map to obtain the second rendering effect expansion map.
Further optionally, in an embodiment of the present disclosure, the rendering processing module 703 is configured to:
acquire each display surface of the driven rendering object based on the rendering data;
acquire a second rendering effect map of each display surface of the driven rendering object based on the second rendering effect expansion map;
and attach the second rendering effect map of each display surface of the rendering object to the corresponding display surface of the driven rendering object to realize the rendering processing.
Further optionally, in an embodiment of the present disclosure, the rendering processing module 703 is configured to:
acquire each display surface of the driven rendering object based on the driving parameters in the rendering data and the three-dimensional model of the rendering object.
Further optionally, in an embodiment of the present disclosure, the apparatus further includes: a display module, configured to display the rendering effect map of the rendering object obtained by the rendering processing.
The implementation principle and technical effect of the rendering processing device 700 of the above embodiments, using the above modules, are the same as those of the related method embodiments; see their detailed descriptions, which are not repeated here.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as the above-described methods of the present disclosure. For example, in some embodiments, the above-described methods of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the above-described methods of the present disclosure may be performed as described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the above-described methods of the present disclosure in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special-purpose or general-purpose, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (21)

1. A rendering processing method, comprising:
acquiring a first rendering effect expansion map of a rendering object based on rendering data;
performing rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map;
and performing rendering processing based on the rendering data and the second rendering effect expansion map.
2. The method of claim 1, wherein acquiring the first rendering effect expansion map of the rendering object based on the rendering data comprises:
acquiring a first rendering effect map of each display surface of the driven rendering object based on the rendering data;
and stitching the first rendering effect maps of the display surfaces of the rendering object to obtain the first rendering effect expansion map of the rendering object.
3. The method of claim 2, wherein acquiring the first rendering effect map of each display surface of the driven rendering object based on the rendering data comprises:
acquiring the first rendering effect map of each display surface of the driven rendering object based on the three-dimensional model of the rendering object, the texture map data, and the driving parameters in the rendering data.
4. The method of claim 1, wherein the rendering precision represented by the rendering parameters of the second rendering effect expansion map is greater than the rendering precision represented by the rendering parameters of the first rendering effect expansion map.
5. The method of claim 1, wherein performing rendering effect upgrading on the first rendering effect expansion map using the pre-trained rendering effect upgrade model to obtain the second rendering effect expansion map comprises:
using the pre-trained rendering effect upgrade model to improve the realism of the first rendering effect expansion map to obtain the second rendering effect expansion map.
6. The method of claim 5, wherein performing rendering effect upgrading on the first rendering effect expansion map using the pre-trained rendering effect upgrade model to obtain the second rendering effect expansion map further comprises:
using the pre-trained rendering effect upgrade model to improve the resolution of the first rendering effect expansion map to obtain the second rendering effect expansion map.
7. The method of any of claims 1-6, wherein performing rendering processing based on the rendering data and the second rendering effect expansion map comprises:
acquiring each display surface of the driven rendering object based on the rendering data;
acquiring a second rendering effect map of each display surface of the driven rendering object based on the second rendering effect expansion map;
and attaching the second rendering effect map of each display surface of the rendering object to the corresponding display surface of the driven rendering object to realize the rendering processing.
8. The method of claim 7, wherein acquiring each display surface of the driven rendering object based on the rendering data comprises:
acquiring each display surface of the driven rendering object based on the driving parameters in the rendering data and the three-dimensional model of the rendering object.
9. The method of claim 7, further comprising: displaying a rendering effect map of the rendering object obtained by the rendering processing.
10. A rendering processing apparatus, comprising:
an acquisition module, configured to acquire a first rendering effect expansion map of a rendering object based on rendering data;
an upgrade module, configured to perform rendering effect upgrading on the first rendering effect expansion map using a pre-trained rendering effect upgrade model to obtain a second rendering effect expansion map;
and a rendering processing module, configured to perform rendering processing based on the rendering data and the second rendering effect expansion map.
11. The apparatus of claim 10, wherein the acquisition module is configured to:
acquire a first rendering effect map of each display surface of the driven rendering object based on the rendering data;
and stitch the first rendering effect maps of the display surfaces of the rendering object to obtain the first rendering effect expansion map of the rendering object.
12. The apparatus of claim 11, wherein the acquisition module is configured to:
acquire the first rendering effect map of each display surface of the driven rendering object based on the three-dimensional model of the rendering object, the texture map data, and the driving parameters in the rendering data.
13. The apparatus of claim 10, wherein the rendering precision represented by the rendering parameters of the second rendering effect expansion map is greater than the rendering precision represented by the rendering parameters of the first rendering effect expansion map.
14. The apparatus of claim 10, wherein the upgrade module is configured to:
use the pre-trained rendering effect upgrade model to improve the realism of the first rendering effect expansion map to obtain the second rendering effect expansion map.
15. The apparatus of claim 14, wherein the upgrade module is further configured to:
use the pre-trained rendering effect upgrade model to improve the resolution of the first rendering effect expansion map to obtain the second rendering effect expansion map.
16. The apparatus of any of claims 10-15, wherein the rendering processing module is configured to:
acquire each display surface of the driven rendering object based on the rendering data;
acquire a second rendering effect map of each display surface of the driven rendering object based on the second rendering effect expansion map;
and attach the second rendering effect map of each display surface of the rendering object to the corresponding display surface of the driven rendering object to realize the rendering processing.
17. The apparatus of claim 16, wherein the rendering processing module is configured to:
acquire each display surface of the driven rendering object based on the driving parameters in the rendering data and the three-dimensional model of the rendering object.
18. The apparatus of claim 16, further comprising: a display module, configured to display a rendering effect map of the rendering object obtained by the rendering processing.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-9.
CN202211604133.0A 2022-12-13 2022-12-13 Rendering processing method and device, electronic equipment and storage medium Active CN116206046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211604133.0A CN116206046B (en) 2022-12-13 2022-12-13 Rendering processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211604133.0A CN116206046B (en) 2022-12-13 2022-12-13 Rendering processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116206046A 2023-06-02
CN116206046B 2024-01-23

Family

ID=86515259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211604133.0A Active CN116206046B (en) 2022-12-13 2022-12-13 Rendering processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116206046B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564646A (en) * 2018-03-28 2018-09-21 腾讯科技(深圳)有限公司 Rendering intent and device, storage medium, the electronic device of object
KR20190013146A (en) * 2017-07-31 2019-02-11 주식회사 엔큐게임즈 Rendering optimization method for real-time mass processing of 3d objects in mobile environment
WO2020103040A1 (en) * 2018-11-21 2020-05-28 Boe Technology Group Co., Ltd. A method for generating and displaying panorama images based on rendering engine and a display apparatus
WO2021047429A1 (en) * 2019-09-11 2021-03-18 腾讯科技(深圳)有限公司 Image rendering method and device, apparatus, and storage medium
CN112581593A (en) * 2020-12-28 2021-03-30 深圳市人工智能与机器人研究院 Training method of neural network model and related equipment
CN113269858A (en) * 2021-07-19 2021-08-17 腾讯科技(深圳)有限公司 Virtual scene rendering method and device, computer equipment and storage medium
CN113327316A (en) * 2021-06-30 2021-08-31 联想(北京)有限公司 Image processing method, device, equipment and storage medium
CN114241105A (en) * 2021-12-06 2022-03-25 网易(杭州)网络有限公司 Interface rendering method, device, equipment and computer readable storage medium
CN114387382A (en) * 2021-12-31 2022-04-22 桂林长海发展有限责任公司 Method, system, storage medium, and electronic device for rendering radar scan animation
CN114463473A (en) * 2020-11-09 2022-05-10 中兴通讯股份有限公司 Image rendering processing method and device, storage medium and electronic equipment
CN114549722A (en) * 2022-02-25 2022-05-27 北京字跳网络技术有限公司 Rendering method, device and equipment of 3D material and storage medium
CN114792359A (en) * 2022-06-24 2022-07-26 北京百度网讯科技有限公司 Rendering network training and virtual object rendering method, device, equipment and medium
CN115330925A (en) * 2022-08-19 2022-11-11 北京字跳网络技术有限公司 Image rendering method and device, electronic equipment and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190013146A (en) * 2017-07-31 2019-02-11 주식회사 엔큐게임즈 Rendering optimization method for real-time mass processing of 3d objects in mobile environment
CN108564646A (en) * 2018-03-28 2018-09-21 腾讯科技(深圳)有限公司 Rendering intent and device, storage medium, the electronic device of object
WO2020103040A1 (en) * 2018-11-21 2020-05-28 Boe Technology Group Co., Ltd. A method for generating and displaying panorama images based on rendering engine and a display apparatus
WO2021047429A1 (en) * 2019-09-11 2021-03-18 腾讯科技(深圳)有限公司 Image rendering method and device, apparatus, and storage medium
CN114463473A (en) * 2020-11-09 2022-05-10 中兴通讯股份有限公司 Image rendering processing method and device, storage medium and electronic equipment
WO2022095714A1 (en) * 2020-11-09 2022-05-12 中兴通讯股份有限公司 Image rendering processing method and apparatus, storage medium, and electronic device
CN112581593A (en) * 2020-12-28 2021-03-30 深圳市人工智能与机器人研究院 Training method of neural network model and related equipment
CN113327316A (en) * 2021-06-30 2021-08-31 联想(北京)有限公司 Image processing method, device, equipment and storage medium
CN113269858A (en) * 2021-07-19 2021-08-17 腾讯科技(深圳)有限公司 Virtual scene rendering method and device, computer equipment and storage medium
CN114241105A (en) * 2021-12-06 2022-03-25 网易(杭州)网络有限公司 Interface rendering method, device, equipment and computer readable storage medium
CN114387382A (en) * 2021-12-31 2022-04-22 桂林长海发展有限责任公司 Method, system, storage medium, and electronic device for rendering radar scan animation
CN114549722A (en) * 2022-02-25 2022-05-27 北京字跳网络技术有限公司 Rendering method, device and equipment of 3D material and storage medium
CN114792359A (en) * 2022-06-24 2022-07-26 北京百度网讯科技有限公司 Rendering network training and virtual object rendering method, device, equipment and medium
CN115330925A (en) * 2022-08-19 2022-11-11 北京字跳网络技术有限公司 Image rendering method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116206046B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN112785674B (en) Texture map generation method, rendering device, equipment and storage medium
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN114820906B (en) Image rendering method and device, electronic equipment and storage medium
CN112862933B (en) Method, apparatus, device and storage medium for optimizing model
CN111754381B (en) Graphics rendering method, apparatus, and computer-readable storage medium
US10325414B2 (en) Application of edge effects to 3D virtual objects
US20150325048A1 (en) Systems, methods, and computer-readable media for generating a composite scene of a real-world location and an object
CN113870399B (en) Expression driving method and device, electronic equipment and storage medium
CN111583379A (en) Rendering method and device of virtual model, storage medium and electronic equipment
CN113379932B (en) Human body three-dimensional model generation method and device
CN113888398B (en) Hair rendering method and device and electronic equipment
CN115908687A (en) Method and device for training rendering network, method and device for rendering network, and electronic equipment
CN113961746B (en) Video generation method, device, electronic equipment and readable storage medium
CN112862934B (en) Method, apparatus, device, medium, and product for processing animation
CN116206046B (en) Rendering processing method and device, electronic equipment and storage medium
US10754498B2 (en) Hybrid image rendering system
CN115861510A (en) Object rendering method, device, electronic equipment, storage medium and program product
US11417058B2 (en) Anti-aliasing two-dimensional vector graphics using a multi-vertex buffer
CN114549303A (en) Image display method, image processing method, image display device, image processing equipment and storage medium
US20130257866A1 (en) Flexible defocus blur for stochastic rasterization
US20230078041A1 (en) Method of displaying animation, electronic device and storage medium
CN116363331B (en) Image generation method, device, equipment and storage medium
CN116778053B (en) Target engine-based map generation method, device, equipment and storage medium
US12002132B1 (en) Rendering using analytic signed distance fields
CN116152412A (en) Model rendering method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant