WO2023168999A1 - Virtual scene rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product - Google Patents

Virtual scene rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product

Info

Publication number
WO2023168999A1
Authority
WO
WIPO (PCT)
Prior art keywords: rendered, dimensional, dimensional element, grid data, elements
Application number
PCT/CN2022/135314
Other languages
English (en)
French (fr)
Inventor
He Jiaxun (何佳逊)
Zhang Dejia (张德嘉)
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Publication of WO2023168999A1
Priority to US18/378,066 (published as US20240033625A1)


Classifications

    • A: HUMAN NECESSITIES
      • A63: SPORTS; GAMES; AMUSEMENTS
        • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
          • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
            • A63F13/50: Controlling the output signals based on the game progress
              • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T13/00: Animation
            • G06T13/20: 3D [Three Dimensional] animation
          • G06T15/00: 3D [Three Dimensional] image rendering
            • G06T15/005: General purpose rendering architectures
          • G06T19/00: Manipulating 3D models or images for computer graphics
            • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
          • G06T3/00: Geometric image transformations in the plane of the image
          • G06T7/00: Image analysis
            • G06T7/70: Determining position or orientation of objects or cameras
              • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
          • G06T2210/00: Indexing scheme for image generation or computer graphics
            • G06T2210/56: Particle system, point based geometry or rendering
          • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
            • G06T2219/20: Indexing scheme for editing of 3D models
              • G06T2219/2016: Rotation, translation, scaling

Definitions

  • the present application relates to the field of computer technology, and in particular, to a virtual scene rendering method, device, electronic equipment, computer-readable storage media, and computer program products.
  • game engines provide game designers with various tools needed to write games.
  • the purpose is to allow game designers to easily and quickly create game programs.
  • the two-dimensional elements and three-dimensional elements in the game screen are usually rendered in a mixed manner.
  • Embodiments of the present application provide a virtual scene rendering method, device, electronic equipment, computer-readable storage medium and computer program product, which can unify the rendering modes of the three-dimensional elements to be rendered and the two-dimensional elements to be rendered, so that the rendering effect of the three-dimensional elements to be rendered is preserved, thereby effectively improving the mixed rendering effect of two-dimensional elements and three-dimensional elements.
  • Embodiments of the present application provide a virtual scene rendering method, which is executed by an electronic device, including:
  • Sampling is performed on the animation of each three-dimensional element to be rendered to obtain a grid sequence frame corresponding to the animation of each three-dimensional element to be rendered, wherein the animation of the three-dimensional element to be rendered includes the current frame and at least one historical frame, each of which includes the three-dimensional element to be rendered;
  • An embodiment of the present application provides a virtual scene rendering device, including:
  • a first acquisition module configured to acquire at least one three-dimensional element to be rendered and at least one two-dimensional element to be rendered from the current frame data to be rendered of the virtual scene;
  • a sampling module configured to perform sampling processing on the animation of each three-dimensional element to be rendered, and obtain a grid sequence frame corresponding to the animation of each three-dimensional element to be rendered, wherein the animation of the three-dimensional element to be rendered includes all The current frame and at least one historical frame, each of the historical frames includes the three-dimensional element to be rendered;
  • the second acquisition module is configured to obtain the grid data corresponding to each of the three-dimensional elements to be rendered from the grid sequence frame, wherein the coordinate systems of the grid data corresponding to different three-dimensional elements to be rendered are different;
  • a transformation module configured to perform transformation processing on the grid data corresponding to each of the three-dimensional elements to be rendered to obtain transformation grid data, and to create, through the transformation grid data, a converted two-dimensional element corresponding to each of the three-dimensional elements to be rendered;
  • a rendering module configured to render the at least one two-dimensional element to be rendered in the current frame, and render the converted two-dimensional element corresponding to each three-dimensional element to be rendered in the current frame.
  • An embodiment of the present application provides an electronic device, including:
  • a memory, configured to store executable instructions;
  • the processor is configured to implement the virtual scene rendering method provided by the embodiment of the present application when executing executable instructions stored in the memory.
  • Embodiments of the present application provide a computer-readable storage medium that stores executable instructions for causing the processor to implement the virtual scene rendering method provided by the embodiments of the present application when executed.
  • Embodiments of the present application provide a computer program product or computer program, which includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the virtual scene rendering method described above in the embodiment of the present application.
  • In this way, the transformation of the three-dimensional element to be rendered into a converted two-dimensional element is realized, and the converted two-dimensional element and the two-dimensional element to be rendered are then rendered. This effectively adapts the two-dimensional elements to be rendered and the three-dimensional elements to be rendered to each other, realizes the unification of the rendering modes of the three-dimensional elements to be rendered and the two-dimensional elements to be rendered, and allows the rendering effect of the three-dimensional elements to be rendered to be preserved, thereby effectively improving the mixed rendering effect of two-dimensional elements and three-dimensional elements.
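To make the flow above concrete, here is a minimal Python sketch of the five steps (acquire elements, sample the animation, read grid data, transform, render). Every name and structure in it (Frame, render_current_frame, the placeholder tuple standing in for a converted two-dimensional element) is an illustrative assumption, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One animation frame of the virtual scene (hypothetical structure)."""
    elements_3d: list = field(default_factory=list)
    elements_2d: list = field(default_factory=list)

def render_current_frame(current: Frame, history: list) -> list:
    """Sketch of the five-step flow; returns the draw list for the current frame."""
    draw_list = list(current.elements_2d)        # acquire the native 2D elements
    for elem_3d in current.elements_3d:          # acquire the 3D elements
        # Sampling: the grid sequence frames are the sampled frames
        # (historical frames plus the current frame) that contain the element.
        grid_frames = [f for f in history + [current] if elem_3d in f.elements_3d]
        # Read + transform: in a real engine the element's grid data would be
        # read from grid_frames and matrix-transformed; here the converted 2D
        # element is just a labeled placeholder.
        draw_list.append(("converted-2d", elem_3d, len(grid_frames)))
    # Render: the unified draw list is handed to the 2D rendering pipeline.
    return draw_list

# Example: one 3D element and two 2D elements in the current frame.
prev = Frame(elements_3d=["char"], elements_2d=["hud"])
cur = Frame(elements_3d=["char"], elements_2d=["hud", "button"])
print(render_current_frame(cur, [prev]))
```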
  • Figure 1 is a schematic architectural diagram of a virtual scene rendering system 100 provided by an embodiment of the present application
  • Figure 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application.
  • Figures 3A to 3E are schematic flow charts of a virtual scene rendering method provided by embodiments of the present application.
  • Figure 4A is a schematic diagram of the principle of a virtual scene rendering method provided by an embodiment of the present application.
  • Figure 4B is a schematic diagram of the effect of the virtual scene rendering method provided by the embodiment of the present application.
  • Figure 4C is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application.
  • FIGS. 4D to 4G are schematic diagrams of the principles of the virtual scene rendering method provided by embodiments of the present application.
  • Figure 5A is a schematic diagram of the effect of the virtual scene rendering method provided by the embodiment of the present application.
  • Figure 5B is a schematic diagram of the effect provided by related technologies
  • FIG. 5C is a schematic diagram of the effect of the virtual scene rendering method provided by the embodiment of the present application.
  • The terms "first", "second" and "third" are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understandable that, where permitted, the specific order or precedence of "first", "second" and "third" may be interchanged, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
  • Game: also known as a video game, this refers to all interactive games that run on electronic device platforms. According to the running medium, games are divided into five categories: console games (in the narrow sense, specifically home console games), handheld games, arcade games, computer games and mobile games.
  • Game engine refers to the core components of some programmed editable computer game systems or some interactive real-time image applications. These systems provide game designers with all the tools they need to program games, with the goal of allowing game designers to program games easily and quickly without having to start from scratch. Most of them support multiple operating platforms, such as Linux, Mac OS X, and Microsoft Windows.
  • the game engine includes the following systems: rendering engine (i.e. "renderer”, including two-dimensional image engine and three-dimensional image engine), physics engine, collision detection system, sound effects, script engine, computer animation, artificial intelligence, network engine and scene management.
  • Element: the container of all components (Component). Elements include two-dimensional elements and three-dimensional elements. All game objects in a game are essentially elements; a game object itself does not add any characteristics to the game, but is a container that accommodates the components implementing the actual functionality.
  • Game engine editor: includes the scene editor, particle effects editor, model browser, animation editor, material editor, etc.
  • the scene editor is used to place model objects, light sources, cameras, etc.
  • the particle effects editor is used to create various game special effects
  • the animation editor is used to edit animations, which can trigger certain game logic events
  • the material editor is used to edit model materials and effects.
  • Human-machine interface (HMI, Human Machine Interaction): also known as the user interface, it is the medium and dialogue interface for transmitting and exchanging information between humans and computers, and an important part of the computer system. It is the medium for interaction and information exchange between the system and the user, converting between the internal form of information and a form acceptable to humans.
  • Human-machine interfaces exist in all fields that involve human-machine information exchange.
  • Three-dimensional element: an element in the three-dimensional rendering system of the rendering engine.
  • Three-dimensional elements include particle three-dimensional elements, static three-dimensional elements and skinned three-dimensional elements.
  • Two-dimensional element: an element in the two-dimensional rendering system of the rendering engine. Two-dimensional elements can be various controls on an object-oriented programming platform; the base class of each two-dimensional element is Graphic.
  • Skinned three-dimensional element (Skinned Mesh): a three-dimensional element used to create skinned animation, that is, used to add animation special effects to the vertices on the geometry. A skinned three-dimensional element is a mesh with a skeleton (Skeleton) and bones (Bones).
  • Particle three-dimensional element (Particle System): a three-dimensional element used to create special effects in the rendering engine, simulating the movement and changes of particles through an internal physics system.
  • Mask component: a component in the game engine that can clip the display of two-dimensional elements. The mask component is used to specify the renderable range of child nodes: a node with a mask component uses the node's constraint frame to create a rendering mask, all child nodes of this node are clipped according to this mask, and two-dimensional elements outside the mask range are not rendered.
  • Group component: a component in the game engine used to group two-dimensional elements and control them uniformly.
  • Adaptation component: a component in the game engine used to adapt the layout of two-dimensional elements.
  • Element rendering component (Canvas Renderer): the renderer component responsible for rendering two-dimensional elements in the game engine.
  • Object data: data in the game engine that represents the position, rotation and scaling of an element.
  • Original coordinate system (Local Space): a coordinate system whose origin is the element's own pivot.
  • Canvas coordinate system (World Space): A coordinate system with the center of the canvas to be rendered as the origin.
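As a concrete illustration of these two coordinate systems, the small sketch below converts a point from an element's original (local) space into canvas (world) space. The function name and the example numbers are made up purely for illustration.

```python
def local_to_canvas(point_local, pivot_in_canvas):
    """Convert a 2D point from an element's local space to canvas space.

    Local space:  origin at the element's own pivot.
    Canvas space: origin at the center of the canvas to be rendered.
    """
    x, y = point_local
    px, py = pivot_in_canvas
    return (x + px, y + py)

# A vertex at (10, -5) in the local space of an element whose pivot sits
# at (120, 40) in canvas space lies at (130, 35) on the canvas.
print(local_to_canvas((10, -5), (120, 40)))  # (130, 35)
```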
  • embodiments of the present application realize mixed rendering of two-dimensional elements and three-dimensional elements by converting three-dimensional elements into converted two-dimensional elements, and then rendering the converted two-dimensional elements together with the two-dimensional elements to be rendered. This ensures that the rendering effect of the three-dimensional elements is not affected, which can effectively improve the mixed rendering effect of two-dimensional elements and three-dimensional elements, and at the same time effectively reduces the processing time of the central processing unit (CPU, Central Processing Unit), thereby improving processing efficiency.
  • FIG. 5A is a schematic diagram of the effect of a virtual scene rendering method provided by an embodiment of the present application.
  • Effect 1 is the effect of mixed rendering of two-dimensional elements and three-dimensional elements through the virtual scene rendering method provided by the embodiment of the present application.
  • Effect 2 is the effect of mixed rendering of two-dimensional elements and three-dimensional elements by related technologies.
  • Effect 3 is the real effect. Effect 1 restores Effect 3 to a high degree, while Effect 2 restores Effect 3 only to a low degree. That is, the virtual scene rendering method provided by the embodiments of the present application can effectively improve the mixed rendering effect of two-dimensional elements and three-dimensional elements.
  • Figure 5B is a schematic diagram of the effect provided by the related technology.
  • the processing time of the central processing unit is 1.1 ms, and the number of rendering batches is 7.
  • Figure 5C is a schematic diagram of the effect of a virtual scene rendering method provided by an embodiment of the present application.
  • the processing time of the central processing unit is 0.6 ms, and the number of rendering batches is 7. It can be seen that the virtual scene rendering method provided by the embodiment of the present application can effectively reduce the processing time of the central processing unit, thereby improving processing efficiency.
  • Embodiments of the present application provide a virtual scene rendering method, device, electronic equipment, computer-readable storage medium and computer program product, which can unify the rendering modes of the three-dimensional elements to be rendered and the two-dimensional elements to be rendered, so that the rendering effect of the three-dimensional elements to be rendered is retained, thereby effectively improving the mixed rendering effect of two-dimensional elements and three-dimensional elements.
  • the following describes exemplary applications of the electronic devices provided by the embodiments of the present application.
  • the electronic devices provided by the embodiments of the present application can be implemented as various types of user terminals, such as laptop computers, tablet computers, desktop computers, set-top boxes and mobile devices (e.g., mobile phones, portable music players, personal digital assistants, dedicated messaging devices, portable gaming devices), and can also be implemented as servers.
  • Figure 1 is a schematic architectural diagram of a virtual scene rendering system 100 provided by an embodiment of the present application, which supports an application scenario for realizing the rendering of virtual scenes (for example, mixed rendering of two-dimensional elements and three-dimensional elements in a game engine).
  • the terminal device 400 connects to the server 200 through the network 300.
  • the network 300 may be a wide area network or a local area network, or a combination of the two.
  • the terminal device 400 is used by the user to run the client 410, which displays content on the graphical interface 410-1.
  • the terminal device 400 and the server 200 are connected to each other through a wired or wireless network.
  • the server 200 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), and big data and artificial intelligence platforms.
  • the terminal device 400 may be a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, etc., but is not limited thereto.
  • the terminal and the server can be connected directly or indirectly through wired or wireless communication methods, which are not limited in the embodiments of this application.
  • the client 410 of the terminal device 400 obtains the three-dimensional element to be rendered and the two-dimensional element to be rendered, and sends the three-dimensional element to be rendered to the server 200 through the network 300.
  • the server 200 determines, based on the three-dimensional elements to be rendered, the corresponding converted two-dimensional elements, and sends the converted two-dimensional elements to the terminal device 400.
  • the terminal device 400 renders the converted two-dimensional elements and the two-dimensional elements to be rendered, and displays them in the graphical interface 410-1.
  • Alternatively, the client 410 of the terminal device 400 obtains the three-dimensional element to be rendered and the two-dimensional element to be rendered, and determines the converted two-dimensional element corresponding to the three-dimensional element to be rendered based on the three-dimensional element to be rendered; the terminal device 400 then renders the converted two-dimensional element and the two-dimensional element to be rendered, and displays them in the graphical interface 410-1.
  • FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application.
  • the terminal device 400 shown in Figure 2 includes: at least one processor 410, a memory 450, and at least one network interface 420 and user interface 430.
  • the various components in the terminal device 400 are coupled together via a bus system 440 .
  • the bus system 440 is used to implement connection communication between these components.
  • the bus system 440 also includes a power bus, a control bus, and a status signal bus.
  • the various buses are labeled bus system 440 in FIG. 2 .
  • the processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, etc., where the general-purpose processor may be a microprocessor or any conventional processor.
  • User interface 430 includes one or more output devices 431 that enable the presentation of media content, including one or more speakers and/or one or more visual displays.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, etc.
  • Memory 450 optionally includes one or more storage devices physically located remotely from processor 410 .
  • Memory 450 includes volatile memory or non-volatile memory, and may include both volatile and non-volatile memory.
  • Non-volatile memory can be read-only memory (ROM, Read Only Memory), and volatile memory can be random access memory (RAM, Random Access Memory).
  • the memory 450 described in the embodiments of this application is intended to include any suitable type of memory.
  • the memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplarily described below.
  • the operating system 451 includes system programs used to process various basic system services and perform hardware-related tasks, such as the framework layer, core library layer, driver layer, etc., which are used to implement various basic services and process hardware-based tasks;
  • Network communication module 452 for reaching other computer devices via one or more (wired or wireless) network interfaces 420.
  • Exemplary network interfaces 420 include: Bluetooth, wireless compatibility certification (WiFi), Universal Serial Bus (USB), etc.;
  • Presentation module 453, for enabling the presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with the user interface 430;
  • An input processing module 454 for detecting one or more user inputs or interactions from one or more input devices 432 and translating the detected inputs or interactions.
  • the virtual scene rendering device provided by the embodiment of the present application can be implemented in software.
  • Figure 2 shows the virtual scene rendering device 455 stored in the memory 450, which can be in the form of a program, a plug-in, etc.
  • the software includes the following software modules: the first acquisition module 4551, the sampling module 4552, the second acquisition module 4553, the transformation module 4554 and the rendering module 4555. These modules are logical, and thus can be arbitrarily combined or further split according to the functions implemented. The functions of each module are explained below.
  • Figure 3A is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application. It will be described in conjunction with steps 101 to 105 shown in Figure 3A.
  • the execution subject of the following steps 101 to 105 can be the aforementioned server or terminal device.
  • step 101 at least one three-dimensional element to be rendered and at least one two-dimensional element to be rendered are obtained from the current frame data to be rendered in the virtual scene.
  • FIG. 4A is a schematic diagram of the principle of a virtual scene rendering method provided by an embodiment of the present application.
  • the current frame to be rendered in the virtual scene is the sampling frame 53
  • at least one three-dimensional element to be rendered and at least one two-dimensional element to be rendered can be obtained from the frame data of the sampling frame 53.
  • the three-dimensional element to be rendered can be the three-dimensional element 11
  • the two-dimensional element to be rendered can be two-dimensional element 12 and two-dimensional element 13.
  • step 102 the animation of each three-dimensional element to be rendered is sampled to obtain a grid sequence frame corresponding to the animation of each three-dimensional element to be rendered.
  • the animation of the three-dimensional element to be rendered includes the current frame and at least one historical frame.
  • Each historical frame includes the three-dimensional element to be rendered (such as a three-dimensional game character).
  • the grid sequence frames refer to the sampling frames, among the multiple sampling frames obtained by sampling the animation, that include the three-dimensional element to be rendered.
  • the animation of the three-dimensional element to be rendered includes the current frame 53 and the historical frames 51 and 52 .
  • the animation of the three-dimensional element 11 includes the current frame 53 and the historical frames 51 and 52 .
  • multiple grid sequence frames corresponding to the animation of the three-dimensional element 11 can be obtained, including, for example, the historical frame 51, the historical frame 52, and the current frame 53.
  • FIG. 3B is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application.
  • Step 102 shown in Figure 3A can be implemented by executing steps 1021 to 1022 shown in Figure 3B for any three-dimensional element to be rendered, which will be described separately below.
  • step 1021 the animation of each three-dimensional element to be rendered is sampled according to the sampling interval to obtain multiple sampling frames corresponding to the animation of each three-dimensional element to be rendered.
  • the number of sampling frames may be negatively correlated with the duration of the sampling interval, that is, the longer the duration of the sampling interval, the smaller the number of sampling frames.
  • the sampling interval is the time interval between any two adjacent sampling points. The longer the sampling interval, the smaller the number of sampling frames obtained; the shorter the sampling interval, the greater the number of sampling frames obtained.
  • the sampling interval can be 1ms. According to the sampling interval of 1ms, the animation of the three-dimensional element 11 is sampled to obtain the sampling frame 51, the sampling frame 52 and the sampling frame 53 corresponding to the animation of the three-dimensional element 11.
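A minimal sketch of this sampling step, under the assumption that an animation is described simply by its total duration and is sampled at a fixed interval; the function name and numbers are illustrative, not the patent's implementation.

```python
def sampling_times(duration_ms: int, interval_ms: int) -> list:
    """Return the instants (in ms) at which the animation is sampled.

    The number of sampling frames is negatively correlated with the
    sampling interval: a longer interval yields fewer frames.
    """
    return list(range(0, duration_ms + 1, interval_ms))

# A 10 ms animation sampled every 1 ms yields 11 sampling frames,
# while the same animation sampled every 5 ms yields only 3.
print(len(sampling_times(10, 1)))  # 11
print(len(sampling_times(10, 5)))  # 3
```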
  • step 1022 among the multiple sampling frames, determine the grid sequence frame corresponding to the animation of the three-dimensional element to be rendered.
  • the grid sequence frame is a sampling frame including a three-dimensional element to be rendered among multiple sampling frames.
  • the three-dimensional element 11 is included in the sampling frame 51 , the sampling frame 52 and the sampling frame 53 .
  • the sampling frame 51 , the sampling frame 52 and the sampling frame 53 are all grid sequence frames.
  • Correspondingly, sampling frames that do not include the three-dimensional element to be rendered are not grid sequence frames. For example, suppose the multiple sampling frames are 5 sampling frames: sampling frame 1, sampling frame 2, sampling frame 3, sampling frame 4 and sampling frame 5. If sampling frame 2 does not include the three-dimensional element to be rendered, then sampling frame 2 is not a grid sequence frame; that is, only sampling frame 1, sampling frame 3, sampling frame 4 and sampling frame 5 are determined as grid sequence frames.
  • the above step 1022 can be implemented in the following manner: in the animation of the three-dimensional element to be rendered, determine the start playback time and the end playback time of the three-dimensional element to be rendered; based on the start playback time and the end playback time, determine, among the multiple sampling frames, the grid sequence frames corresponding to the animation of the three-dimensional element to be rendered.
  • In the animation of the three-dimensional element to be rendered, the start playback time and the end playback time of the three-dimensional element to be rendered are determined.
  • For example, suppose the total playing time of the animation including the three-dimensional element to be rendered is 10 minutes. If, counting from the start of the animation, the three-dimensional element to be rendered appears in the animation at the 2nd minute, then the start playback time of the three-dimensional element to be rendered is the 2nd minute; and if the three-dimensional element to be rendered disappears from the animation at the 9th minute and 10th second, then the end playback time of the three-dimensional element to be rendered is the 9th minute and 10th second.
  • In this way, the start playback time and the end playback time of the three-dimensional element to be rendered are determined, which facilitates the subsequent accurate determination of the grid sequence frames based on the start playback time and the end playback time.
  • the above-mentioned determination, based on the start playback time and the end playback time, of the grid sequence frames corresponding to the animation including the three-dimensional element to be rendered among the multiple sampling frames can be implemented in the following manner: when the start playback time and the end playback time are the same time, the one sampling frame at that time among the multiple sampling frames is determined as the grid sequence frame corresponding to the animation; when the start playback time and the end playback time are different times, at least two sampling frames, among the multiple sampling frames, between the start playback time and the end playback time are determined as the grid sequence frames corresponding to the animation.
  • When the start playback time and the end playback time are the same time, the three-dimensional element to be rendered appears and immediately disappears from the animation, that is, the three-dimensional element to be rendered flashes by in its animation. In this case, the one sampling frame at that time among the multiple sampling frames is determined as the grid sequence frame corresponding to the animation.
  • the start playback time of the three-dimensional element to be rendered is the 2nd minute
  • the end playback time of the three-dimensional element to be rendered is the 9th minute and 10th second
  • at least two sampling frames, among the multiple sampling frames, between the 2nd minute and the 9th minute and 10th second are determined as the grid sequence frames corresponding to the animation.
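The selection rule just described can be sketched as follows, assuming each sampling frame is identified by its timestamp: when the start and end playback times coincide, exactly one frame is selected; otherwise every sampling frame in the closed interval is selected. All names and times are illustrative.

```python
def grid_sequence_frames(sample_times_ms, start_ms, end_ms):
    """Pick the sampling times whose frames form the grid sequence frames."""
    if start_ms == end_ms:
        # The element flashes by: take the single frame at that moment.
        return [t for t in sample_times_ms if t == start_ms]
    # Otherwise take every sampling frame between start and end, inclusive.
    return [t for t in sample_times_ms if start_ms <= t <= end_ms]

# Element visible from the 2nd minute (120 000 ms) to 9 min 10 s
# (550 000 ms), in an animation sampled once per minute:
times = list(range(0, 600_001, 60_000))
print(grid_sequence_frames(times, 120_000, 550_000))
# [120000, 180000, 240000, 300000, 360000, 420000, 480000, 540000]
```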
  • step 103 the grid data corresponding to each three-dimensional element to be rendered is obtained from the grid sequence frame;
  • the three-dimensional elements to be rendered include skinned three-dimensional elements, particle three-dimensional elements and static three-dimensional elements, where the coordinate systems of the mesh data corresponding to the skinned three-dimensional elements, particle three-dimensional elements and static three-dimensional elements respectively are different.
  • FIG. 3B is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application.
  • Step 103 shown in Figure 3A can be implemented by executing steps 1031 to 1032 shown in Figure 3B for any three-dimensional element to be rendered, which will be described separately below.
  • step 1031 the renderer type of the renderer corresponding to the element type of the three-dimensional element to be rendered is determined.
  • when the element type of the three-dimensional element to be rendered is a skinned three-dimensional element, the renderer type of the corresponding renderer is a skin renderer, where the skin renderer is used to render the skinned three-dimensional element.
  • when the element type of the three-dimensional element to be rendered is a particle three-dimensional element, the renderer type of the corresponding renderer is a particle renderer, where the particle renderer includes a renderer run by the central processing unit and a renderer run by the graphics processing unit; particle renderers are used to render particle three-dimensional elements.
  • when the element type of the three-dimensional element to be rendered is a static three-dimensional element, the renderer type of the corresponding renderer is a grid renderer, where the grid renderer is used to render static three-dimensional elements.
  • step 1032 the grid data corresponding to the three-dimensional element to be rendered is obtained from the grid sequence frame corresponding to the renderer type.
  • the above step 1032 can be implemented by performing the following processing on the skinned three-dimensional element: from the grid sequence frame corresponding to the skin renderer, obtain the grid data corresponding to the skinned three-dimensional element; here, the grid data corresponding to the skinned three-dimensional element includes translation data, rotation data and scaling data. The coordinate system of the translation data and rotation data takes the position of the skinned three-dimensional element as the origin, and the coordinate system of the scaling data takes the center point of the canvas to be rendered as its origin.
  • the translation data represents the translation characteristics of the skinned three-dimensional element.
  • the translation characteristic may be that the skinned three-dimensional element translates from position A at one time to position B at another time.
  • the rotation data represents the rotation characteristics of the skinned three-dimensional element.
  • the rotation characteristic may be that the skinned three-dimensional element rotates from posture A at one time to posture B at another time.
  • the scaling data represents the scaling characteristics of the skinned three-dimensional element.
  • the scaling characteristic may be that the skinned three-dimensional element is scaled from size A at one time to size B at another time.
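The grid data of a skinned three-dimensional element described above can be pictured as a small structure like the following; the field names and the two sampled frames are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class SkinnedGridData:
    """Grid data sampled for a skinned 3D element.

    translation / rotation: in the local coordinate system, whose origin
    is the skinned element's own position.
    scaling: in the canvas coordinate system, whose origin is the center
    point of the canvas to be rendered.
    """
    translation: tuple
    rotation: tuple
    scaling: tuple

# Between two sampled frames the element translates from position A to B,
# rotates from posture A to B, and scales from size A to B.
frame_a = SkinnedGridData(translation=(0, 0, 0), rotation=(0, 0, 0), scaling=(1, 1, 1))
frame_b = SkinnedGridData(translation=(4, 0, 0), rotation=(0, 90, 0), scaling=(2, 2, 2))
print(frame_a)
print(frame_b)
```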
  • the above step 1032 can be implemented by performing the following processing on the static three-dimensional element: obtaining, from the grid sequence frame corresponding to the grid renderer, the grid data corresponding to the static three-dimensional element; here, the coordinate system of the grid data corresponding to the static three-dimensional element takes the position of the static three-dimensional element as the origin.
  • a static 3D element may be a 3D element other than a skinned 3D element and a particle 3D element. Since the renderer type of the renderer corresponding to the static three-dimensional element is a mesh renderer, the mesh data corresponding to the static three-dimensional element can be accurately obtained from the mesh sequence frame corresponding to the mesh renderer.
  • the above step 1032 can be implemented by performing the following processing on the particle three-dimensional element: obtaining the first grid data corresponding to the particle three-dimensional element from the grid sequence frame corresponding to the renderer run by the central processing unit; obtaining the second grid data corresponding to the particle three-dimensional element from the grid sequence frame corresponding to the renderer run by the graphics processing unit; and determining the first grid data and the second grid data together as the grid data corresponding to the particle three-dimensional element; here, the coordinate system of the grid data corresponding to the particle three-dimensional element takes the center point of the canvas to be rendered as the origin.
  • Since the renderer type of the renderer corresponding to the particle three-dimensional element is a particle renderer, and the particle renderer includes a renderer run by the central processing unit and a renderer run by the graphics processing unit, the grid data corresponding to the particle three-dimensional element can be obtained separately from the renderer run by the central processing unit and the renderer run by the graphics processing unit.
  • step 104 the grid data corresponding to each three-dimensional element to be rendered is transformed to obtain transformed grid data, and by transforming the grid data, a transformed two-dimensional element corresponding to each three-dimensional element to be rendered is created.
  • the above transformation process may be a matrix transformation process, which is used to perform dimensional transformation on the matrix form of the grid data, thereby reducing the three-dimensional grid data to two-dimensional grid data.
  • the obtained transformation grid data includes: first transformation grid data based on the original coordinate system, and second transformation grid data based on the canvas coordinate system, where the canvas coordinate system takes the center point of the canvas to be rendered as the origin, and the original coordinate system takes the position of the converted two-dimensional element as the origin.
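As a deliberately simple example of such a matrix transformation, the sketch below reduces three-dimensional vertex data to two dimensions with a single 2x3 projection matrix that drops the depth axis. A real engine would use the full model-view-projection chain; this only illustrates the dimensional reduction on the matrix form of the grid data, and all names are assumptions.

```python
import numpy as np

# A 2x3 matrix that keeps x and y and discards z.
PROJECT_XY = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])

def grid_to_2d(vertices_3d: np.ndarray) -> np.ndarray:
    """Reduce an (N, 3) array of 3D vertices to an (N, 2) array of 2D points."""
    return vertices_3d @ PROJECT_XY.T

triangle = np.array([[0.0, 0.0, 5.0],
                     [1.0, 0.0, 5.0],
                     [0.0, 1.0, 5.0]])
print(grid_to_2d(triangle))  # [[0. 0.] [1. 0.] [0. 1.]]
```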
  • step 105 at least one two-dimensional element to be rendered in the current frame is rendered, and a converted two-dimensional element corresponding to each three-dimensional element to be rendered in the current frame is rendered.
  • Since the coordinate systems of the grid data corresponding to different three-dimensional elements to be rendered are different, the coordinate systems of the transformation grid data obtained by transforming the grid data corresponding to each three-dimensional element to be rendered are likewise not unified. The coordinate systems can therefore be unified either when creating the converted two-dimensional elements, or when rendering the converted two-dimensional elements.
  • FIG. 3C is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application.
  • Step 104 shown in Figure 3A can be implemented by executing steps 1041 to 1043 shown in Figure 3C for any three-dimensional element to be rendered, which will be described separately below.
  • step 1041 the first transformation grid data is read from the transformation grid data obtained by performing transformation processing on the grid data corresponding to the three-dimensional element to be rendered.
  • the obtained transformed grid data includes first transformed grid data based on the original coordinate system and second transformed grid data based on the canvas coordinate system. Therefore, the first transformation grid data and the second transformation grid data can be read from the obtained transformation grid data.
  • step 1042 the first transformation grid data is converted into third transformation grid data based on the canvas coordinate system.
  • the first transformation grid data includes at least one of the following: transformed translation data, transformed rotation data, and static transformed grid data. In the above step 1042, converting the first transformation grid data into the third transformation grid data based on the canvas coordinate system can be realized in the following way: when the three-dimensional element to be rendered is a skinned three-dimensional element, the transformed translation data based on the original coordinate system is converted into transformed translation data based on the canvas coordinate system, and the transformed rotation data based on the original coordinate system is converted into transformed rotation data based on the canvas coordinate system; when the three-dimensional element to be rendered is a static three-dimensional element, the static transformed grid data based on the original coordinate system is converted into static transformed grid data based on the canvas coordinate system.
  • When the three-dimensional element to be rendered is a particle three-dimensional element, since the coordinate system of the grid data corresponding to the particle three-dimensional element already takes the center point of the canvas to be rendered as the origin, there is no need to perform coordinate system conversion, and this grid data is already unified with the converted coordinate systems of the grid data corresponding to the above static three-dimensional elements and skinned three-dimensional elements.
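Putting the three cases of step 1042 together, the unification can be sketched as a small dispatch over the element type: skinned translation data and static grid data are converted from the original coordinate system into the canvas coordinate system, while particle data passes through unchanged. The offset-based conversion, the dictionary layout, and the omission of the rotation conversion are simplifying assumptions for illustration.

```python
def unify_to_canvas(element_type: str, grid_data: dict, pivot_in_canvas):
    """Convert first-transformation grid data into canvas-space data."""
    px, py = pivot_in_canvas

    def shift(points):
        # Toy local-to-canvas conversion: offset by the element's pivot.
        return [(x + px, y + py) for (x, y) in points]

    if element_type == "skinned":
        # Convert the translation data; rotation conversion is omitted here.
        return {**grid_data, "translation": shift(grid_data["translation"])}
    if element_type == "static":
        # Convert the static transformation grid data.
        return {**grid_data, "mesh": shift(grid_data["mesh"])}
    # Particle data is already based on the canvas center: no conversion.
    return grid_data

print(unify_to_canvas("static", {"mesh": [(0, 0), (1, 0)]}, (120, 40)))
# {'mesh': [(120, 40), (121, 40)]}
```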
  • step 1043 based on the third transformation grid data and the second transformation grid data, a converted two-dimensional element corresponding to the three-dimensional element to be rendered is created.
  • In this way, the coordinate system of the created converted two-dimensional element is based on the canvas coordinate system, thus realizing the unification of the coordinate systems.
  • the method of creating converted two-dimensional elements in the above step 1043 can be implemented in the following manner: based on the third transformation grid data and the second transformation grid data, determine the position of each three-dimensional element to be rendered in the canvas coordinate system coordinates; based on the coordinates and the geometric characteristics of the three-dimensional element to be rendered, create a converted two-dimensional element corresponding to each three-dimensional element to be rendered, where the geometric characteristics represent the geometric shape of the three-dimensional element to be rendered.
  • Since the third transformation grid data and the second transformation grid data are both based on the canvas coordinate system, the coordinates of each three-dimensional element to be rendered in the canvas coordinate system, that is, the specific position of each three-dimensional element to be rendered in the canvas coordinate system, can be determined based on the third transformation grid data and the second transformation grid data.
  • Then, based on the coordinates and the geometric characteristics, a converted two-dimensional element corresponding to each three-dimensional element to be rendered is created, where the converted two-dimensional element can be the projection of the three-dimensional element to be rendered onto the canvas coordinate system.
  • step 105 shown in FIG. 3A can be implemented by executing step 1051 shown in FIG. 3C, which will be described below.
  • step 1051 the created converted two-dimensional element corresponding to the three-dimensional element to be rendered is rendered.
  • Figure 3D is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application. Step 104 shown in Figure 3A can be implemented by executing step 1044 shown in Figure 3D, which will be described separately below.
  • step 1044 a converted two-dimensional element corresponding to the three-dimensional element to be rendered is created based on the first transformation grid data and the second transformation grid data.
  • Since the coordinate systems will be unified when rendering the converted two-dimensional elements, there is no need to unify the coordinate systems when creating the converted two-dimensional elements; the converted two-dimensional element corresponding to the three-dimensional element to be rendered is created directly based on the first transformation grid data and the second transformation grid data.
  • Since the first transformation grid data is based on the original coordinate system and the second transformation grid data is based on the canvas coordinate system, the coordinate system of the converted two-dimensional element created in this way is not yet unified.
  • step 105 shown in FIG. 3A can be implemented by executing steps 1052 to 1055 shown in FIG. 3D for any converted two-dimensional element, which will be described separately below.
  • step 1052 the first transformation grid data is read from the obtained transformation grid data.
  • the obtained transformed grid data includes first transformed grid data based on the original coordinate system and second transformed grid data based on the canvas coordinate system. Therefore, the first transformation grid data and the second transformation grid data can be read from the obtained transformation grid data.
  • step 1053 the first transformation grid data is converted into fourth transformation grid data based on the canvas coordinate system.
  • the fourth transformed grid data based on the canvas coordinate system is obtained by transforming the coordinate system of the first transformed grid data.
  • step 1054 based on the fourth transformation grid data and the second transformation grid data, a to-be-rendered transformed two-dimensional element for rendering directly on the to-be-rendered canvas is created.
  • In this way, the coordinate system of the created converted two-dimensional element is based on the canvas coordinate system, thus realizing the unification of the coordinate systems.
  • step 1055 the two-dimensional element to be rendered converted is rendered.
  • FIG. 3E is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application.
  • the sorting of the two-dimensional elements to be rendered and the two-dimensional elements to be converted can also be implemented by executing steps 106 to 109 shown in FIG. 3E , which will be described separately below.
  • step 106 a second memory space is applied for.
  • At least one two-dimensional element to be rendered and a converted two-dimensional element corresponding to each three-dimensional element to be rendered are stored in the first memory space.
  • step 107 based on at least one two-dimensional element to be rendered and the converted two-dimensional element corresponding to each three-dimensional element to be rendered in the first memory space, rendering data respectively corresponding to the two-dimensional element to be rendered and the converted two-dimensional element is generated.
  • the rendering data represents the rendering level at which the element is located.
  • In this way, rendering data corresponding to the two-dimensional elements to be rendered and the converted two-dimensional elements is generated, which facilitates the subsequent sorting, based on the rendering data, of the two-dimensional elements to be rendered and the converted two-dimensional elements in the first memory space.
  • step 108 based on the rendering data, the two-dimensional elements to be rendered and the converted two-dimensional elements in the first memory space are sorted to obtain the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements.
  • Here, according to the rendering levels at which the two-dimensional elements to be rendered and the converted two-dimensional elements are respectively located, the two-dimensional elements to be rendered and the converted two-dimensional elements in the first memory space can be sorted, obtaining the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements.
  • step 109 the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements are stored in the second memory space.
  • sorting processing can be used to determine the rendering order between elements.
  • the following processing can also be performed for any two-dimensional element to be rendered in the first memory space to determine the rendering order: based on the rendering data of the two-dimensional element to be rendered, determine the hierarchical relationship between the two-dimensional element to be rendered and the other elements in the first memory space, where the other elements are the two-dimensional elements in the first memory space other than the two-dimensional element to be rendered; based on the hierarchical relationship, determine the rendering order between the two-dimensional element to be rendered and the other elements in the first memory space, where the hierarchical relationship is positively correlated with the rendering order.
  • For example, when the level of the two-dimensional element to be rendered is the lowest level, the hierarchical relationship between the two-dimensional element to be rendered and the other elements in the first memory space is that the levels of the other elements in the first memory space are all greater than the level of the two-dimensional element to be rendered; based on this hierarchical relationship, the rendering order between the two-dimensional element to be rendered and the other elements in the first memory space is determined. Since the level of the two-dimensional element to be rendered is the lowest level, the other elements in the first memory space are rendered first, and the two-dimensional element to be rendered is rendered last.
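A minimal sketch of this sorting step, assuming each element's rendering data is reduced to a (name, level) pair: elements at higher levels are placed earlier in the rendering order, so the element at the lowest level is rendered last, as in the example above. The pair layout and names are illustrative assumptions.

```python
def sort_for_rendering(first_memory_space):
    """Sort (name, rendering_level) pairs into rendering order.

    The hierarchical relationship is positively correlated with the
    rendering order: higher levels are rendered first, the lowest last.
    """
    # The second memory space holds the sorted result.
    second_memory_space = sorted(first_memory_space,
                                 key=lambda elem: elem[1], reverse=True)
    return second_memory_space

elements = [("element_a", 0), ("element_b", 3), ("element_c", 2)]
print(sort_for_rendering(elements))
# [('element_b', 3), ('element_c', 2), ('element_a', 0)] -> element_a last
```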
  • the three-dimensional rendering component used to render the three-dimensional element to be rendered may also be disabled.
  • the above step 105 can be implemented in the following manner: calling a two-dimensional rendering component, and sequentially rendering, according to the rendering order, the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements.
  • the animation of the virtual scene may include multiple sampling frames.
  • the two-dimensional element 12, the two-dimensional element 13 and the three-dimensional element 11 in the sampling frame 51, the sampling frame 52 and the sampling frame 53 shown in Figure 4A change dynamically; that is, the positions of the two-dimensional element 12, the two-dimensional element 13 and the three-dimensional element 11 differ across the sampling frame 51, the sampling frame 52 and the sampling frame 53. Specifically, in the sampling frame 52, the two-dimensional element 13 has been correctly rendered below the two-dimensional element 12, thereby realizing hierarchical interleaving between two-dimensional elements.
  • FIG. 4B is a schematic diagram of the effect of a virtual scene rendering method provided by an embodiment of the present application.
  • mixed rendering of two-dimensional elements and three-dimensional elements in game objects of different types is realized, which effectively improves the mixed rendering effect of two-dimensional elements and three-dimensional elements.
  • FIG. 4C is a schematic flowchart of a virtual scene rendering method provided by an embodiment of the present application. Description will be made with reference to steps 501 to 507 shown in FIG. 4C .
  • step 501 skinned three-dimensional elements, static three-dimensional elements and particle three-dimensional elements are obtained.
  • step 502 grid data is updated.
  • the grid data corresponding to the skinned three-dimensional elements, static three-dimensional elements, and particle three-dimensional elements are updated.
  • step 503 the rendering module is disabled.
  • the 3D rendering component used to render the 3D elements to be rendered is disabled, leaving only the logical update part to update the grid data.
  • step 504 grid data is obtained.
  • FIG. 4D is a schematic diagram of the principle of a virtual scene rendering method provided by an embodiment of the present application.
  • the process of obtaining grid data will be described below in conjunction with FIG. 4D .
  • When the element type of the three-dimensional element to be rendered is a skinned three-dimensional element, the grid data corresponding to the skinned three-dimensional element is obtained from the grid sequence frame corresponding to the skin renderer; here, the coordinate systems of the grid data corresponding to the skinned three-dimensional element can be the local coordinate system and the canvas coordinate system.
  • the grid data corresponding to the skinned three-dimensional elements includes rotation data, translation data and scaling data.
  • the coordinate system of the rotation data and translation data can be the local coordinate system
  • the coordinate system of the scaling data can be the canvas coordinate system.
  • When the element type of the three-dimensional element to be rendered is a static three-dimensional element, the grid data corresponding to the static three-dimensional element is obtained from the grid sequence frame corresponding to the grid renderer; here, the coordinate system of the grid data corresponding to the static three-dimensional element can be the local coordinate system.
  • When the element type of the three-dimensional element to be rendered is a particle three-dimensional element, the grid data corresponding to the particle three-dimensional element can be obtained from the grid sequence frame corresponding to the particle renderer; here, the coordinate system of the grid data corresponding to the particle three-dimensional element can be the canvas coordinate system.
  • step 505 the grid data is transformed.
  • matrix transformation processing is performed on the acquired grid data corresponding to the skinned three-dimensional elements, static three-dimensional elements and particle three-dimensional elements to obtain the transformation grid data.
  • step 506 a converted two-dimensional element corresponding to the three-dimensional element is created.
  • a transformed 2D element corresponding to each 3D element to be rendered is created.
  • step 507 the coordinate system of each transformed two-dimensional element is unified.
  • the coordinate systems of the converted two-dimensional elements corresponding to each three-dimensional element to be rendered are unified, so that the rendered converted two-dimensional elements are in the same coordinate system.
  • the unification of the coordinate system is completed in the process of creating a converted two-dimensional element corresponding to each three-dimensional element to be rendered.
  • For example, the grid data corresponding to the skinned three-dimensional element includes rotation data, translation data and scaling data. The coordinate system of the rotation data and translation data can be the local coordinate system, that is, the first transformation grid data is the rotation data and translation data, and the second transformation grid data is the scaling data. The first transformation grid data is converted into the third transformation grid data based on the canvas coordinate system, and then, based on the third transformation grid data and the second transformation grid data, the converted two-dimensional element corresponding to the three-dimensional element to be rendered is created.
  • the unification of the coordinate system is completed.
  • the unification of the coordinate system is completed.
  • read the first transformation grid data From the obtained transformation grid data, read the first transformation grid data; convert the first transformation grid data into the fourth transformation grid data based on the canvas coordinate system; based on the fourth transformation grid data and the second Transform the grid data to create a two-dimensional element to be rendered and converted for rendering directly on the canvas to be rendered; render the two-dimensional element to be rendered and converted.
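  • a sketch of the two strategies, assuming the local-to-canvas conversion is an affine change of basis (the affine form and all function names are assumptions; the text only says the coordinate systems are unified):

```python
import numpy as np

def to_canvas(points, rotation, offset):
    """Local -> canvas conversion, assumed affine: p -> R @ p + t."""
    return [rotation @ np.asarray(p) + offset for p in points]

def create_converted_element(first_grid, second_grid, rotation, offset,
                             unify_at_creation=True):
    """Either way the converted 2D element ends up canvas-relative; the two
    strategies differ only in when to_canvas runs (creation vs. rendering)."""
    if unify_at_creation:
        # strategy 1: unify while creating the converted 2D element
        third_grid = to_canvas(first_grid, rotation, offset)
        return {"grid": third_grid + list(second_grid)}
    # strategy 2: keep local data; the renderer derives the "fourth"
    # transformation grid data just before drawing on the canvas
    return {"grid_local": list(first_grid), "grid_canvas": list(second_grid),
            "unify": lambda: to_canvas(first_grid, rotation, offset)}
```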
  • Figure 4E is a schematic diagram of the principle of a virtual scene rendering method provided by an embodiment of the present application.
  • in the process of rendering the converted two-dimensional element corresponding to each three-dimensional element to be rendered in the current frame, the unification of the coordinate system is completed by converting the original coordinate system into the canvas coordinate system; matrix transformation processing is performed on the grid data corresponding to the three-dimensional element to be rendered, and a correction transformation is applied to the matrix transformation result.
  • the rendering order can be determined in the following manner: first, memory space is applied for in advance (Prepare Output); then, corresponding rendering information data is generated according to the instruction set, and the two-dimensional elements to be rendered and the converted two-dimensional elements are sorted based on the rendering information data, thereby determining the rendering order between them, as in the sorting sketch below.
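  • a sketch of the sorting pass, assuming each element's rendering data carries a layer field (a hypothetical name) and that the hierarchy level is positively correlated with the rendering order, as stated elsewhere in this document:

```python
def sort_for_rendering(first_buffer):
    """Copy elements from the first memory space into a freshly allocated
    second memory space, ordered by layer: lower layers are rendered first,
    so higher layers end up drawn on top."""
    second_buffer = sorted(first_buffer, key=lambda elem: elem.layer)
    return second_buffer  # walked once, in rendering order, by the 2D component
```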
  • FIG. 4F and FIG. 4G are schematic diagrams of the principles of the virtual scene rendering method provided by embodiments of the present application.
  • Figure 4F shows the time overhead of the related technology, and Figure 4G shows the time overhead of the virtual scene rendering method provided by the embodiment of the present application.
  • in the related technology, the time overhead is 0.92 milliseconds (ms) and the total overhead of the central processing unit is 1.58 ms.
  • with the method of this embodiment, the time overhead is significantly reduced from 0.92 ms to 0.02 ms, the total overhead of the central processing unit is reduced to 0.83 ms, and performance is significantly improved.
  • barrier-free mixed use of three-dimensional elements and two-dimensional elements is realized, which effectively improves the mixed rendering effect of two-dimensional and three-dimensional elements while effectively reducing time overhead, improving development efficiency, and leaving performance headroom for processing more complex three-dimensional elements.
  • the software modules may include: a first acquisition module 4551, configured to acquire at least one three-dimensional element to be rendered and at least one two-dimensional element to be rendered from the current frame data to be rendered in the virtual scene; a sampling module 4552, configured to sample the animation of each three-dimensional element to be rendered to obtain a grid sequence frame corresponding to the animation of each three-dimensional element to be rendered, where the animation of the three-dimensional element to be rendered includes the current frame and at least one historical frame, and each historical frame includes the three-dimensional element to be rendered;
  • the second acquisition module 4553 is configured to obtain the grid data corresponding to each three-dimensional element to be rendered from the grid sequence frame, where the coordinate systems of the grid data corresponding to different three-dimensional elements to be rendered are different;
  • a transformation module 4554, configured to transform the grid data corresponding to each three-dimensional element to be rendered to obtain the transformed grid data, and to create, through the transformed grid data, a converted two-dimensional element corresponding to each three-dimensional element to be rendered;
  • a rendering module 4555, configured to render the at least one two-dimensional element to be rendered in the current frame, and to render the converted two-dimensional element corresponding to each three-dimensional element to be rendered in the current frame.
  • the above-mentioned second acquisition module 4553 is also configured to perform the following processing for any three-dimensional element to be rendered: determine the renderer type of the renderer corresponding to the element type of the three-dimensional element to be rendered, and obtain, from the grid sequence frame corresponding to the renderer of that renderer type, the grid data corresponding to the three-dimensional element to be rendered.
  • the above-mentioned second acquisition module 4553 is also configured to, when the element type of the three-dimensional element to be rendered is a skinned three-dimensional element, perform the following processing on the skinned three-dimensional element: obtain, from the grid sequence frame corresponding to the skin renderer, the grid data corresponding to the skinned three-dimensional element; where the grid data corresponding to the skinned three-dimensional element includes translation data, rotation data, and scaling data, the coordinate system of the translation data and rotation data takes the position of the skinned three-dimensional element as the origin, and the coordinate system of the scaling data takes the center point of the canvas to be rendered as the origin.
  • the above-mentioned second acquisition module 4553 is also configured to, when the element type of the three-dimensional element to be rendered is a static three-dimensional element, perform the following processing on the static three-dimensional element: obtain, from the grid sequence frame corresponding to the grid renderer, the grid data corresponding to the static three-dimensional element; where the coordinate system of the grid data corresponding to the static three-dimensional element takes the position of the static three-dimensional element as the origin.
  • the above-mentioned second acquisition module 4553 is also configured to, when the element type of the three-dimensional element to be rendered is a particle three-dimensional element, perform the following processing on the particle three-dimensional element: obtain, from the grid sequence frame corresponding to the renderer running on the central processing unit, the first grid data corresponding to the particle three-dimensional element; obtain, from the corresponding grid sequence frame of the renderer running on the graphics processor, the second grid data corresponding to the particle three-dimensional element; and determine the first grid data and the second grid data as the grid data corresponding to the particle three-dimensional element; where the coordinate system of the grid data corresponding to the particle three-dimensional element takes the center point of the canvas to be rendered as its origin.
  • the obtained transformation grid data includes: the first transformation grid data based on the original coordinate system, and the second transformation grid data based on the canvas coordinate system, where the canvas coordinate system takes the center point of the canvas to be rendered as the origin and the original coordinate system takes the position of the converted two-dimensional element as the origin; the above-mentioned transformation module 4554 is also configured to perform the following processing on any three-dimensional element to be rendered: read the first transformation grid data from the transformed grid data obtained by transforming the three-dimensional element to be rendered; convert the first transformation grid data into the third transformation grid data based on the canvas coordinate system; and create, based on the third transformation grid data and the second transformation grid data, the converted two-dimensional element corresponding to the three-dimensional element to be rendered.
  • the above-mentioned rendering module 4555 is also configured to render the created converted two-dimensional element corresponding to the three-dimensional element to be rendered.
  • the above-mentioned transformation module 4554 is also configured to determine, based on the third transformation grid data and the second transformation grid data, the coordinates of each three-dimensional element to be rendered in the canvas coordinate system, and to create, based on the coordinates and the geometric characteristics of the three-dimensional element to be rendered, the converted two-dimensional element corresponding to each three-dimensional element to be rendered, where the geometric characteristics represent the geometric shape of the three-dimensional element to be rendered.
  • the first transformation grid data includes at least one of the following: transformation translation data, transformation rotation data, and static transformation grid data; the above transformation module 4554 is also configured to, when the three-dimensional element to be rendered is a skinned three-dimensional element, convert the transformation translation data based on the original coordinate system into transformation translation data based on the canvas coordinate system, and convert the transformation rotation data based on the original coordinate system into transformation rotation data based on the canvas coordinate system; and, when the three-dimensional element to be rendered is a static three-dimensional element, convert the static transformation grid data based on the original coordinate system into static transformation grid data based on the canvas coordinate system.
  • the obtained transformation grid data includes: the first transformation grid data based on the original coordinate system, and the second transformation grid data based on the canvas coordinate system, where the canvas coordinate system takes the center point of the canvas to be rendered as the origin and the original coordinate system takes the position of the converted two-dimensional element as the origin; the above-mentioned transformation module 4554 is also configured to create, based on the first transformation grid data and the second transformation grid data, the converted two-dimensional element corresponding to the three-dimensional element to be rendered.
  • the above-mentioned rendering module 4555 is also configured to perform the following processing for any converted two-dimensional element: read the first transformation grid data from the obtained transformation grid data; convert the first transformation grid data into the fourth transformation grid data based on the canvas coordinate system; create, based on the fourth transformation grid data and the second transformation grid data, a to-be-rendered converted two-dimensional element for rendering directly on the canvas to be rendered; and render the to-be-rendered converted two-dimensional element.
  • the above-mentioned virtual scene rendering device 455 also includes: a sorting module, configured to apply for a second memory space; generate, based on the at least one two-dimensional element to be rendered in the first memory space and the converted two-dimensional element corresponding to each three-dimensional element to be rendered, the rendering data corresponding to the two-dimensional elements to be rendered and to the converted two-dimensional elements respectively; sort, based on the rendering data, the two-dimensional elements to be rendered and the converted two-dimensional elements in the first memory space to obtain the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements; and store the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements in the second memory space, where the sorting process is used to determine the rendering order between elements.
  • the above-mentioned virtual scene rendering device 455 also includes: a sequence determination module configured to perform the following processing for any two-dimensional element to be rendered in the first memory space: based on the rendering data of the two-dimensional element to be rendered, Determine the hierarchical relationship between the two-dimensional element to be rendered and other elements in the first memory space, where the other elements are two-dimensional elements in the first memory space other than the two-dimensional element to be rendered; based on the hierarchical relationship, determine the to-be-rendered The rendering order between the two-dimensional element and other elements in the first memory space, where the hierarchical relationship is positively related to the rendering order.
  • the above-mentioned virtual scene rendering device 455 further includes: a disabling module configured to disable the three-dimensional rendering component used to render the three-dimensional element to be rendered.
  • the above-mentioned rendering module 4555 is also used to call the two-dimensional rendering component to sequentially render, according to the rendering order, the sorted two-dimensional elements to be rendered and the sorted converted two-dimensional elements.
  • the above-mentioned sampling module 4552 is also configured to perform the following processing on any three-dimensional element to be rendered: sample the animation of each three-dimensional element to be rendered according to the sampling interval to obtain multiple sample frames corresponding to the animation of each three-dimensional element to be rendered, where the number of sample frames is negatively correlated with the length of the sampling interval; and determine, among the multiple sample frames, the grid sequence frames corresponding to the animation of the three-dimensional element to be rendered, where the grid sequence frames are the sample frames, among the multiple sample frames, that include the three-dimensional element to be rendered.
  • the above-mentioned sampling module 4552 is also configured to determine the start playback time and the end playback time of the three-dimensional element to be rendered in the animation of the three-dimensional element to be rendered, and to determine, based on the start playback time and the end playback time, the grid sequence frame corresponding to the animation of the three-dimensional element to be rendered among the multiple sample frames.
  • the above-mentioned sampling module 4552 is also configured to, when the start playback time and the end playback time are the same time, determine the one sample frame at that same time among the multiple sample frames as the grid sequence frame corresponding to the animation of the three-dimensional element to be rendered; and, when the start playback time and the end playback time are different times, determine at least two sample frames between the start playback time and the end playback time as the grid sequence frames corresponding to the animation. a sampling sketch follows.
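  • a minimal sketch of the sampling logic described for module 4552, assuming times are expressed in milliseconds and that contains_element(t) is a hypothetical predicate reporting whether the element appears at sample time t:

```python
def sample_grid_sequence(animation_length_ms, interval_ms, contains_element):
    """Sample the animation every interval_ms; keep as grid sequence frames
    only the sample frames in which the element to be rendered appears.
    The number of sample frames falls as the interval grows (the negative
    correlation stated above)."""
    sample_times = range(0, animation_length_ms + 1, interval_ms)
    return [t for t in sample_times if contains_element(t)]
```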
  • Embodiments of the present application provide a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the virtual scene rendering method described above in the embodiment of the present application.
  • Embodiments of the present application provide a computer-readable storage medium storing executable instructions; when the executable instructions are executed by a processor, they cause the processor to execute the virtual scene rendering method provided by the embodiments of the present application, for example, the virtual scene rendering method shown in FIG. 3A.
  • the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it may also be any of various devices including one of, or any combination of, the above memories.
  • executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file holding other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple collaborative files (e.g., files that store one or more modules, subroutines, or portions of code).
  • executable instructions may be deployed to execute on one computing device, on multiple computing devices located at one location, or on multiple computing devices distributed across multiple locations and interconnected by a communications network.
  • the start playback time and the end playback time of the three-dimensional element to be rendered are determined, so as to facilitate the subsequent accurate determination of the grid sequence frame based on the start playback time and the end playback time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本申请提供了一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品;方法包括:从虚拟场景的待渲染的当前帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素;对每个待渲染三维元素的动画进行采样处理,得到每个待渲染三维元素的动画对应的网格序列帧;从网格序列帧中获取每个待渲染三维元素分别对应的网格数据;对每个待渲染三维元素分别对应的网格数据进行变换处理,得到变换网格数据,并通过变换网格数据,创建与每个待渲染三维元素对应的转换二维元素;渲染当前帧中的至少一个待渲染二维元素,并渲染当前帧中的每个待渲染三维元素对应的转换二维元素。

Description

一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品
相关申请的交叉引用
本申请基于申请号为202210239108.0,申请日为2022年03月11日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及计算机技术领域,尤其涉及一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品。
背景技术
随着游戏引擎技术的发展,游戏引擎为游戏设计者提供编写游戏所需的各种工具,其目的在于让游戏设计者能容易和快速地做出游戏程序。在进行游戏画面渲染的过程中,通常会对游戏画面中的二维元素和三维元素进行混合渲染。
在相关技术中,由于在游戏引擎中二维元素和三维元素分别处于不同的渲染体系,导致在混合渲染时,二维元素和三维元素无法进行有效适配,进而导致渲染效果不佳。
对于如何有效提高二维元素和三维元素的混合渲染效果,相关技术尚无有效的解决方案。
发明内容
本申请实施例提供一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够实现待渲染三维元素和待渲染二维元素渲染模式的统一,使待渲染三维元素的渲染效果得以保留,从而有效提高二维元素和三维元素的混合渲染效果。
本申请实施例的技术方案是这样实现的:
本申请实施例提供一种虚拟场景的渲染方法,由电子设备执行,包括:
从所述虚拟场景的待渲染的当前帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素;
对每个所述待渲染三维元素的动画进行采样处理,得到每个所述待渲染三维元素的动画对应的网格序列帧,其中,所述待渲染三维元素的动画包括所述当前帧以及至少一个历史帧,每个所述历史帧均包括所述待渲染三维元素;
从所述网格序列帧中获取每个所述待渲染三维元素分别对应的网格数据,其中,不同的所述待渲染三维元素对应的所述网格数据的坐标系不同;
对每个所述待渲染三维元素分别对应的网格数据进行变换处理,得到变换网格数据,并通过所述变换网格数据,创建与每个所述待渲染三维元素对应的转换二维元素;
渲染所述当前帧中的所述至少一个待渲染二维元素,并渲染所述当前帧中的每个所述待渲染三维元素对应的所述转换二维元素。
本申请实施例提供一种虚拟场景的渲染装置,包括:
第一获取模块,配置为从所述虚拟场景的待渲染的当前帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素;
采样模块，配置为对每个所述待渲染三维元素的动画进行采样处理，得到每个所述待渲染三维元素的动画对应的网格序列帧，其中，所述待渲染三维元素的动画包括所述当前帧以及至少一个历史帧，每个所述历史帧均包括所述待渲染三维元素；
第二获取模块,配置为从所述网格序列帧中获取每个所述待渲染三维元素分别对应的网格数据,其中,不同的所述待渲染三维元素对应的所述网格数据的坐标系不同;
变换模块,配置为对每个所述待渲染三维元素分别对应的网格数据进行变换处理,得到变换网格数据,并通过所述变换网格数据,创建与每个所述待渲染三维元素对应的转换二维元素;
渲染模块,配置为渲染所述当前帧中的所述至少一个待渲染二维元素,并渲染所述当前帧中的每个所述待渲染三维元素对应的所述转换二维元素。
本申请实施例提供一种电子设备,包括:
存储器,用于存储可执行指令;
处理器,用于执行所述存储器中存储的可执行指令时,实现本申请实施例提供的虚拟场景的渲染方法。
本申请实施例提供一种计算机可读存储介质,存储有可执行指令,用于引起处理器执行时,实现本申请实施例提供的虚拟场景的渲染方法。
本申请实施例提供了一种计算机程序产品或计算机程序,所述计算机程序产品或计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取所述计算机指令,处理器执行所述计算机指令,使得所述计算机设备执行本申请实施例上述的虚拟场景的渲染方法。
本申请实施例具有以下有益效果:
通过对待渲染三维元素对应的网格数据进行变换处理,进而根据得到的变换网格数据,创建与待渲染三维元素对应的转换二维元素,从而实现了待渲染三维元素的转换,进而渲染待渲染二维元素和转换二维元素。如此,对待渲染三维元素进行转换得到转换二维元素,并渲染转换二维元素和待渲染二维元素实现了待渲染二维元素和待渲染三维元素的有效适配,实现待渲染三维元素和待渲染二维元素渲染模式的统一,使待渲染三维元素的渲染效果得以保留,从而有效提高二维元素和三维元素的混合渲染效果。
附图说明
图1是本申请实施例提供的虚拟场景的渲染系统100的架构示意图;
图2是本申请实施例提供的终端设备400的结构示意图;
图3A至图3E是本申请实施例提供的虚拟场景的渲染方法的流程示意图;
图4A是本申请实施例提供的虚拟场景的渲染方法的原理示意图;
图4B是本申请实施例提供的虚拟场景的渲染方法的效果示意图;
图4C是本申请实施例提供的虚拟场景的渲染方法的流程示意图;
图4D至图4G是本申请实施例提供的虚拟场景的渲染方法的原理示意图;
图5A是本申请实施例提供的虚拟场景的渲染方法的效果示意图;
图5B是相关技术提供的效果示意图;
图5C是本申请实施例提供的虚拟场景的渲染方法的效果示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中，所涉及的术语“第一\第二\第三”仅仅是区别类似的对象，不代表针对对象的特定排序，可以理解地，“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序，以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)游戏:又称电子游戏(Video Games),是指所有依托于电子设备平台而运行的交互游戏。根据运行媒介的不同分为五类:主机游戏(狭义的,此处专指家用机游戏)、掌机游戏、街机游戏、电脑游戏及手机游戏。
2)游戏引擎:是指一些已编写好的可编辑电脑游戏系统或者一些交互式实时图像应用程序的核心组件。这些系统为游戏设计者提供各种编写游戏所需的各种工具,其目的在于让游戏设计者能容易和快速地做出游戏程式而不用从头开始。大部分都支持多种操作平台,如Linux、Mac OS X、微软Windows。游戏引擎包含以下系统:渲染引擎(即“渲染器”,含二维图像引擎和三维图像引擎)、物理引擎、碰撞检测系统、音效、脚本引擎、电脑动画、人工智能、网络引擎以及场景管理。
3)元素:是所有组件(Component)的容器,元素包括二维元素和三维元素,在游戏中的所有游戏对象本质上都是元素,游戏对象自身不会向游戏添加任何特性,而是容纳实现实际功能的组件的容器。
4)游戏引擎编辑器:包括场景编辑器、粒子特效编辑器、模型浏览器、动画编辑器和材质编辑器等。其中,场景编辑器,用于负责摆放模型物体、光源、摄像机等;粒子特效编辑器,用于制作各种游戏特效;动画编辑器,用于编辑动画功能,可以触发游戏逻辑中的某些事件;材质编辑器,用于编辑模型效果。
5)人机交互界面(HMI,Human Machine Interaction):又称用户界面或使用者界面,是人与计算机之间传递、交换信息的媒介和对话接口,是计算机系统的重要组成部分。是系统和用户之间进行交互和信息交换的媒介,它实现信息的内部形式与人类可以接受形式之间的转换。凡参与人机信息交流的领域都存在着人机界面。
6)三维元素:在渲染引擎中处于三维渲染体系的元素,三维元素包括粒子三维元素、静态三维元素和蒙皮三维元素。
7)二维元素:在渲染引擎中处于二维渲染体系的元素,二维元素可以是面向对象程序设计平台上的各类控件,每一个二维元素的基类为图表(Graphic)。
8)蒙皮三维元素(Skinned Mesh)：用于制作蒙皮动画的三维元素，即用于给几何体上的顶点添加动画特效，蒙皮三维元素是具有骨架(Skeleton)和骨骼(Bones)的网格。
9)粒子三维元素(Particle System):在渲染引擎中用于制作特效的三维元素,通过内部物理系统模拟粒子的运动变化。
10)遮罩组件(Mask)：游戏引擎中的组件，可以对二维元素进行裁剪显示，遮罩组件用于规定子节点可渲染的范围，带有遮罩组件的节点会使用该节点的约束框创建一个渲染遮罩，该节点的所有子节点都会依据这个遮罩进行裁剪，遮罩范围之外的二维元素不会被渲染。
11)分组组件(Canvas Group):是游戏引擎中的一种组件,用于对二维元素进行分组,统一控制。
12)适配组件：是游戏引擎中的一种组件，用于对二维元素进行适配和排版。
13)元素渲染组件(Canvas Renderer):是游戏引擎中负责渲染二维元素的渲染器组件。
14)物体数据(Transform):游戏引擎中表示一个元素的位置、旋转和缩放的数据。
15)原始坐标系(Local Space):以元素自身的轴心为原点的坐标系。
16)画布坐标系(World Space):以待渲染画布的中心为原点的坐标系。
在本申请实施例的实施过程中,申请人发现相关技术存在以下问题:
在游戏引擎的渲染系统中,经常会遇到混合渲染三维元素和二维元素的场景,由于在游戏引擎中,三维元素和二维元素处于不同的渲染体系,在相关技术中,将二维元素和三维元素混合渲染会出现诸多问题。例如,三维元素和二维元素无法有效的进行层级控制,无法有效的控制二维元素和三维元素之间的渲染顺序,无法支撑复杂的二维元素和三维元素混合使用的场景,三维元素和二维元素的原生功能组件无法兼容使用,导致二维元素和三维元素无法兼容适配和排版等功能。
针对上述相关技术存在的问题,本申请实施例通过将三维元素转换为转换二维元素,然后再渲染转换二维元素和待渲染二维元素,从而实现了二维元素和三维元素的混合渲染,同时保证了三维元素的渲染效果不受影响,进而能够有效提高二维元素和三维元素的混合渲染效果。同时可以有效减少中央处理器(CPU,Central Processing Unit)的处理耗时,从而提高处理效率。下面进行详细说明。
参见图5A,图5A是本申请实施例提供的虚拟场景的渲染方法的效果示意图。效果1是通过本申请实施例提供的虚拟场景的渲染方法对二维元素和三维元素进行混合渲染的效果,效果2是相关技术对二维元素和三维元素进行混合渲染的效果,效果3是真实效果。效果1对于效果3的还原程度较好,效果2对于效果3的还原程度不高,即,通过本申请实施例提供的虚拟场景的渲染方法能够有效提高二维元素和三维元素的混合渲染效果。
参见图5B,图5B是相关技术提供的效果示意图。在相关技术中,中央处理器的处理耗时为1.1ms,渲染批次为7。参见图5C,图5C是本申请实施例提供的虚拟场景的渲染方法的效果示意图。在本申请实施例中,中央处理器的处理耗时为0.6ms,渲染批次为7。由此可知,通过本申请实施例提供的虚拟场景的渲染方法可以有效减少中央处理器的处理耗时,从而提高处理效率。
本申请实施例提供一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够实现待渲染三维元素和待渲染二维元素渲染模式的统一,使待渲染三维元素的渲染效果得以保留,从而有效提高二维元素和三维元素的混合渲染效果,下面说明本申请实施例提供的电子设备的示例性应用,本申请实施例提供的电子设备可以实施为笔记本电脑,平板电脑,台式计算机,机顶盒,移动设备(例如,移动电话,便携式音乐播放器,个人数字助理,专用消息设备,便携式游戏设备)等各种类型的用户终端,也可以实施为服务器。
参见图1,图1是本申请实施例提供的虚拟场景的渲染系统100的架构示意图,为实现虚拟场景的渲染的应用场景(例如,对游戏引擎中的二维元素和三维元素进行混合渲染),终端设备400通过网络300连接服务器200,网络300可以是广域网或者局域网,又或者是二者的组合。
终端设备400用于供用户使用客户端410,在图形界面410-1显示。终端设备400和服务器200通过有线或者无线网络相互连接。
在一些实施例中,服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(CDN,Content Delivery Network)、以及大数据和人工智能平台等基础云计算服务的云服务器。终端设备400可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能手表、智能语音交互设备、智能家电、车载终端等,但并不局限于此。终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请实施例中不做限制。
在一些实施例中,终端设备400的客户端410获取待渲染三维元素和待渲染二维元素,并将待渲染三维元素通过网络300发送到服务器200,服务器200基于待渲染三维元素,确定与待渲染三维元素对应的转换二维元素,并将转换二维元素发送到终端设备400,终端设备400渲染转换二维元素和待渲染二维元素,并在图形界面410-1中进行显示。
在另一些实施例中,终端设备400的客户端410获取待渲染三维元素和待渲染二维元素,并基于待渲染三维元素,确定与待渲染三维元素对应的转换二维元素,终端设备400渲染转换二维元素和待渲染二维元素,并在图形界面410-1中进行显示。
在一些实施例中,参见图2,图2是本申请实施例提供的终端设备400的结构示意图,图2所示的终端设备400包括:至少一个处理器410、存储器450、至少一个网络接口420和用户接口430。终端设备400中的各个组件通过总线系统440耦合在一起。可理解,总线系统440用于实现这些组件之间的连接通信。总线系统440除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图2中将各种总线都标为总线系统440。
处理器410可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口430包括使得能够呈现媒体内容的一个或多个输出装置431,包括一个或多个扬声器和/或一个或多个视觉显示屏。用户接口430还包括一个或多个输入装置432,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器450可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器450可选地包括在物理位置上远离处理器410的一个或多个存储设备。
存储器450包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器450旨在包括任意适合类型的存储器。
在一些实施例中,存储器450能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作系统451,包括用于处理各种基本系统服务和执行硬件相关任务的系统程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务;
网络通信模块452,用于经由一个或多个(有线或无线)网络接口420到达其他计算机设备,示例性的网络接口420包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等;
呈现模块453,用于经由一个或多个与用户接口430相关联的输出装置431(例如,显示屏、扬声器等)使得能够呈现信息(例如,用于操作外围设备和显示内容和信息的用户接口);
输入处理模块454,用于对一个或多个来自一个或多个输入装置432之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的虚拟场景的渲染装置可以采用软件方式实现,图2示出了存储在存储器450中的虚拟场景的渲染装置455,其可以是程序和插件等形式的软件,包括以下软件模块:第一获取模块4551、采样模块4552、第二获取模块4553、变换模块4554和渲染模块4555,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分。将在下文中说明各个模块的功能。
下面,将结合本申请实施例提供的终端设备的示例性应用和实施,说明本申请实施例提供的虚拟场景的渲染方法。
参见图3A,图3A是本申请实施例提供的虚拟场景的渲染方法的流程示意图,将结合图3A示出的步骤101至步骤105进行说明,下述步骤101至步骤105的执行主体可以是前述的服务器或终端设备。
在步骤101中，从虚拟场景的待渲染的当前帧数据中，获取至少一个待渲染三维元素和至少一个待渲染二维元素。
作为示例,参见图4A,图4A是本申请实施例提供的虚拟场景的渲染方法的原理示意图。当虚拟场景的待渲染的当前帧为采样帧53时,可以从采样帧53的帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素,具体的,待渲染三维元素可以为三维元素11,待渲染二维元素可以为二维元素12和二维元素13。
在步骤102中,对每个待渲染三维元素的动画进行采样处理,得到每个待渲染三维元素的动画对应的网格序列帧。
这里,待渲染三维元素的动画包括当前帧以及至少一个历史帧,每个历史帧均包括待渲染三维元素(例如三维游戏角色),网格序列帧是指针对动画进行采样得到的多个采样帧中包括待渲染三维元素的采样帧。
作为示例,参见图4A,待渲染三维元素的动画包括当前帧53以及历史帧51和历史帧52。以三维元素11为例,对三维元素11的动画进行采样处理,可以得到三维元素11的动画对应的多个网格序列帧,例如包括历史帧51、历史帧52和当前帧53。
在一些实施例中,参见图3B,图3B是本申请实施例提供的虚拟场景的渲染方法的流程示意图。图3A所示出的步骤102可以通过针对任意一个待渲染三维元素执行图3B示出的步骤1021至步骤1022实现,下面分别进行说明。
在步骤1021中,按照采样间隔,对每个待渲染三维元素的动画进行采样处理,得到每个待渲染三维元素的动画对应的多个采样帧。
这里,采样帧的数量可以与采样间隔的时长负相关,即采样间隔的时长越长,则采样帧的数量就越少。
在一些实施例中,采样间隔是任意两个相邻的采样点之间的时间间隔,采样间隔的时长越长,所得到的采样帧的数量越少,采样间隔的时长越短,所得到的采样帧的数量越多。
作为示例,参见图4A,采样间隔可以为1ms,按照1ms的采样间隔,对三维元素11的动画进行采样处理,得到三维元素11的动画对应的采样帧51、采样帧52和采样帧53。
在步骤1022中,在多个采样帧中,确定待渲染三维元素的动画对应的网格序列帧。
这里,网格序列帧是多个采样帧中包括待渲染三维元素的采样帧。
作为示例,参见图4A,在采样帧51、采样帧52和采样帧53中均包括三维元素11,那么,采样帧51、采样帧52和采样帧53均是网格序列帧。当多个采样帧中,存在采样帧不包括待渲染三维元素(例如三维游戏角色)时,则对应的采样帧(即不包括待渲染三维元素的采样帧)不是网格序列帧,例如以多个采样帧为5个采样帧为例,假设分别为采样帧1、采样帧2、采样帧3、采样帧4和采样帧5,其中,采样帧2不包括待渲染三维元素,则采样帧2不是网格序列帧,也就是说,仅将采样帧1、采样帧3、采样帧4和采样帧5确定为网格序列帧。
在一些实施例中,上述步骤1022可以通过以下方式实现:在待渲染三维元素的动画中,确定待渲染三维元素的起始播放时刻和结束播放时刻;基于起始播放时刻和结束播放时刻,在多个采样帧中确定待渲染三维元素的动画对应的网格序列帧。
作为示例,在待渲染三维元素的动画中,确定待渲染三维元素的起始播放时刻和结束播放时刻,例如,包括待渲染三维元素的动画的播放总时长为10分钟,从动画开始播放起始,在第2分钟时,在动画中出现待渲染三维元素,那么待渲染三维元素的起始播放时刻为第2分钟,在第9分钟第10秒时,待渲染三维元素从动画中消失,那么,待渲染三维元素的结束播放时刻为第9分钟第10秒。
如此,通过确定待渲染三维元素在动画中显示和消失的时刻,确定待渲染三维元素的起始播放时刻和结束播放时刻,从而便于后续根据起始播放时刻和结束播放时刻准确的确定出网格序列帧。
在一些实施例中，上述基于起始播放时刻和结束播放时刻，在多个采样帧中，确定包括待渲染三维元素的动画对应的网格序列帧，可以通过以下方式实现：当起始播放时刻与结束播放时刻为相同时刻时，将多个采样帧中处于相同时刻的一个采样帧，确定为动画对应的网格序列帧；当起始播放时刻与结束播放时刻为不同时刻时，将多个采样帧中处于起始播放时刻和结束播放时刻之间的至少两个采样帧，确定为动画对应的网格序列帧。
作为示例,当起始播放时刻和结束播放时刻相同时,在动画中出现待渲染三维元素后,待渲染三维元素从动画中立刻消失,即,待渲染三维元素在待渲染三维元素的动画中闪现,将多个采样帧中处于相同时刻的一个采样帧,确定为动画对应的网格序列帧。
作为示例,当起始播放时刻与结束播放时刻为不同时刻时,例如,待渲染三维元素的起始播放时刻为第2分钟,待渲染三维元素的结束播放时刻为第9分钟第10秒,将多个采样帧中处于第2分钟和第9分钟第10秒之间的至少两个采样帧,确定为动画对应的网格序列帧。
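作为示意，下面用 Python 勾勒依据起始/结束播放时刻筛选网格序列帧的过程（时间单位取毫秒，函数与参数名均为示例性假设，并非原文给出）：

```python
def select_grid_sequence_frames(sample_times_ms, start_ms, end_ms):
    """依据起始播放时刻与结束播放时刻，在多个采样帧中确定网格序列帧（示意实现）。"""
    if start_ms == end_ms:
        # 起止时刻相同：元素在动画中“闪现”，取处于该相同时刻的一个采样帧
        flash_frames = [t for t in sample_times_ms if t == start_ms]
        return flash_frames[:1]
    # 起止时刻不同：取处于起始播放时刻和结束播放时刻之间的至少两个采样帧
    return [t for t in sample_times_ms if start_ms <= t <= end_ms]
```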
在步骤103中,从网格序列帧中获取每个待渲染三维元素分别对应的网格数据;
这里,不同的待渲染三维元素对应的网格数据的坐标系不同。
在一些实施例中,待渲染三维元素包括蒙皮三维元素、粒子三维元素和静态三维元素,其中,蒙皮三维元素、粒子三维元素和静态三维元素分别对应的网格数据的坐标系不同。
在一些实施例中,参见图3B,图3B是本申请实施例提供的虚拟场景的渲染方法的流程示意图。图3A所示出的步骤103可以通过针对任意一个待渲染三维元素执行图3B示出的步骤1031至步骤1032实现,下面分别进行说明。
在步骤1031中,确定与待渲染三维元素的元素类型对应的渲染器的渲染器类型。
在一些实施例中,当待渲染三维元素的元素类型为蒙皮三维元素时,对应的渲染器的渲染器类型为蒙皮渲染器,其中,蒙皮渲染器用于渲染蒙皮三维元素。当待渲染三维元素的元素类型为粒子三维元素时,对应的渲染器的渲染器类型为粒子渲染器,其中,粒子渲染器包括中央处理器运行的渲染器和图形处理器运行的渲染器,粒子渲染器用于渲染粒子三维元素。当待渲染三维元素的元素类型为静态三维元素时,对应的渲染器的渲染器类型为网格渲染器,其中,网格渲染器用于渲染静态三维元素。
在步骤1032中,从渲染器类型的渲染器对应的网格序列帧中,获取待渲染三维元素对应的网格数据。
在一些实施例中,当待渲染三维元素的元素类型为蒙皮三维元素时,上述步骤1032可以通过针对蒙皮三维元素执行以下处理实现:从蒙皮渲染器对应的网格序列帧中,获取蒙皮三维元素对应的网格数据;其中,蒙皮三维元素对应的网格数据包括平移数据、旋转数据和缩放数据,平移数据和旋转数据的坐标系以蒙皮三维元素所在位置为原点,缩放数据的坐标系以待渲染画布的中心点为原点。
在一些实施例中,平移数据表征蒙皮三维元素的平移特性,例如,平移特性可以是蒙皮三维元素由一个时刻的A位置,平移至另一时刻的B位置。旋转数据表征蒙皮三维元素的旋转特性,例如,旋转特性可以是蒙皮三维元素由一个时刻的A姿态旋转至另一时刻的B姿态。缩放数据表征蒙皮三维元素的缩放特性,例如,缩放特性可以是蒙皮三维元素由一个时刻的A尺寸缩放至另一时刻的B尺寸。
如此,通过从蒙皮渲染器对应的网格序列帧中,获取蒙皮三维元素对应的网格数据,从而有效保证了所获取的蒙皮三维元素对应的网格数据的准确性。
在一些实施例中,当待渲染三维元素的元素类型为静态三维元素时,上述步骤1032可以通过针对静态三维元素执行以下处理实现:从网格渲染器对应的网格序列帧中,获取静态三维元素对应的网格数据;其中,静态三维元素对应的网格数据的坐标系以静态三维元素所在位置为原点。
作为示例,静态三维元素可以是三维元素中除蒙皮三维元素和粒子三维元素以外的元素。由于静态三维元素对应的渲染器的渲染器类型为网格渲染器,因此,从网格渲染器对应的网格序列帧中,可以准确获取到静态三维元素对应的网格数据。
如此，通过从静态渲染器对应的网格序列帧中，获取静态三维元素对应的网格数据，从而有效保证了所获取的静态三维元素对应的网格数据的准确性。
在一些实施例中,当待渲染三维元素的元素类型为粒子三维元素时,上述步骤1032可以通过针对粒子三维元素执行以下处理实现:从中央处理器运行的渲染器对应的网格序列帧中,获取粒子三维元素对应的第一网格数据;从图形处理器运行的渲染器的对应网格序列帧中,获取粒子三维元素对应的第二网格数据;将第一网格数据和第二网格数据确定为粒子三维元素对应的网格数据;其中,粒子三维元素对应的网格数据的坐标系以待渲染画布的中心点为原点。
作为示例,由于粒子三维元素对应的渲染器的渲染器类型为粒子渲染器,而粒子渲染器包括中央处理器运行的渲染器和图形处理器运行的渲染器,那么,粒子三维元素对应的网格数据可以从中央处理器运行的渲染器和图形处理器运行的渲染器中分别获取。
在步骤104中,对每个待渲染三维元素分别对应的网格数据进行变换处理,得到变换网格数据,并通过变换网格数据,创建与每个待渲染三维元素对应的转换二维元素。
在一些实施例中,上述变换处理可以是矩阵变换处理,矩阵变换处理用于对网格数据的矩阵形式进行维度变换,从而将三维的网格数据降低为二维的网格数据。
在一些实施例中,得到的变换网格数据包括:基于原始坐标系的第一变换网格数据,基于画布坐标系的第二变换网格数据,其中,画布坐标系以待渲染画布的中心点为原点,原始坐标系以转换二维元素所在位置为原点。
在步骤105中,渲染当前帧中的至少一个待渲染二维元素,并渲染当前帧中的每个待渲染三维元素对应的转换二维元素。
在一些实施例中,由于每个待渲染三维元素分别对应的网格数据的坐标系是不统一的,那么,对每个待渲染三维元素分别对应的网格数据进行变换处理后得到的变换网格数据的坐标系也是不统一的,那么,可以在创建转换二维元素时进行坐标系的统一,也可以在渲染转换二维元素时进行坐标系的统一。
下面,分别对统一坐标系的两种方式进行说明。
在一些实施例中,对在创建转换二维元素时进行坐标系的统一的情形进行说明,具体的,参见图3C,图3C是本申请实施例提供的虚拟场景的渲染方法的流程示意图。图3A所示出的步骤104可以通过针对任意一个待渲染二维元素执行图3C示出的步骤1041至步骤1043实现,下面分别进行说明。
在步骤1041中,从针对待渲染二维元素进行变换处理得到的变换网格数据中,读取第一变换网格数据。
在一些实施例中,由于得到的变换网格数据中包括基于原始坐标系的第一变换网格数据和基于画布坐标系的第二变换网格数据。从而可以从得到的变换网格数据中读取到第一变换网格数据和第二变换网格数据。
在步骤1042中,将第一变换网格数据转换为基于画布坐标系的第三变换网格数据。
在一些实施例中,上述第一变换网格数据包括以下至少之一:变换平移数据、变换旋转数据、静态变换网格数据;上述步骤1042中将第一变换网格数据转换为基于画布坐标系的第三变换网格数据,可以通过以下方式实现:当待渲染三维元素为蒙皮三维元素时,将基于原始坐标系的变换平移数据,转换为基于画布坐标系的变换平移数据,并将基于原始坐标系的变换旋转数据,转换为基于画布坐标系的变换旋转数据;当待渲染三维元素为静态三维元素时,将基于原始坐标系的静态变换网格数据,转换为基于画布坐标系的静态变换网格数据。
作为示例,当待渲染三维元素为粒子三维元素时,由于粒子三维元素对应的网格数据的坐标系是以待渲染画布的中心点为原点的,那么就不需要进行坐标系转换,即可与上述静态三维元素和蒙皮三维元素分别对应的网格数据的转换后坐标系进行统一。
如此,通过将得到的变换网格数据的坐标系进行统一,从而便于在同一坐标系下对转换二维元素进行渲染,从而有效避免了坐标系不统一所导致的渲染效果紊乱,有效提高了渲染效果。
在步骤1043中,基于第三变换网格数据以及第二变换网格数据,创建与待渲染三维元素对应的转换二维元素。
作为示例,由于第三变换网格数据和第二变换网格数据均是基于画布坐标系,从而使得所创建的转换二维元素的坐标系均是基于画布坐标系的,从而实现了坐标系的统一。
在一些实施例中,上述步骤1043中创建转换二维元素的方式可以通过以下方式实现:基于第三变换网格数据和第二变换网格数据,确定每个待渲染三维元素在画布坐标系下的坐标;基于坐标和待渲染三维元素的几何特征,创建与每个待渲染三维元素对应的转换二维元素,其中,几何特征表征待渲染三维元素的几何形状。
作为示例,由于第三变换网格数据和第二变换网格数据均是基于画布坐标系的,从而可以基于第三变换网格数据和第二变换网格数据,确定每个待渲染三维元素在画布坐标系下的坐标,确定了每个待渲染三维元素在画布坐标系下的坐标,即确定了每个待渲染三维元素在画布坐标系下的具体位置。进而基于每个待渲染三维元素在画布坐标系下的具体位置和待渲染三维元素的几何特征,创建与每个待渲染三维元素对应的转换二维元素,其中,转换二维元素可以是待渲染三维元素在画布坐标系上的投影。
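作为示意，下面用 Python 勾勒基于画布坐标与几何特征创建转换二维元素（即待渲染三维元素在画布上的投影）的过程，返回结构与字段名均为示例性假设，并非原文给出：

```python
import numpy as np

def create_converted_2d_element(canvas_coords, triangles):
    """canvas_coords：待渲染三维元素在画布坐标系下的二维顶点 (N x 2)；
    triangles：几何特征（此处假设以三角形索引表示几何形状）。
    返回可交给二维渲染组件的转换二维元素（示意结构）。"""
    vertices = np.asarray(canvas_coords, dtype=float)
    return {
        "vertices": vertices,          # 投影后的画布坐标
        "triangles": list(triangles),  # 保留待渲染三维元素的几何形状
        "base_class": "Graphic",       # 与其他二维元素同为Graphic基类(见术语7)
    }
```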
对应的,图3A所示出的步骤105可以通过执行图3C示出的步骤1051实现,下面进行说明。
在步骤1051中,渲染所创建的与待渲染三维元素对应的转换二维元素。
作为示例,由于在上述步骤1041至步骤1043创建转换二维元素的过程中已经完成了坐标系的统一,那么在上述步骤1051对转换二维元素进行渲染的过程中,则无需进行坐标系的统一,直接对所创建与待渲染三维元素对应的转换二维元素进行渲染即可。
在另一些实施例中,对在渲染转换二维元素时进行坐标系的统一的情形进行说明,具体的,参见图3D,图3D是本申请实施例提供的虚拟场景的渲染方法的流程示意图。图3A所示出的步骤104可以通过执行图3D示出的步骤1044实现,下面分别进行说明。
在步骤1044中,基于第一变换网格数据以及第二变换网格数据,创建与待渲染三维元素对应的转换二维元素。
作为示例，由于在渲染转换二维元素时会进行坐标系的统一，从而在创建转换二维元素时，无需进行坐标系的统一，直接基于第一变换网格数据以及第二变换网格数据，创建与待渲染三维元素对应的转换二维元素即可，此时，由于第一变换网格数据是基于原始坐标系的，而第二变换网格数据是基于画布坐标系的，因而，所创建的与待渲染三维元素对应的转换二维元素的坐标系并不统一。
对应的,图3A所示出的步骤105可以通过针对任意一个转换二维元素执行图3D示出的步骤1052至步骤1055实现,下面分别进行说明。
在步骤1052中,从得到的变换网格数据中,读取第一变换网格数据。
在一些实施例中,由于得到的变换网格数据中包括基于原始坐标系的第一变换网格数据和基于画布坐标系的第二变换网格数据。从而可以从得到的变换网格数据中读取到第一变换网格数据和第二变换网格数据。
在步骤1053中,将第一变换网格数据,转换为基于画布坐标系的第四变换网格数据。
作为示例,由于第一变换网格数据是基于原始坐标系的,通过将第一变换网格数据的坐标系进行转换,从而得到基于画布坐标系的第四变换网格数据。
在步骤1054中,基于第四变换网格数据以及第二变换网格数据,创建用于直接在待渲染画布渲染的待渲染转换二维元素。
作为示例,由于第四变换网格数据以及第二变换网格数据均是基于画布坐标系的,从而使得所创建的转换二维元素的坐标系均是基于画布坐标系的,从而实现了坐标系的统一。
在步骤1055中,渲染待渲染转换二维元素。
如此，通过将得到的变换网格数据的坐标系进行统一，从而便于在同一坐标系下对待渲染转换二维元素进行渲染，从而有效避免了坐标系不统一所导致的渲染效果紊乱，有效提高了渲染效果。
在一些实施例中,参见图3E,图3E是本申请实施例提供的虚拟场景的渲染方法的流程示意图。在执行图3A示出的步骤105之前,还可以通过执行图3E示出的步骤106至步骤109实现待渲染二维元素和转换二维元素的排序,下面分别进行说明。
在步骤106中,申请第二内存空间。
在一些实施例中,至少一个待渲染二维元素以及每个待渲染三维元素对应的转换二维元素存储于第一内存空间中。
如此,通过在排序之前申请第二内存空间,从而便于后续将排序后待渲染二维元素以及排序后转换二维元素存储至第二内存空间中。
在步骤107中,基于第一内存空间中的至少一个待渲染二维元素以及每个待渲染三维元素对应的转换二维元素,生成与待渲染二维元素以及转换二维元素分别对应的渲染数据。
在一些实施例中,渲染数据表征元素所在的渲染层级,如此,通过生成与待渲染二维元素以及转换二维元素分别对应的渲染数据,从而便于基于渲染数据,对第一内存空间中的待渲染二维元素以及转换二维元素进行排序处理。
在步骤108中,基于渲染数据,对第一内存空间中的待渲染二维元素以及转换二维元素进行排序处理,得到排序后待渲染二维元素以及排序后转换二维元素。
在一些实施例中,由于渲染数据表征元素所在的渲染层级,那么,可以根据第一内存空间中的待渲染二维元素以及转换二维元素分别所在的渲染层级,对第一内存空间中的待渲染二维元素以及转换二维元素进行排序处理,得到排序后待渲染二维元素以及排序后转换二维元素。
在步骤109中,将排序后待渲染二维元素以及排序后转换二维元素,存储至第二内存空间中。
这里,排序处理可以用于确定元素之间的渲染顺序。
在一些实施例中,还可以针对第一内存空间中的任意一个待渲染二维元素执行以下处理确定渲染顺序:基于待渲染二维元素的渲染数据,确定待渲染二维元素与第一内存空间中的其他元素之间的层级关系,其中,其他元素是第一内存空间中除待渲染二维元素以外的二维元素;基于层级关系,确定待渲染二维元素与第一内存空间中的其他元素之间的渲染顺序,其中,层级关系与渲染顺序正相关。
作为示例,基于待渲染二维元素的渲染数据,确定待渲染二维元素与第一内存空间中的其他元素之间的层级关系,例如,当待渲染二维元素所在层级为最底层的层级时,待渲染二维元素与第一内存空间中的其他元素之间的层级关系为第一内存空间中的其他元素的层级均大于待渲染二维元素的层级;基于层级关系,确定待渲染二维元素与第一内存空间中的其他元素之间的渲染顺序,当待渲染二维元素所在层级为最底层的层级时,则先渲染第一内存空间中的其他元素,最后渲染待渲染二维元素。
在一些实施例中,在执行上述的步骤105之前,还可以将用于渲染待渲染三维元素的三维渲染组件禁用。
如此,通过将用于渲染待渲染三维元素的三维渲染组件禁用,从而避免了三维渲染组件和二维渲染组件的混合使用,所导致的渲染混乱。
在一些实施例中,上述步骤105可以通过以下方式实现:调用二维渲染组件,按照渲染顺序依次渲染排序后待渲染二维元素以及排序后转换二维元素。
如此,通过将用于渲染待渲染三维元素的三维渲染组件禁用,单独调用二维渲染组件,按照渲染顺序依次渲染排序后待渲染二维元素以及排序后转换二维元素,从而有效保证了二维元素和三维元素的混合渲染效果。
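作为示意，下面用 Python 勾勒先禁用三维渲染组件、仅保留逻辑更新、再按渲染顺序调用二维渲染组件的流程，组件接口与属性名均为示例性假设：

```python
def render_frame(elements_3d, sorted_2d_elements, renderer_3d, renderer_2d):
    """禁用三维渲染组件（对应步骤503），保留逻辑更新以刷新网格数据，
    随后按排序结果依次调用二维渲染组件进行渲染（示意实现）。"""
    renderer_3d.enabled = False           # 禁用三维渲染组件，避免混合使用
    for element in elements_3d:
        element.update_grid_data()        # 仅保留逻辑更新部分，更新网格数据
    for element in sorted_2d_elements:    # 含待渲染二维元素与转换二维元素
        renderer_2d.draw(element)         # 按渲染顺序依次渲染
```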
下面,将说明本申请实施例在一个实际的游戏画面渲染的应用场景中的示例性应用。
参见图4A，虚拟场景的动画中可以包括多个采样帧，如图4A所示出的采样帧51、采样帧52和采样帧53中的二维元素12、二维元素13和三维元素11动态变化，即，二维元素12、二维元素13和三维元素11在采样帧51、采样帧52和采样帧53中的位置各不相同。具体的，在采样帧52中，二维元素13已经正确渲染在二维元素12的下方，从而实现了二维元素和二维元素之间的层级穿插。在采样帧53中，通过调用游戏引擎的遮罩组件，可以看到，二维元素13的裁剪效果正确，说明转换后的二维元素功能正常。通过本申请实施例提供的虚拟场景的渲染方法，可以有效提高二维元素和三维元素的混合渲染效果。
参见图4B,图4B是本申请实施例提供的虚拟场景的渲染方法的效果示意图。在一个实际的游戏画面渲染的应用场景中,通过本申请实施例提供的虚拟场景的渲染方法,实现了不同类型(类型A1、类型A2和类型A3)的游戏对象中的二维元素和三维元素的混合渲染,有效提高二维元素和三维元素的混合渲染效果。
在一些实施例中,参见图4C,图4C是本申请实施例提供的虚拟场景的渲染方法的流程示意图。将结合图4C示出的步骤501至步骤507进行说明。
在步骤501中,获取蒙皮三维元素、静态三维元素以及粒子三维元素。
作为示例,从虚拟场景的待渲染的当前帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素,其中,待渲染三维元素包括蒙皮三维元素、静态三维元素以及粒子三维元素。
在步骤502中,更新网格数据。
作为示例,由于三维元素是随着虚拟场景的动画的变化而变化的,随着虚拟场景的动画的更新,更新蒙皮三维元素、静态三维元素以及粒子三维元素对应的网格数据。
在步骤503中,禁用渲染模块。
作为示例,将用于渲染待渲染三维元素的三维渲染组件禁用,只保留逻辑更新部分,以更新网格数据。
在步骤504中,获取网格数据。
作为示例,获取蒙皮三维元素、静态三维元素以及粒子三维元素分别对应的网格数据。
在一些实施例中,参见图4D,图4D是本申请实施例提供的虚拟场景的渲染方法的原理示意图,下面,结合图4D对获取网格数据的过程进行说明。当待渲染三维元素的元素类型为蒙皮三维元素时,从蒙皮渲染器对应的网格序列帧中,获取蒙皮三维元素对应的网格数据;其中,蒙皮三维元素对应的网格数据的坐标系可以为本地坐标系和画布坐标系。蒙皮三维元素对应的网格数据包括旋转数据、平移数据和缩放数据,旋转数据和平移数据的坐标系可以为本地坐标系,缩放数据的坐标系可以为画布坐标系。当待渲染三维元素的元素类型为静态三维元素时,从网格渲染器对应的网格序列帧中,获取静态三维元素对应的网格数据,静态三维元素对应的网格数据的坐标系可以为本地坐标系。当待渲染三维元素的元素类型为粒子三维元素时,可以从粒子渲染器对应的网格序列帧中,获取粒子三维元素对应的网格数据,粒子三维元素对应的网格数据的坐标系可以为画布坐标系。
继续参见图4C,在步骤505中,对网格数据进行变换。
作为示例,对获取到的蒙皮三维元素、静态三维元素以及粒子三维元素分别对应的网格数据,进行矩阵变换处理,得到变换网格数据。
在步骤506中,创建三维元素对应的转换二维元素。
作为示例,基于得到的变换网格数据,创建与每个待渲染三维元素对应的转换二维元素。
在步骤507中,统一每个转换二维元素的坐标系。
作为示例,参见图4D,将每个待渲染三维元素对应的转换二维元素进行坐标系的统一,以使得所渲染的转换二维元素在相同坐标系下。
下面,对于统一坐标系的具体实现方式进行详细说明。
在一些实施例中,在创建与每个待渲染三维元素对应的转换二维元素的过程中,完成坐标系的统一。从针对待渲染三维元素进行变换处理得到的变换网格数据中,读取第一变换网格数据;将第一变换网格数据转换为基于画布坐标系的第三变换网格数据;基于第三变换网格数据以及第二变换网格数据,创建与待渲染三维元素对应的转换二维元素。
作为示例,蒙皮三维元素对应的网格数据包括旋转数据、平移数据和缩放数据,旋转数据和平移数据的坐标系可以为本地坐标系,即第一变换网格数据为旋转数据和平移数据,第二变换网格数据为缩放数据,将第一变换网格数据转换为基于画布坐标系的第三变换网格数据,然后基于第三变换网格数据以及第二变换网格数据,创建与待渲染三维元素对应的转换二维元素。
在另一些实施例中,在渲染当前帧中的每个待渲染三维元素对应的转换二维元素的过程中,完成坐标系的统一。从得到的变换网格数据中,读取第一变换网格数据;将第一变换网格数据,转换为基于画布坐标系的第四变换网格数据;基于第四变换网格数据以及第二变换网格数据,创建用于直接在待渲染画布渲染的待渲染转换二维元素;渲染待渲染转换二维元素。
作为示例,参见图4E,图4E是本申请实施例提供的虚拟场景的渲染方法的原理示意图,在渲染当前帧中的每个待渲染三维元素对应的转换二维元素的过程中,完成坐标系的统一,将原始坐标系转换为画布坐标系。并对待渲染三维元素对应的网格数据进行矩阵变换处理,并对矩阵变换处理结果进行修正变换。
在一些实施例中,在渲染待渲染二维元素和转换二维元素之前,可以通过以下方式确定渲染顺序:首先,预先申请内存空间(Prepare Output),然后根据指令集和生成对应的渲染信息数据,基于渲染信息数据对渲染待渲染二维元素和转换二维元素进行排序,从而确定渲染待渲染二维元素和转换二维元素之间的渲染顺序。
在一些实施例中,参见图4F和图4G,图4F和图4G是本申请实施例提供的虚拟场景的渲染方法的原理示意图。图4F中所示出的是相关技术的时间开销,图4G所示出的是本申请实施例提供的虚拟场景的渲染方法的时间开销。可以看出,相关技术的时间开销为0.92毫秒(ms),中央处理器的总开销为1.58ms。而在本申请中,时间开销明显减少,从0.92ms变为0.02ms,中央处理器的总开销减少为0.83ms,性能明显提升。
如此,通过本申请实施例提供的虚拟场景的渲染方法,实现了三维元素和二维元素之间无障碍的混合使用,在能够有效提高二维元素和三维元素的混合渲染效果的同时,有效减少了时间开销,提升了开发效率,为处理更为复杂的三维元素提供了性能空间。
可以理解的是,在本申请实施例中,涉及到的当前帧数据等相关的数据,当本申请实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
下面继续说明本申请实施例提供的虚拟场景的渲染装置455的实施为软件模块的示例性结构，在一些实施例中，如图2所示，存储在存储器450的虚拟场景的渲染装置455中的软件模块可以包括：第一获取模块4551，配置为从虚拟场景的待渲染的当前帧数据中，获取至少一个待渲染三维元素和至少一个待渲染二维元素；采样模块4552，配置为对每个待渲染三维元素的动画进行采样处理，得到每个待渲染三维元素的动画对应的网格序列帧，其中，待渲染三维元素的动画包括当前帧以及至少一个历史帧，每个历史帧均包括待渲染三维元素；第二获取模块4553，配置为从网格序列帧中获取每个待渲染三维元素分别对应的网格数据，其中，不同的待渲染三维元素对应的网格数据的坐标系不同；变换模块4554，配置为对每个待渲染三维元素分别对应的网格数据进行变换处理，得到变换网格数据，并通过变换网格数据，创建与每个待渲染三维元素对应的转换二维元素；渲染模块4555，配置为渲染当前帧中的至少一个待渲染二维元素，并渲染当前帧中的每个待渲染三维元素对应的转换二维元素。
在一些实施例中,上述第二获取模块4553,还配置为针对任意一个待渲染三维元素执行以下处理:确定与待渲染三维元素的元素类型对应的渲染器的渲染器类型;从渲染器类型的渲染器对应的网格序列帧中,获取待渲染三维元素对应的网格数据。
在一些实施例中，上述第二获取模块4553，还配置为当待渲染三维元素的元素类型为蒙皮三维元素时，针对蒙皮三维元素执行以下处理：从蒙皮渲染器对应的网格序列帧中，获取蒙皮三维元素对应的网格数据；其中，蒙皮三维元素对应的网格数据包括平移数据、旋转数据和缩放数据，平移数据和旋转数据的坐标系以蒙皮三维元素所在位置为原点，缩放数据的坐标系以待渲染画布的中心点为原点。
在一些实施例中,上述第二获取模块4553,还配置为当待渲染三维元素的元素类型为静态三维元素时,针对静态三维元素执行以下处理:从网格渲染器对应的网格序列帧中,获取静态三维元素对应的网格数据;其中,静态三维元素对应的网格数据的坐标系以静态三维元素所在位置为原点。
在一些实施例中,上述第二获取模块4553,还配置为当待渲染三维元素的元素类型为粒子三维元素时,针对粒子三维元素执行以下处理:从中央处理器运行的渲染器对应的网格序列帧中,获取粒子三维元素对应的第一网格数据;从图形处理器运行的渲染器的对应网格序列帧中,获取粒子三维元素对应的第二网格数据;将第一网格数据和第二网格数据确定为粒子三维元素对应的网格数据;其中,粒子三维元素对应的网格数据的坐标系以待渲染画布的中心点为原点。
在一些实施例中,得到的变换网格数据包括:基于原始坐标系的第一变换网格数据,基于画布坐标系的第二变换网格数据,其中,画布坐标系以待渲染画布的中心点为原点,原始坐标系以转换二维元素所在位置为原点;上述变换模块4554,还配置为针对任意一个待渲染三维元素执行以下处理:从针对待渲染三维元素进行变换处理得到的变换网格数据中,读取第一变换网格数据;将第一变换网格数据转换为基于画布坐标系的第三变换网格数据;基于第三变换网格数据以及第二变换网格数据,创建与待渲染三维元素对应的转换二维元素。
在一些实施例中,上述渲染模块4555,还配置为渲染所创建的与待渲染三维元素对应的转换二维元素。
在一些实施例中,上述变换模块4554,还配置为基于第三变换网格数据和第二变换网格数据,确定每个待渲染三维元素在画布坐标系下的坐标;基于坐标和待渲染三维元素的几何特征,创建与每个待渲染三维元素对应的转换二维元素,其中,几何特征表征待渲染三维元素的几何形状。
在一些实施例中,第一变换网格数据包括以下至少之一:变换平移数据、变换旋转数据、静态变换网格数据;上述变换模块4554,还配置为当待渲染三维元素为蒙皮三维元素时,将基于原始坐标系的变换平移数据,转换为基于画布坐标系的变换平移数据,并将基于原始坐标系的变换旋转数据,转换为基于画布坐标系的变换旋转数据;当待渲染三维元素为静态三维元素时,将基于原始坐标系的静态变换网格数据,转换为基于画布坐标系的静态变换网格数据。
在一些实施例中,得到的变换网格数据包括:基于原始坐标系的第一变换网格数据,基于画布坐标系的第二变换网格数据,其中,画布坐标系以待渲染画布的中心点为原点,原始坐标系以转换二维元素所在位置为原点;上述变换模块4554,还配置为基于第一变换网格数据以及第二变换网格数据,创建与待渲染三维元素对应的转换二维元素。
在一些实施例中,上述渲染模块4555,还配置为针对任意一个转换二维元素执行以下处理:从得到的变换网格数据中,读取第一变换网格数据;将第一变换网格数据,转换为基于画布坐标系的第四变换网格数据;基于第四变换网格数据以及第二变换网格数据,创建用于直接在待渲染画布渲染的待渲染转换二维元素;渲染待渲染转换二维元素。
在一些实施例中,上述虚拟场景的渲染装置455还包括:排序模块,配置为申请第二内存空间;基于第一内存空间中的至少一个待渲染二维元素以及每个待渲染三维元素对应的转换二维元素,生成与待渲染二维元素以及转换二维元素分别对应的渲染数据;基于渲染数据,对第一内存空间中的待渲染二维元素以及转换二维元素进行排序处理,得到排序后待渲染二维元素以及排序后转换二维元素;将排序后待渲染二维元素以及排序后转换二维元素,存储至第二内存空间中,其中,排序处理用于确定元素之间的渲染顺序。
在一些实施例中，上述虚拟场景的渲染装置455还包括：顺序确定模块，配置为针对第一内存空间中的任意一个待渲染二维元素执行以下处理：基于待渲染二维元素的渲染数据，确定待渲染二维元素与第一内存空间中的其他元素之间的层级关系，其中，其他元素是第一内存空间中除待渲染二维元素以外的二维元素；基于层级关系，确定待渲染二维元素与第一内存空间中的其他元素之间的渲染顺序，其中，层级关系与渲染顺序正相关。
在一些实施例中,上述虚拟场景的渲染装置455还包括:禁用模块,配置为将用于渲染待渲染三维元素的三维渲染组件禁用。上述渲染模块4555,还用于调用二维渲染组件,按照渲染顺序依次渲染排序后待渲染二维元素以及排序后转换二维元素。
在一些实施例中,上述采样模块4552,还配置为针对任意一个待渲染三维元素执行以下处理:按照采样间隔,对每个待渲染三维元素的动画进行采样处理,得到每个待渲染三维元素的动画对应的多个采样帧,其中,采样帧的数量与采样间隔的时长负相关;在多个采样帧中,确定待渲染三维元素的动画对应的网格序列帧,其中,网格序列帧是多个采样帧中包括待渲染三维元素的采样帧。
在一些实施例中,上述采样模块4552,还配置为在待渲染三维元素的动画中,确定待渲染三维元素的起始播放时刻和结束播放时刻;基于起始播放时刻和结束播放时刻,在多个采样帧中确定待渲染三维元素的动画对应的网格序列帧。
在一些实施例中,上述采样模块4552,还配置为当起始播放时刻与结束播放时刻为相同时刻时,将多个采样帧中处于相同时刻的一个采样帧,确定为待渲染三维元素的动画对应的网格序列帧;当起始播放时刻与结束播放时刻为不同时刻时,将多个采样帧中处于起始播放时刻和结束播放时刻之间的至少两个采样帧,确定为待渲染三维元素的动画对应的网格序列帧。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的虚拟场景的渲染方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟场景的渲染方法,例如,如图3A示出的虚拟场景的渲染方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
综上,本申请实施例具有以下有益效果:
(1)通过将用于渲染待渲染三维元素的三维渲染组件禁用,单独调用二维渲染组件,按照渲染顺序依次渲染排序后待渲染二维元素以及排序后转换二维元素,从而有效保证了二维元素和三维元素的混合渲染效果。
(2)通过将用于渲染待渲染三维元素的三维渲染组件禁用,从而避免了三维渲染组件和二维渲染组件的混合使用,所导致的渲染混乱。
(3)通过在排序之前申请第二内存空间，从而便于后续将排序后待渲染二维元素以及排序后转换二维元素存储至第二内存空间中。
(4)通过将得到的变换网格数据的坐标系进行统一,从而便于在同一坐标系下对待渲染转换二维元素进行渲染,从而有效避免了坐标系不统一所导致的渲染效果紊乱,有效提高了渲染效果。
(5)通过将得到的变换网格数据的坐标系进行统一,从而便于在同一坐标系下对转换二维元素进行渲染,从而有效避免了坐标系不统一所导致的渲染效果紊乱,有效提高了渲染效果。
(6)通过从静态渲染器对应的网格序列帧中,获取静态三维元素对应的网格数据,从而有效保证了所获取的静态三维元素对应的网格数据的准确性。
(7)通过从蒙皮渲染器对应的网格序列帧中,获取蒙皮三维元素对应的网格数据,从而有效保证了所获取的蒙皮三维元素对应的网格数据的准确性。
(8)通过确定待渲染三维元素在动画中显示和消失的时刻,确定待渲染三维元素的起始播放时刻和结束播放时刻,从而便于后续根据起始播放时刻和结束播放时刻准确的确定出网格序列帧。
(9)通过对待渲染三维元素对应的网格数据进行变换处理,进而根据得到的变换网格数据,创建与待渲染三维元素对应的转换二维元素,从而实现了待渲染三维元素的转换,进而渲染待渲染二维元素和转换二维元素。如此,对待渲染三维元素进行转换得到转换二维元素,并渲染转换二维元素和待渲染二维元素实现了待渲染二维元素和待渲染三维元素的有效适配,实现待渲染三维元素和待渲染二维元素渲染模式的统一,使待渲染三维元素的渲染效果得以保留,从而有效提高二维元素和三维元素的混合渲染效果。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (21)

  1. 一种虚拟场景的渲染方法,由电子设备执行,所述方法包括:
    从所述虚拟场景的待渲染的当前帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素;
    对每个所述待渲染三维元素的动画进行采样处理,得到每个所述待渲染三维元素的动画对应的网格序列帧,其中,所述待渲染三维元素的动画包括所述当前帧以及至少一个历史帧,每个所述历史帧均包括所述待渲染三维元素;
    从所述网格序列帧中获取每个所述待渲染三维元素分别对应的网格数据,其中,不同的所述待渲染三维元素对应的所述网格数据的坐标系不同;
    对每个所述待渲染三维元素分别对应的网格数据进行变换处理,得到变换网格数据,并通过所述变换网格数据,创建与每个所述待渲染三维元素对应的转换二维元素;
    渲染所述当前帧中的所述至少一个待渲染二维元素,并渲染所述当前帧中的每个所述待渲染三维元素对应的所述转换二维元素。
  2. 根据权利要求1所述的方法,所述从所述网格序列帧中获取每个所述待渲染三维元素分别对应的网格数据,包括:
    针对任意一个所述待渲染三维元素执行以下处理:
    确定与所述待渲染三维元素的元素类型对应的渲染器的渲染器类型;
    从所述渲染器类型的渲染器对应的网格序列帧中,获取所述待渲染三维元素对应的网格数据。
  3. 根据权利要求2所述的方法,所述从所述渲染器类型的渲染器对应的网格序列帧中,获取所述待渲染三维元素对应的网格数据,包括:
    当所述待渲染三维元素的元素类型为蒙皮三维元素时,针对所述蒙皮三维元素执行以下处理:
    从蒙皮渲染器对应的网格序列帧中,获取所述蒙皮三维元素对应的网格数据;
    其中,所述蒙皮三维元素对应的网格数据包括平移数据、旋转数据和缩放数据,所述平移数据和所述旋转数据的坐标系以所述蒙皮三维元素所在位置为原点,所述缩放数据的坐标系以待渲染画布的中心点为原点。
  4. 根据权利要求2所述的方法,所述从所述渲染器类型的渲染器对应的网格序列帧中,获取所述待渲染三维元素对应的网格数据,包括:
    当所述待渲染三维元素的元素类型为静态三维元素时,针对所述静态三维元素执行以下处理:
    从网格渲染器对应的网格序列帧中,获取所述静态三维元素对应的网格数据;
    其中,所述静态三维元素对应的网格数据的坐标系以所述静态三维元素所在位置为原点。
  5. 根据权利要求2所述的方法,所述从所述渲染器类型的渲染器对应的网格序列帧中,获取所述待渲染三维元素对应的网格数据,包括:
    当所述待渲染三维元素的元素类型为粒子三维元素时,针对所述粒子三维元素执行以下处理:
    从中央处理器运行的渲染器对应的网格序列帧中,获取所述粒子三维元素对应的第一网格数据;
    从图形处理器运行的渲染器的对应网格序列帧中,获取所述粒子三维元素对应的第二网格数据;
    将所述第一网格数据和所述第二网格数据确定为所述粒子三维元素对应的网格数据;
    其中,所述粒子三维元素对应的网格数据的坐标系以待渲染画布的中心点为原点。
  6. 根据权利要求1所述的方法,
    所述变换网格数据包括:基于原始坐标系的第一变换网格数据,基于画布坐标系的第二变换网格数据,其中,所述画布坐标系以待渲染画布的中心点为原点,所述原始坐标系以所述转换二维元素所在位置为原点;
    所述通过所述变换网格数据,创建与每个所述待渲染三维元素对应的转换二维元素,包括:
    针对任意一个所述待渲染三维元素执行以下处理:
    从针对所述待渲染三维元素进行所述变换处理得到的所述变换网格数据中,读取所述第一变换网格数据;
    将所述第一变换网格数据转换为基于所述画布坐标系的第三变换网格数据;
    基于所述第三变换网格数据以及所述第二变换网格数据,创建与所述待渲染三维元素对应的转换二维元素。
  7. 根据权利要求6所述的方法,所述渲染所述当前帧中的每个所述待渲染三维元素对应的所述转换二维元素,包括:
    渲染所创建的与所述待渲染三维元素对应的转换二维元素。
  8. 根据权利要求6所述的方法,所述基于所述第三变换网格数据以及所述第二变换网格数据,创建与所述待渲染三维元素对应的转换二维元素,包括:
    基于所述第三变换网格数据和所述第二变换网格数据,确定每个所述待渲染三维元素在所述画布坐标系下的坐标;
    基于所述坐标和所述待渲染三维元素的几何特征,创建与每个所述待渲染三维元素对应的转换二维元素,其中,所述几何特征表征所述待渲染三维元素的几何形状。
  9. 根据权利要求6所述的方法,所述第一变换网格数据包括以下至少之一:变换平移数据、变换旋转数据、静态变换网格数据;
    所述将所述第一变换网格数据转换为基于所述画布坐标系的第三变换网格数据,包括:
    当所述待渲染三维元素为蒙皮三维元素时,将基于所述原始坐标系的变换平移数据,转换为基于所述画布坐标系的变换平移数据,并将基于所述原始坐标系的变换旋转数据,转换为基于所述画布坐标系的变换旋转数据;
    当所述待渲染三维元素为静态三维元素时,将基于所述原始坐标系的静态变换网格数据,转换为基于所述画布坐标系的静态变换网格数据。
  10. 根据权利要求1所述的方法,
    所述变换网格数据包括:基于原始坐标系的第一变换网格数据,基于画布坐标系的第二变换网格数据,其中,所述画布坐标系以待渲染画布的中心点为原点,所述原始坐标系以所述转换二维元素所在位置为原点;
    所述通过所述变换网格数据,创建与每个所述待渲染三维元素对应的转换二维元素,包括:
    基于所述第一变换网格数据以及所述第二变换网格数据,创建与所述待渲染三维元素对应的转换二维元素。
  11. 根据权利要求10所述的方法,
    所述渲染所述当前帧中的每个所述待渲染三维元素对应的转换二维元素,包括:
    针对任意一个所述转换二维元素执行以下处理:
    从所述变换网格数据中,读取所述第一变换网格数据;
    将所述第一变换网格数据,转换为基于所述画布坐标系的第四变换网格数据;
    基于所述第四变换网格数据以及所述第二变换网格数据,创建用于直接在所述待渲染画布渲染的待渲染转换二维元素;
    渲染所述待渲染转换二维元素。
  12. 根据权利要求1所述的方法,
    所述至少一个待渲染二维元素、以及每个所述待渲染三维元素对应的转换二维元素存储于第一内存空间中；
    所述渲染所述当前帧中的至少一个待渲染二维元素,并渲染所述当前帧中的每个所述待渲染三维元素对应的转换二维元素之前,所述方法还包括:
    申请第二内存空间;
    基于所述第一内存空间中的所述至少一个待渲染二维元素以及每个所述待渲染三维元素对应的转换二维元素,生成与所述待渲染二维元素以及所述转换二维元素分别对应的渲染数据;
    基于所述渲染数据,对所述第一内存空间中的所述待渲染二维元素以及所述转换二维元素进行排序处理,得到排序后待渲染二维元素以及排序后转换二维元素;
    将所述排序后待渲染二维元素以及所述排序后转换二维元素,存储至所述第二内存空间中,其中,所述排序处理用于确定元素之间的渲染顺序。
  13. 根据权利要求12所述的方法,所述方法还包括:
    针对所述第一内存空间中的任意一个所述待渲染二维元素执行以下处理:
    基于所述待渲染二维元素的渲染数据,确定所述待渲染二维元素与所述第一内存空间中的其他元素之间的层级关系,其中,所述其他元素是所述第一内存空间中除所述待渲染二维元素以外的二维元素;
    基于所述层级关系,确定所述待渲染二维元素与所述第一内存空间中的其他元素之间的渲染顺序,其中,所述层级关系与所述渲染顺序正相关。
  14. 根据权利要求12所述的方法,
    所述渲染所述当前帧中的每个所述待渲染三维元素对应的转换二维元素之前,所述方法还包括:
    将用于渲染所述待渲染三维元素的三维渲染组件禁用;
    所述渲染所述当前帧中的至少一个待渲染二维元素,并渲染所述当前帧中的每个所述待渲染三维元素对应的转换二维元素,包括:
    调用二维渲染组件,按照所述渲染顺序依次渲染所述排序后待渲染二维元素以及排序后转换二维元素。
  15. 根据权利要求1所述的方法,所述对每个所述待渲染三维元素的动画进行采样处理,得到每个所述待渲染三维元素的动画对应的网格序列帧,包括:
    针对任意一个所述待渲染三维元素执行以下处理:
    按照采样间隔,对每个所述待渲染三维元素的动画进行采样处理,得到每个所述待渲染三维元素的动画对应的多个采样帧,其中,所述采样帧的数量与所述采样间隔的时长负相关;
    在所述多个采样帧中,确定所述待渲染三维元素的动画对应的网格序列帧,其中,所述网格序列帧是所述多个采样帧中包括所述待渲染三维元素的采样帧。
  16. 根据权利要求15所述的方法,所述在所述多个采样帧中,确定所述待渲染三维元素的动画对应的网格序列帧,包括:
    在所述待渲染三维元素的动画中,确定所述待渲染三维元素的起始播放时刻和结束播放时刻;
    基于所述起始播放时刻和所述结束播放时刻,在所述多个采样帧中确定所述待渲染三维元素的动画对应的网格序列帧。
  17. 根据权利要求16所述的方法,所述基于所述起始播放时刻和所述结束播放时刻,在所述多个采样帧中,确定所述待渲染三维元素的动画对应的网格序列帧,包括:
    当所述起始播放时刻与所述结束播放时刻为相同时刻时,将所述多个采样帧中处于所述相同时刻的一个采样帧,确定为所述待渲染三维元素的动画对应的网格序列帧;
    当所述起始播放时刻与所述结束播放时刻为不同时刻时,将所述多个采样帧中处于所述起始播放时刻和所述结束播放时刻之间的至少两个采样帧,确定为所述待渲染三维元素的动画对应的网格序列帧。
  18. 一种虚拟场景的渲染装置,所述装置包括:
    第一获取模块,配置为从所述虚拟场景的待渲染的当前帧数据中,获取至少一个待渲染三维元素和至少一个待渲染二维元素;
    采样模块,配置为对每个所述待渲染三维元素的动画进行采样处理,得到每个所述待渲染三维元素的动画对应的网格序列帧,其中,所述待渲染三维元素的动画包括所述当前帧以及至少一个历史帧,每个所述历史帧均包括所述待渲染三维元素;
    第二获取模块,配置为从所述网格序列帧中获取每个所述待渲染三维元素分别对应的网格数据,其中,不同的所述待渲染三维元素对应的所述网格数据的坐标系不同;
    变换模块,配置为对每个所述待渲染三维元素分别对应的网格数据进行变换处理,得到变换网格数据,并通过所述变换网格数据,创建与每个所述待渲染三维元素对应的转换二维元素;
    渲染模块,配置为渲染所述当前帧中的所述至少一个待渲染二维元素,并渲染所述当前帧中的每个所述待渲染三维元素对应的所述转换二维元素。
  19. 一种电子设备,所述电子设备包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令或者计算机程序时,实现权利要求1至17任一项所述的虚拟场景的渲染方法。
  20. 一种计算机可读存储介质,存储有可执行指令或者计算机程序,所述可执行指令被处理器执行时实现权利要求1至17任一项所述的虚拟场景的渲染方法。
  21. 一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时实现权利要求1至17任一项所述的虚拟场景的渲染方法。
PCT/CN2022/135314 2022-03-11 2022-11-30 一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品 WO2023168999A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/378,066 US20240033625A1 (en) 2022-03-11 2023-10-09 Rendering method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210239108.0A CN116764203A (zh) 2022-03-11 2022-03-11 一种虚拟场景的渲染方法、装置、设备及存储介质
CN202210239108.0 2022-03-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/378,066 Continuation US20240033625A1 (en) 2022-03-11 2023-10-09 Rendering method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product

Publications (1)

Publication Number Publication Date
WO2023168999A1 true WO2023168999A1 (zh) 2023-09-14

Family

ID=87937129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/135314 WO2023168999A1 (zh) 2022-03-11 2022-11-30 一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品

Country Status (3)

Country Link
US (1) US20240033625A1 (zh)
CN (1) CN116764203A (zh)
WO (1) WO2023168999A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706830A (zh) * 2009-11-12 2010-05-12 中国人民解放军国防科学技术大学 对刚性材质物体表面网格模型进行钻孔后模型重建的方法
CN103559730A (zh) * 2013-11-20 2014-02-05 广州博冠信息科技有限公司 一种渲染方法及装置
US20150084949A1 (en) * 2013-09-26 2015-03-26 Abhishek Venkatesh Stereoscopic rendering using vertix shader instancing
CN106204704A (zh) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 虚拟现实中三维场景的渲染方法和装置
CN108243629A (zh) * 2015-11-11 2018-07-03 索尼公司 图像处理设备和图像处理方法
CN108479067A (zh) * 2018-04-12 2018-09-04 网易(杭州)网络有限公司 游戏画面的渲染方法和装置
CN109345616A (zh) * 2018-08-30 2019-02-15 腾讯科技(深圳)有限公司 三维虚拟宠物的二维渲染图的生成方法、设备及存储介质
CN112396689A (zh) * 2019-08-14 2021-02-23 富士施乐株式会社 三维形状数据生成装置及方法、三维成型装置及存储介质

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706830A (zh) * 2009-11-12 2010-05-12 中国人民解放军国防科学技术大学 对刚性材质物体表面网格模型进行钻孔后模型重建的方法
US20150084949A1 (en) * 2013-09-26 2015-03-26 Abhishek Venkatesh Stereoscopic rendering using vertix shader instancing
CN103559730A (zh) * 2013-11-20 2014-02-05 广州博冠信息科技有限公司 一种渲染方法及装置
CN108243629A (zh) * 2015-11-11 2018-07-03 索尼公司 图像处理设备和图像处理方法
US20180302603A1 (en) * 2015-11-11 2018-10-18 Sony Corporation Image processing apparatus and image processing method
CN106204704A (zh) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 虚拟现实中三维场景的渲染方法和装置
CN108479067A (zh) * 2018-04-12 2018-09-04 网易(杭州)网络有限公司 游戏画面的渲染方法和装置
CN109345616A (zh) * 2018-08-30 2019-02-15 腾讯科技(深圳)有限公司 三维虚拟宠物的二维渲染图的生成方法、设备及存储介质
CN112396689A (zh) * 2019-08-14 2021-02-23 富士施乐株式会社 三维形状数据生成装置及方法、三维成型装置及存储介质

Also Published As

Publication number Publication date
US20240033625A1 (en) 2024-02-01
CN116764203A (zh) 2023-09-19

Similar Documents

Publication Publication Date Title
US10592238B2 (en) Application system that enables a plurality of runtime versions of an application
EP1594091B1 (en) System and method for providing an enhanced graphics pipeline
US7978205B1 (en) Systems and methods for providing an enhanced graphics pipeline
Cozzi et al. OpenGL insights
CN110825467B (zh) 渲染方法、装置、硬件装置和计算机可读存储介质
CN103970518A (zh) 一种逻辑窗口的3d渲染方法和装置
WO2023197762A1 (zh) 图像渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品
WO2022095526A1 (zh) 图形引擎和适用于播放器的图形处理方法
CN116610881A (zh) 一种基于低代码软件的WebGL浏览交互方法
WO2005086629A2 (en) Ingeeni flash interface
WO2023168999A1 (zh) 一种虚拟场景的渲染方法、装置、电子设备、计算机可读存储介质及计算机程序产品
CN112862981B (zh) 用于呈现虚拟表示的方法和装置、计算机设备和存储介质
Stenning Direct3D Rendering Cookbook
WO2023184357A1 (zh) 一种表情模型制作的方法、装置及电子设备
US20240111496A1 (en) Method for running instance, computer device, and storage medium
CN117611763A (zh) 一种建筑群模型的生成方法、装置、介质及设备
CN117036562A (zh) 一种三维显示方法和相关装置
CN117437346A (zh) 图像处理方法、装置、电子设备、存储介质及程序产品
CN117839223A (zh) 游戏编辑预览方法、装置、存储介质与电子设备
CN114445532A (zh) 树冠状模型的信息处理方法、装置、电子设备及存储介质
CN117596377A (zh) 画面推流方法、装置、电子设备、存储介质及程序产品
CN115272559A (zh) 动态物体的烘焙方法及装置、介质、电子设备
CN117519686A (zh) 三维ui的开发、三维ui的开发装置及存储介质
CN117708454A (zh) 网页内容处理方法、装置、设备、存储介质及程序产品
CN117093069A (zh) 一种混合应用的跨维度交互方法、装置及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930622

Country of ref document: EP

Kind code of ref document: A1