WO2023138170A1 - Method, apparatus, and electronic device for capturing the motion trajectory of a virtual object to be rendered - Google Patents

Method, apparatus, and electronic device for capturing the motion trajectory of a virtual object to be rendered

Info

Publication number
WO2023138170A1
Authority
WO
WIPO (PCT)
Prior art keywords
primitive
virtual object
tracking
rendered
virtual
Prior art date
Application number
PCT/CN2022/130798
Other languages
English (en)
French (fr)
Inventor
黄政 (Huang Zheng)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority to US18/329,897 (published as US20230316541A1)
Publication of WO2023138170A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30241 - Trajectory

Definitions

  • the present application relates to the technical field of cloud computing, and in particular to a method for capturing the trajectory of a virtual object to be rendered, a method for rendering a virtual object, a device for capturing the trajectory of a virtual object to be rendered, a device for rendering a virtual object, an electronic device, and a computer-readable storage medium.
  • In related technologies, 3D virtual images (for example, 3D avatars) need to be rendered before they can be displayed.
  • To track the motion of virtual objects, the virtual objects in the scene are often supplemented and drawn directly.
  • However, directly drawing/tracking virtual objects in the scene causes various problems. For example, direct drawing often results in a large number of redundant surfaces, leading to excessive calculation.
  • As a result, excessive computing resources are occupied.
  • To this end, embodiments of the present application provide a method for capturing a motion track of a virtual object to be rendered, a method for rendering a virtual object, a method for determining a tracking primitive, a device for determining a tracking primitive, an electronic device, and a computer-readable storage medium.
  • An embodiment of the present application provides a method for capturing a motion trajectory of a virtual object to be rendered, which is executed by an electronic device, including: based on the virtual object to be rendered in a virtual scene, determining at least one tracking primitive corresponding to the virtual object to be rendered, wherein the tracking primitive corresponding to the virtual object to be rendered covers a movable part of the virtual object to be rendered; drawing the tracking primitive corresponding to the virtual object to be rendered in the virtual scene; and capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the drawn motion trajectory corresponding to the tracking primitive.
  • the present application also provides a method for rendering a virtual object, which is executed by an electronic device, comprising: drawing a tracking primitive corresponding to the virtual object to be rendered in a virtual scene, the drawn tracking primitive covering the movable part of the virtual object to be rendered and not displayed on the display screen; capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and displaying the virtual object to be rendered on the display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  • the present application also provides a method for drawing a tracking primitive, which is executed by an electronic device, and the tracking primitive is used to assist the rendering of a virtual object.
  • the method includes: acquiring a motion state of a virtual object to be rendered in a virtual scene; determining at least one primitive mark corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; and drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered.
  • the present application also provides a device for capturing a motion trajectory of a virtual object to be rendered, including: a tracking primitive determination module, configured to determine at least one tracking primitive corresponding to the virtual object to be rendered based on the virtual object to be rendered in the virtual scene, where the tracking primitive corresponding to the virtual object to be rendered covers the movable part of the virtual object to be rendered; a tracking primitive drawing module, used to draw the tracking primitive corresponding to the virtual object to be rendered in the virtual scene; and a motion trajectory capturing module, configured to capture, based on the motion trajectory corresponding to the drawn tracking primitive, the motion trajectory of the virtual object to be rendered in the virtual scene.
  • the present application also provides a device for rendering a virtual object, including: a tracking primitive drawing module, configured to draw a tracking primitive corresponding to the virtual object to be rendered in a virtual scene, and the drawn tracking primitive covers a movable part of the virtual object to be rendered; a motion trajectory capturing module, configured to capture a motion trajectory of the virtual object to be rendered in the virtual scene based on a motion trajectory corresponding to the drawn tracking primitive; and a display module, configured to display the virtual object to be rendered on a display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  • the present application also provides a device for drawing a tracking primitive, the tracking primitive is used to assist the rendering of a virtual object, and the device includes: a scene management module, configured to: acquire a motion state of a virtual object to be rendered in a virtual scene; determine at least one primitive mark corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; and a primitive drawing module, configured to: draw a tracking primitive corresponding to the virtual object to be rendered in the virtual scene.
  • Some embodiments of the present application provide an electronic device, including: a processor; and a memory, where computer instructions are stored in the memory, and the above method is implemented when the computer instructions are executed by the processor.
  • Some embodiments of the present application provide a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed by a processor, the above method is implemented.
  • Some embodiments of the present application provide a computer program product, which includes computer-readable instructions, and when executed by a processor, the computer-readable instructions cause the processor to perform the above-mentioned method.
  • FIG. 1 is a schematic diagram illustrating an application scenario according to some embodiments of the present application
  • FIG. 2 is a schematic diagram illustrating a virtual object to be rendered according to some embodiments of the present application
  • FIG. 3 is a schematic diagram illustrating a projection surface of a virtual object to be rendered according to some embodiments of the present application
  • FIG. 4 is a flow chart illustrating a method for capturing a motion trajectory of a virtual object to be rendered according to some embodiments of the present application
  • FIG. 5A is a schematic diagram of an example of a tracking primitive according to various embodiments of the present application.
  • FIG. 5B is another schematic diagram of an example of a tracking primitive according to various embodiments of the present application.
  • FIG. 6 is a schematic diagram of a method for capturing a motion trajectory of a virtual object to be rendered according to various embodiments of the present application
  • FIG. 7A is a schematic diagram of an apparatus for drawing tracking primitives according to various embodiments of the present application.
  • FIG. 7B is a flow chart corresponding to the steps that the scene management module of the device for drawing tracking primitives is configured to execute according to various embodiments of the present application;
  • FIG. 7C is a flowchart corresponding to the steps executed by the primitive creation module of the device for drawing tracking primitives according to various embodiments of the present application;
  • FIG. 7D is a flowchart corresponding to the steps executed by the primitive drawing module of the device for drawing tracking primitives according to various embodiments of the present application;
  • FIG. 8 is a schematic diagram illustrating an electronic device according to some embodiments of the present application.
  • FIG. 9 is a schematic diagram illustrating a computer architecture according to some embodiments of the present application.
  • FIG. 10 is a schematic diagram illustrating a computer-readable storage medium according to some embodiments of the present application.
  • FIG. 1 shows a schematic diagram of an application scenario 100 according to some embodiments of the present application, in which a server 110 and multiple terminals 120 are schematically shown.
  • the terminal 120 and the server 110 may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
  • the server 110 may be an independent server, or may be a server cluster or a distributed system composed of multiple physical servers, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery network (CDN) services, and the like, which is not specifically limited in this embodiment of the present application.
  • each terminal in the plurality of terminals 120 may be a fixed terminal such as a desktop computer, a mobile terminal having a network function such as a smart phone, a tablet computer, a portable computer, a handheld device, a personal digital assistant, a smart wearable device, a vehicle-mounted terminal, or any combination thereof, which is not specifically limited in this embodiment of the present application.
  • the server 110 and the terminal 120 can be connected through a network to realize the normal execution of the game application.
  • the network can be an Internet of Things (IoT) based on the Internet and/or a telecommunication network. It can be a wired network or a wireless network, for example, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), or a cellular data communication network.
  • Networked game applications usually rely on the graphics card on the server to synthesize the game screen to be displayed on the terminal (that is, to render the game screen). Such game applications are also called cloud games.
  • the terminal 120 can transmit the user's game operation data to the server 110 through the control stream, and the server 110 returns one or more audio frames and video frames to the terminal 120 through the data stream (for example, by streaming).
  • the terminal 120 may also encode user operations (that is, input events) and transmit them to the server through the control stream.
  • the server may also determine one or more corresponding audio frames and video frames according to the received input event.
  • both the server 110 and the terminal 120 can transmit data streams and control streams through protocols such as the Real Time Streaming Protocol (RTSP), the Real-time Transport Protocol (RTP), or the Real-time Transport Control Protocol (RTCP), but the present application is not limited thereto.
  • Referring to FIG. 2, it shows four scenarios that may appear in a certain cloud game.
  • the user manipulates the game character to track the virtual mechanical dog.
  • the virtual mechanical dog may be running above the game character manipulated by the user, while in some game video frames, the virtual mechanical dog may run below or beside the user-controlled game character, or run towards the user-controlled game character.
  • FIG. 2 is only an example scene, and the present application is not limited thereto.
  • the server 110 renders and encodes the audio frames/video frames after acquiring the data of the user operating the game on the terminal 120, and then transmits the encoded audio frames/video frames to the terminal 120 through the data stream.
  • the server 110 can reproduce the input event to process the next audio frame/video frame of the cloud game or obtain the running result of the cloud game.
  • the user's input event may include, for example, at least one of instructions for commanding a certain character in the game to advance, retreat, shoot or jump, etc.
  • the running result of the cloud game may include at least one of game victory or failure, for example.
  • the terminal 120 may decode the audio frame/video frame, and then play the decoded audio frame/video frame.
  • the server 110 mainly uses its own graphics card resources (such as GPU computing resources) to encode or render various virtual objects to obtain various scene pictures as shown in FIG. 2 .
  • GPU computing resources mainly refer to the computing resources corresponding to a graphics processing unit (GPU).
  • A GPU, also known as a display chip, is a microprocessor that specializes in image- and graphics-related calculations on personal computers, workstations, game consoles, and some mobile devices.
  • the server may also use its own CPU computing resources to perform the above rendering operations.
  • A CPU (Central Processing Unit) is mainly used as the computing and control core of a computer system, and is the final execution unit for information processing and program operation.
  • the rendering process is, for example, using the models specific to each cloud game to generate audio frames or video frames.
  • a cloud game-specific model refers to the description of a strictly defined three-dimensional object or virtual scene using language or data structure, which includes information such as geometry, viewpoint, texture, lighting, and shadow.
  • the game’s model can describe the scene of the game event (for example, the image of the virtual mechanical dog, including the shape of the virtual mechanical dog viewed from the point of view of the user-controlled game character, clothing texture, lighting conditions, the sound of the virtual mechanical dog, background sound, etc.).
  • Rendering calculations can convert these descriptions into audio frames and/or video frames to form images and sounds that users will see and hear at the game client.
  • the CPU and GPU in the server 110 can cooperate with each other to complete a rendering.
  • the CPU and GPU work in parallel with a command buffer between them.
  • the CPU will submit a command (for example, a draw call command) to the command buffer to command the GPU rendering operation.
  • before the CPU submits a draw call, it needs to process a large number of data calculations, for example preparing data, states, commands, and so on.
  • the corresponding GPU unit can sequentially perform operations such as vertex shading (Vertex Shading), shape assembly (Shape Assembly), geometry shading (Geometry Shader), rasterization (Rasterization), and fragment shading (Fragment Shader) to calculate the RGB (red, green, blue) value of each pixel in the video frame, and then obtain the image to be displayed by the game client.
  • the device performing rendering processing may also be the terminal 120, that is, the method according to the embodiment of the present application may be carried on the terminal 120 in whole or in part, so as to use the GPU computing resources and CPU resources of the terminal 120 to perform rendering processing on various virtual objects or virtual scenes.
  • the above rendering process may also be performed by a system composed of a terminal and a server. This application does not limit this.
  • Referring to FIG. 3, it shows a schematic diagram of the projection surface of the virtual mechanical dog to be rendered. To track the movement of the virtual mechanical dog in FIG. 2, the movement needs to be projected onto the corresponding motion plane, and the movement of the virtual mechanical dog in FIG. 2 is then tracked and recorded on this plane.
  • for a virtual mechanical dog running above or below the game character, its movement on the horizontal plane (that is, the plane formed by the X and Y axes) can be tracked and recorded from a top/bottom view.
  • for the virtual mechanical dog running beside the game character or head-on toward it, its movement on the vertical plane can be tracked and recorded from a front view.
  • drawing submission means that the CPU calls a drawing interface of a rendering API, such as DirectX's DrawPrimitive/DrawIndexedPrimitive, or OpenGL's glDrawElements/glDrawArrays.
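  • As a hedged illustration of such a drawing submission, the following minimal C++/OpenGL sketch assumes an already initialized OpenGL context, a bound vertex array object, a compiled shader program, and a known index count; all names are illustrative only:

      #include <GL/glew.h> // any OpenGL function loader works here

      // One drawing submission: the CPU records a command that the GPU then
      // executes through its vertex, rasterization, and fragment stages.
      void submitDraw(GLuint program, GLuint vao, GLsizei indexCount) {
          glUseProgram(program);
          glBindVertexArray(vao);
          glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
      }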
  • the embodiments of the present application provide a method for capturing a motion trajectory of a virtual object to be rendered, a method for rendering a virtual object, a method for determining a tracking primitive, a device for determining a tracking primitive, a device for capturing a motion trajectory of a virtual object to be rendered, a device for rendering a virtual object, an electronic device, and a computer-readable storage medium.
  • the embodiments of the present application use tracking primitives to replace the original virtual objects in the scene, so that the 3D rendering program can achieve higher controllability and precision control for motion tracking, reduce the amount of calculation, and thus reduce the occupation of computer resources.
  • some embodiments of the present application also use modern multi-instance rendering technology to reduce the number of hardware rendering submissions, thereby optimizing the performance of 3D rendering and improving the computing efficiency of 3D rendering.
  • FIG. 4 is a flow chart of a method 400 for capturing a motion track of a virtual object to be rendered according to an embodiment of the present application.
  • 5A and 5B are schematic diagrams of examples of tracking primitives according to various embodiments of the present application.
  • FIG. 6 is a schematic diagram of a method for capturing a motion track of a virtual object to be rendered according to various embodiments of the present application.
  • each embodiment corresponding to the method 400 includes steps 401 to 403 .
  • the calculation corresponding to step 401 is also referred to as the initialization phase of the method 400 .
  • the calculations corresponding to steps 402 to 403 are also referred to as the rendering phase of the method 400 .
  • Those skilled in the art should understand that various embodiments of the present application may also include other possible steps, and the scope of the present application is not limited to the examples in FIG. 4, FIG. 5A, and FIG. 5B.
  • the method 400 may further include one or more steps related to the creation/determination of the virtual scene and the virtual object to be rendered.
  • the virtual environment may be a virtual environment displayed (or provided) when an application program is run on a terminal.
  • the virtual environment can be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment and a three-dimensional virtual environment, which is not limited in this application.
  • the following embodiments are described with an example that the virtual environment is a three-dimensional virtual environment.
  • the virtual object may refer to a movable object in the virtual environment.
  • the movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in a three-dimensional virtual environment.
  • the virtual object is a three-dimensional model created based on animation skeleton technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • the virtual object to be rendered will be presented on a display screen, for example, the display screen of the terminal 120 .
  • the display screen will display the motion animation corresponding to the virtual object to be rendered.
  • various embodiments of the present application also relate to computer vision technology (Computer Vision, CV).
  • Computer vision is a science that studies how to make machines "see". More specifically, it refers to using cameras and computers, instead of human eyes, to identify, track, and measure targets, and to further perform graphics processing so that the result becomes more suitable for human eyes to observe or for transmission to instruments for detection.
  • the virtual scene may present a virtual scene "viewed" from the perspective of the controlled virtual object.
  • the controlled virtual object can be bound to the camera model in the 3D rendering process.
  • the camera model is equivalent to the eyes of the 3D game world.
  • the controlled virtual object can "see” the three-dimensional world in the game, and then see various other virtual objects in the scene that may interact with it.
  • other viewing angles may also be set, and one or more camera models are set based on the viewing angles, and corresponding scene pictures are presented on the display screen of the terminal 120 .
  • The following describes a method for capturing a motion track of a virtual object to be rendered, executed by an electronic device; the electronic device may be a terminal or a server, and the method includes:
  • Step 401 Determine at least one tracking primitive corresponding to the virtual object to be rendered based on the virtual object to be rendered in the virtual scene, and the tracking primitive corresponding to the virtual object to be rendered covers a movable part of the virtual object to be rendered.
  • the virtual object to be rendered is a controlled virtual object currently moving, for example, a walking virtual character.
  • a trace primitive is a type of primitive.
  • a primitive (Primitive) is a kind of graphics data, which corresponds to a visible entity on a drawing interface, for example, some simple shapes in 3D or 2D space. Primitives can be easily described and drawn in software. Common primitives in 3D space include cubes, spheres, ellipsoids, or cylinders. Common primitives in 2D space include squares, circles, ellipses, or rectangles.
  • a primitive is the basic unit that can be rendered, that is, a geometry that can be drawn directly with glDrawArrays or glDrawElements.
  • a primitive can be described by a series of vertices (Vertex), and each vertex contains multiple attributes (Attribute), such as vertex coordinates, normal vectors, color, and depth values of the primitive.
  • the above merely exemplifies the concept of primitives, and the present application is not limited thereto.
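  • As a minimal C++ sketch of the vertex-based description above (the exact field layout is an illustrative assumption, not something mandated by the present application):

      #include <vector>

      struct Vertex {
          float position[3]; // vertex coordinates
          float normal[3];   // normal vector
          float color[4];    // RGBA color
          float depth;       // depth value
      };

      // A primitive is described by an ordered series of such vertices.
      struct Primitive {
          std::vector<Vertex> vertices;
      };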
  • the angle of view corresponding to the camera model is the left front side of the avatar.
  • FIG. 5A schematically shows two tracking primitives corresponding to the avatar, and the two tracking primitives are the head and the right forearm of the avatar.
  • the virtual object to be rendered includes not only the walking virtual character, but also houses and trees behind the walking virtual character.
  • the angle of view corresponding to the camera model is the left front side of the controlled virtual object.
  • FIG. 5B schematically shows two tracking primitives corresponding to the controlled virtual object, and the two tracking primitives are the head and the right forearm of the virtual character.
  • no tracking primitive is set for the stationary virtual object, so as to reduce the number of drawn faces.
  • tracking primitives can mask the complex avatar's head and right forearm mesh surfaces.
  • the head of the avatar may correspond to a sphere-type tracking primitive, and the right forearm may correspond to a box-type tracking primitive. In FIG. 5B, both the sphere-type and the box-type tracking primitives are shown with a black outline and a gray fill.
  • more than one virtual object is also shown in FIG. 5B , for example, virtual objects corresponding to houses and trees. The virtual objects corresponding to houses and trees do not move; therefore, in order to reduce the amount of calculation, it is not necessary to set corresponding tracking primitives for these virtual objects.
  • the present application is not limited thereto.
  • Because the tracking primitive corresponding to the virtual object to be rendered follows the mesh model corresponding to the virtual object to be rendered, tracking primitives can be set only for moving virtual objects, reducing the amount of calculation.
  • tracking primitives are automatically generated/created based on primitive tags. These primitive marks may be marked by the developer in advance for each virtual object serving as a game character. For example, in some embodiments, in order to avoid conflicts or superpositions between elements among the virtual objects, the developer may set primitive marks for each virtual object in advance for collision detection.
  • the tracking primitives of this application can reuse the primitive tags used for collision detection. It is worth noting that although the collision detection primitive corresponding to the primitive mark used for collision detection is correspondingly generated in the possible collision detection, the collision detection primitive is often close to the mesh surface corresponding to the virtual object, and will not cover the movable part of the virtual object. That is, in such a case, although the present application reuses primitive tags for collision detection, the method 400 generates tracking primitives different from collision detection primitives based on such primitive tags. Of course, the present application is not limited thereto.
  • step 401 as an initialization phase further includes a scene management sub-step and a primitive creation sub-step, and the scene management sub-step and the primitive creation sub-step determine the tracking primitive based on the primitive mark.
  • the virtual objects collected in the 3D virtual scene may also be diverse, with various geometric shapes and mesh models.
  • FIG. 6 simply shows some complex virtual objects in the 3D scene.
  • simple tracking primitives are used to mask the mesh surfaces of these complex virtual objects.
  • step 401 may only use three to five types of primitives to express all complex virtual object geometric features and motion trajectories.
  • primitive types optionally include: sphere, (rounded) box, quasi-cylinder, ellipsoid, triangular pyramid, and the like. Although only the above primitive types are listed here, other types of primitives may exist in actual situations. For example, in a 2D scene, primitive types optionally include: circle, (rounded) rectangle, ellipse, triangle, square, and rectangle, among others. Whatever primitive type is used, including types not listed above, it falls within the scope covered by the present application.
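  • For illustration only, the handful of primitive types listed above, together with the transform attributes (position, scale, and rotation) mentioned later in this description, could be expressed in C++ as follows; the exact set of enumerators and the quaternion representation are assumptions:

      enum class PrimitiveType { Sphere, RoundedBox, Cylinder, Ellipsoid, TriangularPyramid };

      struct TrackingPrimitive {
          PrimitiveType type;
          float position[3]; // displacement
          float scale[3];    // scaling
          float rotation[4]; // rotation, stored here as a quaternion (assumed)
      };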
  • the scene management sub-step can be performed by the scene management module 701 described in detail later.
  • the electronic device may obtain the motion state of the virtual object to be rendered in the virtual scene, and determine at least one primitive mark corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered.
  • the electronic device determines at least one tracking primitive corresponding to the virtual object to be rendered based on at least one primitive flag corresponding to the virtual object to be rendered.
  • the electronic device can also provide the state of the controlled virtual object and the parameters collected from the environment where the controlled virtual object is located to the motion tracking system of the cloud game, so as to determine the moving or stationary virtual object in the virtual scene.
  • In this way, the primitive mark is determined according to the motion state, and the tracking primitive is determined according to the primitive mark, thereby determining the tracking primitive according to the motion state.
  • only moving virtual objects can be tracked.
  • at least one primitive mark corresponding to the virtual object to be rendered is added to the primitive mark list.
  • the primitive mark list will be shown in Table 2 described in detail later.
  • the electronic device may also cover a movable part such as the torso, whose movement is simple, changes infrequently, or changes only slightly, with a single tracking primitive of larger area.
  • the primitive creation sub-step may be performed by the primitive creation module 702 described in detail later.
  • the electronic device determines whether the tracking primitive corresponding to the virtual object to be rendered has been created based on at least one primitive flag corresponding to the virtual object to be rendered; if the tracking primitive corresponding to the virtual object to be rendered has been created, update the corresponding tracking primitive in the list of primitives to be drawn; if the tracking primitive corresponding to the virtual object to be rendered has not been created, create a corresponding tracking primitive, and add the created tracking primitive to the list of primitives to be drawn.
  • the list of primitives to be drawn will be shown in Table 3 described in detail later.
  • In this way, tracking primitives are created and updated on demand, which improves the efficiency of managing tracking primitives.
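  • A hedged sketch of this create-or-update logic follows; MarkId, the map type, and the function name are assumptions, and TrackingPrimitive is the illustrative struct sketched earlier:

      #include <cstdint>
      #include <unordered_map>

      using MarkId = std::uint64_t; // identifier carried by a primitive mark (assumed)

      // The list of primitives to be drawn, keyed by primitive mark.
      std::unordered_map<MarkId, TrackingPrimitive> toDrawList;

      void createOrUpdate(MarkId mark, const TrackingPrimitive& prim) {
          auto it = toDrawList.find(mark);
          if (it != toDrawList.end()) {
              it->second = prim;              // already created: update the entry
          } else {
              toDrawList.emplace(mark, prim); // not yet created: add it to the list
          }
      }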
  • Step 402 in the virtual scene, draw the tracking primitive corresponding to the virtual object to be rendered.
  • the drawn tracking primitives are not displayed on the display screen.
  • the electronic device may draw the determined tracking primitives through a draw call.
  • In addition to the attributes corresponding to the tracking primitives described in detail above, the draw call can also carry the values related to the motion trajectory corresponding to each tracking primitive.
  • the motion trajectory-related values corresponding to these tracking primitives may be determined by the motion tracking system based on the tracking primitives created above. For example, in a virtual scene, the controlled virtual object and the surrounding virtual interactive objects will change correspondingly according to the instructions issued by the end user.
  • the CPU of the cloud server 110 determines the values related to the motion trajectory corresponding to each tracking primitive according to various instructions triggered by the user of the terminal 120, and then submits these values to the GPU of the cloud server 110 through a draw call for subsequent rendering.
  • drawing the tracking primitive corresponding to the virtual object to be rendered further includes: determining the attribute change of the tracking primitive corresponding to the virtual object to be rendered based on the interaction data of each virtual object in the virtual scene and the operation data of the controlled virtual object; and drawing the tracking primitive corresponding to the virtual object to be rendered based on the attribute change corresponding to the tracking primitive.
  • In this way, the tracking primitive can follow the movement and posture of the mesh model of the original virtual object, and perform appropriate displacement transformation, scaling, and rotation, which improves the tracking accuracy of the tracking primitive.
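  • A sketch of this per-frame attribute update follows; ObjectState and its fields are assumptions standing in for the mesh model's current transform:

      #include <algorithm>
      #include <iterator>

      struct ObjectState { float position[3]; float scale[3]; float rotation[4]; };

      // Keep the tracking primitive aligned with the moving mesh model by copying
      // its displacement, scaling, and rotation each frame.
      void followMesh(TrackingPrimitive& prim, const ObjectState& obj) {
          std::copy(std::begin(obj.position), std::end(obj.position), std::begin(prim.position));
          std::copy(std::begin(obj.scale),    std::end(obj.scale),    std::begin(prim.scale));
          std::copy(std::begin(obj.rotation), std::end(obj.rotation), std::begin(prim.rotation));
      }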
  • Although these tracking primitives will not actually be displayed on the display screen of the terminal 120, the embodiments of the present application may use them to track the motion trajectory of the virtual object.
  • these tracking primitives can also be displayed on the display screen of the debugger or developer, so that the debugger or developer can make detailed adjustments to the game.
  • step 402 may also use an instantiation technique to draw each tracking primitive.
  • step 402 further includes: classifying the tracking primitives according to the primitive types of the tracking primitives to obtain a tracking primitive set corresponding to each primitive type, and then submitting a drawing submission for drawing all the tracking primitives in the tracking primitive set corresponding to each primitive type at one time.
  • the electronic device may submit a drawing submission for drawing all tracking primitives in the tracking primitive collection at one time through a glDrawArraysInstanced command or a glDrawElementsInstanced command in OpenGL.
  • the electronic device may submit a draw call for one-time drawing of all tracking primitives in the tracking primitive collection through the DrawIndexedPrimitive command in Direct3D.
  • the drawing of all tracking primitives of the same type can be completed in one drawing submission (draw call), which improves the computing efficiency.
  • For example, the drawing of all triangular-pyramid-type primitives will be completed in one rendering submission, and the number of drawing submissions used to capture the motion track of virtual objects is thereby compressed to 3 to 5.
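  • A hedged OpenGL sketch of this per-type batching follows; it assumes the per-instance transforms of one primitive type were already uploaded to an instance buffer bound to that type's vertex array object:

      // One instanced draw call per primitive type, instead of one per primitive.
      void drawTypeBatch(GLuint program, GLuint vaoForType,
                         GLsizei vertexCount, GLsizei instanceCount) {
          glUseProgram(program);
          glBindVertexArray(vaoForType);
          glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);
      }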
  • an indirect drawing (Indirect Draw) technique may also be used to draw each tracking primitive.
  • In this way, less switching between the CPU and the GPU is needed during rendering.
  • virtual scene data can be prepared in the CPU and then submitted to the GPU at one time. This application does not limit this.
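  • Under the same assumptions, an indirect-draw sketch could fill OpenGL's standard indirect-command layout on the CPU and submit it in one call:

      // Command layout required by glDrawArraysIndirect (OpenGL 4.0 and later).
      struct DrawArraysIndirectCommand {
          GLuint count;         // vertices per instance
          GLuint instanceCount; // number of instances of this primitive type
          GLuint first;         // first vertex
          GLuint baseInstance;  // first instance
      };

      void drawIndirect(GLuint vaoForType, GLuint indirectBuffer) {
          glBindVertexArray(vaoForType);
          glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
          glDrawArraysIndirect(GL_TRIANGLES, nullptr); // command is read from the bound buffer
      }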
  • Step 403 capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
  • the electronic device can determine the interaction data between the drawn tracking primitives based on the drawn tracking primitives, and then determine the motion trajectory corresponding to the drawn tracking primitives based on the interaction data between the drawn transparent tracking primitives; and capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitives.
  • the electronic device can further realize the mapping from the motion trajectory of the tracking primitive to the motion trajectory of the object to be rendered based on a distance-field fast-stepping technique.
  • the electronic device may draw the motion trajectory corresponding to the drawn tracking primitive into a target texture or buffer.
  • the electronic device can use the distance field function to quickly capture the tracking primitives that have been drawn in batches according to the primitive type, so as to determine the motion track of the object to be rendered.
  • the electronic device can use a signed distance field (Signed Distance Field, SDF) function to realize the mapping from the motion track of the tracking primitive to the motion track of the object to be rendered.
  • the SDF function can be represented by a scalar field function or a volume map, which realizes the mapping from the tracking primitive to the object to be rendered through the distance from a point in space to the nearest triangular surface.
  • the electronic device can use the above-mentioned distance field function to sample the buffer area (this process is also called trajectory picking), and then use the sampled value to determine whether the sampling point is within the trajectory of the tracking primitive, thereby determining the trajectory of the object to be rendered.
  • the motion trajectory corresponding to the drawn tracking primitives is determined, which improves the accuracy of the motion trajectory.
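  • A minimal sketch of such a distance-field test, using the sphere primitive type as an example (the sign convention is the usual SDF one: negative inside, zero on the surface, positive outside; the picking threshold is an assumption):

      #include <cmath>

      // Signed distance from point p to a sphere with center c and radius r.
      float sdfSphere(const float p[3], const float c[3], float r) {
          const float dx = p[0] - c[0], dy = p[1] - c[1], dz = p[2] - c[2];
          return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
      }

      // Trajectory picking: a sampled point lies within the trajectory drawn for a
      // sphere-type tracking primitive when the sampled distance is non-positive.
      bool insideTrajectory(const float p[3], const float c[3], float r) {
          return sdfSphere(p, c, r) <= 0.0f;
      }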
  • the method 400 further includes step 404 .
  • step 404 the virtual object to be rendered is displayed on the display screen based on the motion track of the virtual object to be rendered in the virtual scene.
  • the electronic device may render the virtual object to be rendered in the virtual scene based on the motion track of the virtual object to be rendered in the virtual scene, and display the virtual object to be rendered on the display screen.
  • the display position of the virtual object to be rendered can be precisely positioned according to the motion trajectory, thereby improving the display effect.
  • the tracking primitive corresponding to the virtual object to be rendered has been determined in advance.
  • For example, the cloud game has been running smoothly for a period of time, so there is no need to create tracking primitives for the virtual objects corresponding to each game character; instead, rendering of the virtual objects can proceed directly based on these known tracking primitives.
  • the present application also provides a method for rendering a virtual object, which is executed by an electronic device.
  • the electronic device may be a terminal or a server.
  • the method for rendering the virtual object includes: drawing a tracking primitive corresponding to the virtual object to be rendered in the virtual scene, where the drawn tracking primitive covers the movable part of the virtual object to be rendered and is not displayed on the display screen; capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and displaying the virtual object to be rendered on the display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  • Embodiments of the present application use tracking primitives to replace original virtual objects (eg, movable virtual objects) in the scene, so that the 3D rendering program can achieve higher controllability and precision control for motion tracking.
  • the application also provides a method for determining the tracking primitive.
  • the tracking primitive is used to assist the rendering of the virtual object and is executed by an electronic device.
  • the electronic device can be a terminal or a server. The method includes: determining at least one tracking primitive corresponding to the virtual object to be rendered, where the tracking primitive corresponding to the virtual object to be rendered covers the movable part of the virtual object to be rendered; and drawing the tracking primitive corresponding to the virtual object to be rendered in the virtual scene.
  • the drawn tracking primitives are not displayed on the display screen.
  • these tracking primitives can also be displayed on the display screen of the debugger or developer, so that the debugger or developer can make detailed adjustments to the game.
  • Embodiments of the present application use tracking primitives to replace original virtual objects (eg, movable virtual objects) in the scene, so that the 3D rendering program can achieve higher controllability and precision control for motion tracking.
  • some embodiments of the present application also use modern multi-instance rendering technology to reduce the number of hardware rendering submissions, thereby optimizing the performance of 3D rendering and increasing the computing efficiency of 3D rendering.
  • the apparatus 700 includes a scene management module 701 , a primitive creation module 702 and a primitive drawing module 703 .
  • the scene management module 701 is configured to acquire the motion state of the virtual object to be rendered in the virtual scene; determine at least one primitive tag corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; and add at least one primitive tag corresponding to the virtual object to be rendered to the primitive tag list. That is to say, the scene management module 701 can be used to manage movable virtual objects (such as the virtual mechanical dog in FIG. 2 ) in the virtual scene, observe their motion states and record those moving virtual objects.
  • the scene management module 701 can perform the operations shown in FIG. 7B.
  • the scene management module 701 will traverse the virtual object tracking list.
  • An example of a virtual object tracking list is shown in Table 1.
  • In the process of traversing the virtual object tracking list, the scene management module 701 first needs to determine the motion state of the current virtual object to decide whether to track the virtual object. For example, if the virtual object is a static object (for example, land, grass, etc.), or if the presentation of the game screen will not be affected even when the rendering result of the virtual object is not updated, the scene management module 701 will determine that the virtual object does not need to be tracked. For a virtual object that may move, the scene management module 701 will determine whether the virtual object includes a primitive mark. The scene management module 701 will identify the primitive marks on the virtual object and generate a list of primitive marks. An example of a list of primitive marks is shown in Table 2.
  • If the primitive mark of the virtual object is not yet in the primitive mark list, the scene management module 701 may add it; if the primitive mark corresponding to the object is already included in the primitive mark list, the scene management module 701 will update the data of that primitive mark. Optionally, in some embodiments, if the scene management module 701 determines that there is no need to continue tracking a certain primitive, the scene management module 701 may also delete the primitive from the primitive mark list. In this example, the scene management module 701 will execute the above process cyclically until all the virtual objects in the virtual object tracking list are traversed.
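  • A hedged sketch of this traversal follows; TrackedObject and its members are assumptions standing in for the entries of Table 1, and MarkId reuses the illustrative definition sketched earlier:

      #include <unordered_set>
      #include <vector>

      struct TrackedObject {
          bool   isMoving; // current motion state
          bool   hasMark;  // whether a primitive mark is attached
          MarkId mark;     // the attached primitive mark
      };

      // Traverse the virtual object tracking list; record the marks of moving,
      // marked objects in the primitive mark list and skip static objects.
      void updateMarkList(const std::vector<TrackedObject>& trackingList,
                          std::unordered_set<MarkId>& markList) {
          for (const TrackedObject& obj : trackingList) {
              if (!obj.isMoving || !obj.hasMark) continue; // no tracking needed
              markList.insert(obj.mark); // adds new marks; existing ones are kept
          }
      }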
  • the primitive creation module 702 is configured to determine whether the tracking primitive corresponding to the virtual object to be rendered has been created based on at least one primitive flag corresponding to the virtual object to be rendered; if the tracking primitive corresponding to the virtual object to be rendered has been created, update the corresponding tracking primitive in the list of primitives to be drawn; if the tracking primitive corresponding to the virtual object to be rendered has not been created, create a corresponding tracking primitive, and add the created tracking primitive to the list of primitives to be drawn.
  • the primitive creating module 702 can be used to match the moving virtual object in the scene, and generate tracking primitives with corresponding transformation attributes (position, scale and rotation) for the moving virtual object.
  • primitive creation module 702 will perform the operations shown in Figure 7C.
  • the primitive creation module 702 will traverse the list of primitive tags described above.
  • the primitive creation module 702 may determine information corresponding to the current primitive to create a corresponding tracking primitive.
  • the primitive creation module 702 will determine the attributes corresponding to the tracking primitive based on the primitive mark, and create the tracking primitive based on these attributes.
  • the primitive creation module 702 will execute the above process cyclically until all the primitive tags in the primitive tag list are traversed.
  • An example of a list of primitives to be drawn is shown in Table 3.
  • the primitive drawing module 703 is configured to draw a tracking primitive corresponding to the virtual object to be rendered in the virtual scene.
  • the primitive drawing module 703 can be used to collect the generated primitives, record their information and finally submit them to the GPU for rendering.
  • primitive drawing module 703 will perform the operations shown in Figure 7D.
  • the primitive drawing module 703 will traverse the list of primitives to be drawn.
  • the primitive drawing module 703 will classify and record the primitive tags according to the primitive types corresponding to the primitive tags, so as to generate subsequent instanced draw instructions or indirect draw instructions.
  • the primitive drawing module 703 may record a set of drawing transformation parameters for each primitive type, and then submit one draw call per primitive type.
  • the primitive drawing module 703 may organize and generate instanced draw instructions or indirect draw instructions corresponding to the following drawing instruction list.
  • An example of a drawing instruction list is shown in Table 4.
  • the primitive drawing module 703 will execute the above process cyclically until all primitives in the list of primitives to be drawn are traversed.
  • the apparatus 700 uses the tracking primitive to replace the original virtual object in the scene, so that the 3D rendering program can achieve higher controllability and precision control for motion tracking.
  • some embodiments of the present application also use modern multi-instance drawing technology to reduce the number of hardware drawing submissions (Draw Call), thereby optimizing the performance of 3D rendering and increasing the computing efficiency of 3D rendering.
  • a device for capturing the motion trajectory of the virtual object to be rendered, including: a tracking primitive determination module, configured to determine at least one tracking primitive corresponding to the virtual object to be rendered based on the virtual object to be rendered in the virtual scene, where the tracking primitive corresponding to the virtual object to be rendered covers the movable part of the virtual object to be rendered; a tracking primitive drawing module, used to draw the tracking primitive corresponding to the virtual object to be rendered in the virtual scene; and a motion trajectory capturing module, configured to capture, based on the motion trajectory corresponding to the drawn tracking primitive, the motion trajectory of the virtual object to be rendered in the virtual scene.
  • the tracking primitive drawing module is further configured to classify the tracking primitives according to the primitive types of the tracking primitives, so as to obtain a tracking primitive set corresponding to each primitive type; and, for each primitive type corresponding to the tracking primitive set, submit a drawing submission for drawing all the tracking primitives in the tracking primitive set at one time.
  • the tracking primitive drawing module is further configured to determine the attribute change of the tracking primitive corresponding to the virtual object to be rendered based on the interaction data of each virtual object in the virtual scene and the operation data of the controlled virtual object; and, based on the attribute change corresponding to the tracking primitive, draw the tracking primitive corresponding to the virtual object to be rendered.
  • the device for capturing the motion trajectory of the virtual object to be rendered is further configured to determine, based on the drawn tracking primitives, interaction data between the drawn tracking primitives; and to determine, based on the interaction data between the drawn transparent tracking primitives, the motion trajectory corresponding to the drawn tracking primitives.
  • the tracking primitive determination module is also used to obtain the motion state of the virtual object to be rendered in the virtual scene; determine at least one primitive mark corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; determine at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive mark corresponding to the virtual object to be rendered.
  • the tracking primitive determination module includes a primitive creation module, the primitive creation module is used to determine whether the tracking primitive corresponding to the virtual object to be rendered has been created based on at least one primitive flag corresponding to the virtual object to be rendered; if the tracking primitive corresponding to the virtual object to be rendered has been created, update the corresponding tracking primitive in the list of primitives to be drawn; if the tracking primitive corresponding to the virtual object to be rendered has not been created, create a corresponding tracking primitive, and add the created tracking primitive to the list of primitives to be drawn.
  • the device for capturing the motion trajectory of the virtual object to be rendered is further used to display the virtual object to be rendered on the display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  • the present application also provides a device for rendering a virtual object, including: a tracking primitive drawing module, configured to draw a tracking primitive corresponding to the virtual object to be rendered in the virtual scene, where the drawn tracking primitive covers the movable part of the virtual object to be rendered; a motion trajectory capturing module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and a display module, configured to display the virtual object to be rendered on the display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  • the tracking primitive drawing module is further configured to classify the tracking primitives according to the primitive types of the tracking primitives, so as to obtain a tracking primitive set corresponding to each primitive type; and, for each primitive type corresponding to the tracking primitive set, submit a drawing submission for drawing all the tracking primitives in the tracking primitive set at one time.
  • FIG. 8 shows a schematic diagram of an electronic device 2000 according to some embodiments of the present application.
  • an electronic device 2000 may include one or more processors 2010 and one or more memories 2020 .
  • the memory 2020 stores computer-readable code which, when run by the one or more processors 2010, can perform the methods described above.
  • the processor in some embodiments of the present application may be an integrated circuit chip with signal processing capabilities.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logic block diagrams disclosed in some embodiments of the present application may be implemented or performed by such a processor.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc., and may be of an X86 architecture or an ARM architecture.
  • the various example embodiments of the present application may be implemented in hardware or special purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor or other computing device.
  • where aspects of some embodiments of the present application are illustrated or described as block diagrams, flowcharts, or some other graphical representation, it will be understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented, by way of non-limiting example, in hardware, software, firmware, special purpose circuits or logic, general purpose hardware or a controller or other computing device, or some combination thereof.
  • computing device 3000 may include bus 3010, one or more CPUs 3020, read only memory (ROM) 3030, random access memory (RAM) 3040, communication port 3050 connected to a network, input/output components 3060, hard disk 3070, etc.
  • the storage device in the computing device 3000, such as the ROM 3030 or the hard disk 3070, can store various data or files used in the processing and/or communication of the methods provided by the present application, as well as the program instructions executed by the CPU.
  • Computing device 3000 may also include user interface 3080 .
  • the architecture shown in FIG. 9 is only exemplary, and one or more components in the computing device shown in FIG. 9 may be omitted according to actual needs when implementing different devices.
  • FIG. 10 shows a schematic diagram 4000 of a storage medium according to the present application.
  • a computer readable storage medium in some embodiments of the present application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
  • the nonvolatile memory can be read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), or flash memory.
  • Volatile memory can be random access memory (RAM), which acts as external cache memory.
  • by way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous linked dynamic random access memory (SLDRAM), and direct memory bus random access memory (DR RAM); the memories of the methods described herein are intended to include, without being limited to, these and any other suitable types of memory.
  • Some embodiments of the present application also provide a computer program product including computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method according to some embodiments of the present application.
  • each block in the flowchart or block diagram may represent a module, program segment, or a portion of code that includes one or more executable instructions for implementing specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a method for capturing the motion trajectory of a virtual object to be rendered, a method for rendering a virtual object, a method for drawing tracking primitives, an apparatus for drawing tracking primitives, an electronic device, and a computer-readable storage medium. The method for capturing the motion trajectory of a virtual object to be rendered includes: determining, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered (401); drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered (402); and capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive (403).

Description

Method, apparatus and electronic device for capturing the motion trajectory of a virtual object to be rendered
This application claims priority to Chinese Patent Application No. 202210056206.0, entitled "Method for capturing motion trajectory of virtual object to be rendered" and filed with the China National Intellectual Property Administration on January 18, 2022, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of cloud computing technologies, and in particular to a method for capturing the motion trajectory of a virtual object to be rendered, a method for rendering a virtual object, an apparatus for capturing the motion trajectory of a virtual object to be rendered, an apparatus for rendering a virtual object, an electronic device, and a computer-readable storage medium.
Background
Scenarios such as video games and virtual reality usually require building a virtual environment in which 3D virtual objects are displayed. For example, the characters, scenes, and basic terrain in a 3D game are all implemented with three-dimensional models; a player can rotate the viewing angle and observe each virtual object from multiple angles, which increases the freedom and fun of the game.
A 3D virtual character can be displayed only after it has been rendered. Existing 3D model rendering schemes often supplement and draw the virtual objects in the scene directly. However, drawing/tracking the virtual objects in the scene directly suffers from several problems. For example, direct drawing often produces many redundant faces, resulting in an excessive amount of computation. Moreover, since many virtual objects in the same scene need to be drawn, drawing/tracking all the virtual objects in the scene would occupy excessive computing resources.
To this end, various embodiments of this application are proposed to solve the problem of the excessive amount of computation caused by existing drawing techniques, so as to reduce the occupation of computer resources.
Summary
According to various embodiments provided in this application, a method for capturing the motion trajectory of a virtual object to be rendered, a method for rendering a virtual object, a method for determining tracking primitives, an apparatus for determining tracking primitives, an electronic device, and a computer-readable storage medium are provided.
An embodiment of this application provides a method for capturing the motion trajectory of a virtual object to be rendered, performed by an electronic device, including: determining, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered; drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered; and capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
This application further provides a method for rendering a virtual object, performed by an electronic device, including: drawing, in a virtual scene, the tracking primitive corresponding to the virtual object to be rendered, the drawn tracking primitive covering the movable part of the virtual object to be rendered and not being displayed on a display screen; capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and displaying the virtual object to be rendered on the display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
This application further provides a method for drawing tracking primitives, performed by an electronic device, the tracking primitives being used to assist the rendering of virtual objects, the method including: obtaining the motion state of a virtual object to be rendered in a virtual scene; determining at least one primitive flag corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; determining at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered; and drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered.
This application further provides an apparatus for capturing the motion trajectory of a virtual object to be rendered, including: a tracking primitive determination module, configured to determine, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered; a tracking primitive drawing module, configured to draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered; and a motion trajectory capture module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
This application further provides an apparatus for rendering a virtual object, including: a tracking primitive drawing module, configured to draw, in a virtual scene, the tracking primitive corresponding to the virtual object to be rendered, the drawn tracking primitive covering the movable part of the virtual object to be rendered; a motion trajectory capture module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and a display module, configured to display the virtual object to be rendered on the display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
This application further provides an apparatus for drawing tracking primitives, the tracking primitives being used to assist the rendering of virtual objects, the apparatus including: a scene management module, configured to obtain the motion state of a virtual object to be rendered in a virtual scene, and determine at least one primitive flag corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; a primitive creation module, configured to determine at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered; and a primitive drawing module, configured to draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered.
Some embodiments of this application provide an electronic device, including: a processor; and a memory storing computer instructions which, when executed by the processor, implement the above methods.
Some embodiments of this application provide a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the above methods.
Some embodiments of this application provide a computer program product including computer-readable instructions which, when executed by a processor, cause the processor to perform the above methods.
The details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description are merely exemplary embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario according to some embodiments of this application;
FIG. 2 is a schematic diagram of virtual objects to be rendered according to some embodiments of this application;
FIG. 3 is a schematic diagram of the projection planes of a virtual object to be rendered according to some embodiments of this application;
FIG. 4 is a flowchart of a method for capturing the motion trajectory of a virtual object to be rendered according to some embodiments of this application;
FIG. 5A is a schematic diagram of an example of tracking primitives according to various embodiments of this application;
FIG. 5B is another schematic diagram of an example of tracking primitives according to various embodiments of this application;
FIG. 6 is a schematic diagram of a method for capturing the motion trajectory of a virtual object to be rendered according to various embodiments of this application;
FIG. 7A is a schematic diagram of an apparatus for drawing tracking primitives according to various embodiments of this application;
FIG. 7B is a flowchart corresponding to the steps that the scene management module of the apparatus for drawing tracking primitives is configured to perform according to various embodiments of this application;
FIG. 7C is a flowchart corresponding to the steps performed by the primitive creation module of the apparatus for drawing tracking primitives according to various embodiments of this application;
FIG. 7D is a flowchart corresponding to the steps performed by the primitive drawing module of the apparatus for drawing tracking primitives according to various embodiments of this application;
FIG. 8 is a schematic diagram of an electronic device according to some embodiments of this application;
FIG. 9 is a schematic diagram of a computer architecture according to some embodiments of this application;
FIG. 10 is a schematic diagram of a computer-readable storage medium according to some embodiments of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, exemplary embodiments of this application are described in detail below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application, and it should be understood that this application is not limited by the exemplary embodiments described herein.
In this specification and the accompanying drawings, steps and elements that are substantially the same or similar are denoted by the same or similar reference numerals, and repeated descriptions of such steps and elements are omitted. In the description of this application, the terms "first", "second", and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance or order.
The solutions provided in the embodiments of this application relate to cloud computing technologies, which are described through the following embodiments. Scenarios to which the embodiments of this application can be applied are first described with reference to FIG. 1 and FIG. 2. FIG. 1 shows a schematic diagram of an application scenario 100 according to some embodiments of this application, in which a server 110 and a plurality of terminals 120 are schematically shown. The terminals 120 and the server 110 may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
For example, the server 110 may be an independent server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, and content delivery network (CDN) services; this is not specifically limited in the embodiments of this application.
For example, each of the terminals 120 may be a fixed terminal such as a desktop computer, a network-capable mobile terminal such as a smartphone, a tablet computer, a portable computer, a handheld device, a personal digital assistant, a smart wearable device, or an in-vehicle terminal, or any combination thereof; this is not specifically limited in the embodiments of this application.
Optionally, the server 110 and the terminals 120 may be connected through a network to enable the normal execution of a game application. The network may be an Internet of Things (IoT) based on the Internet and/or a telecommunications network, and it may be a wired or wireless network, for example, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a cellular data network, or any other electronic network capable of exchanging information. A networked game application usually relies on the graphics card of the server to synthesize (that is, render) the game frames to be displayed on the terminal; such a game application is also called a cloud game.
For example, the terminal 120 may transmit the data of the user's game operations to the server 110 through a control stream, and the server 110 returns one or more audio frames and video frames to the terminal 120 through a data stream (for example, in a streaming manner). In addition, the terminal 120 may also encode the user operations (that is, input events) and transmit them to the server through the control stream, and the server may determine the corresponding one or more audio frames and video frames according to the received input events.
For example, the server 110 and the terminal 120 may transmit the data stream and the control stream through protocols such as the Real Time Streaming Protocol (RTSP), the Real-time Transport Protocol (RTP), or the Real-time Transport Control Protocol (RTCP), although this application is not limited thereto.
For example, FIG. 2 shows four scenes that may appear in a cloud game in which the user controls a game character to chase a virtual robot dog. In some game video frames the virtual robot dog may be running above the character controlled by the user, while in other frames it may be running below or beside that character, or running toward it head-on. A person skilled in the art should understand that FIG. 2 is merely an example scenario and this application is not limited thereto.
To obtain the running virtual robot dog in each game video frame shown in FIG. 2, the server 110, after obtaining the data of the user's game operations from the terminal 120, encodes and renders the audio/video frames and then transmits the encoded and rendered audio/video frames to the terminal 120 through the data stream. In other examples, when the server 110 receives the user's input events from the terminal 120, the server 110 may process the next audio/video frame of the cloud game or obtain the running result of the cloud game by replaying those input events. The user's input events may include, for example, at least one instruction directing a character in the game to move forward, move backward, shoot, or jump, and the running result of the cloud game may include, for example, at least one of a game victory or a game defeat. After receiving the audio/video frames transmitted by the server 110 through the data stream, the terminal 120 may decode the audio/video frames and then play the decoded audio/video frames.
The server 110 mainly uses its own graphics card resources (for example, GPU computing resources) to encode or render various virtual objects so as to obtain the scene frames shown in FIG. 2. GPU computing resources are the computing resources of a graphics processing unit (GPU). The GPU, also called a display chip, is a microprocessor dedicated to image- and graphics-related computations on personal computers, workstations, game consoles, and some mobile devices. Of course, in some embodiments the server may also use its own CPU computing resources to perform the above rendering operations. The CPU, or central processing unit, serves as the computing and control core of a computer system and is the final execution unit for information processing and program running.
Rendering, for example, generates audio frames or video frames using the model specific to each cloud game. A cloud-game-specific model is a description of three-dimensional objects or virtual scenes strictly defined in a language or data structure; it includes information such as geometry, viewpoint, texture, lighting, and shadow. For example, in a cloud game, when a game character controlled by the user travels to a certain place and triggers a game event (for example, a virtual robot dog suddenly appears), the game's model can describe the scene of the game event (for example, the image of the virtual robot dog, including its form as seen from the viewpoint of the character controlled by the user, the texture of its clothing, the lighting conditions, the sound of the virtual robot dog, the background sound, and so on). Rendering computation converts these descriptions into audio frames and/or video frames, forming the images the user will see and the sound the user will hear at the game client.
In some embodiments, the CPU and the GPU in the server 110 may cooperate to complete a rendering pass. Optionally, the CPU and the GPU work in parallel with a command buffer between them. For example, when the server needs to render a game video frame, the CPU submits commands (for example, draw call commands) to the command buffer to instruct the GPU to render. When submitting a draw call, the CPU needs to process a large amount of data computation, such as data, states, and commands. After the server's GPU receives the draw call command, the corresponding GPU units can successively perform operations such as vertex shading, shape assembly, geometry shading, rasterization, and fragment shading to compute the RGB (red, green, blue) value of each pixel in the video frame, thereby obtaining the picture to be displayed by the game client.
In addition, it can be understood that the rendering apparatus may also be the terminal 120; that is, the methods according to the embodiments of this application may be carried wholly or partially on the terminal 120 so as to use the GPU computing resources and CPU resources of the terminal 120 to render various virtual objects or virtual scenes. The rendering may also be performed by a system composed of a terminal and a server. This application does not limit this.
However, existing 3D model rendering schemes often draw the virtual objects in the scene directly in order to capture their motion trajectories. For example, FIG. 3 is a schematic diagram of the projection planes of the virtual robot dog to be rendered. To track the motion of the virtual robot dog in FIG. 2, its motion has to be projected onto the corresponding motion planes and then tracked and recorded on those planes. For example, for a virtual robot dog running above or below the game character, its motion on the horizontal plane (that is, the plane formed by the X and Y axes) can be tracked and recorded from a bottom/top view; for a virtual robot dog running past the game character or running toward it head-on, its motion on the vertical plane can be tracked and recorded from a front view.
Each of the above projection planes is also called a face in the 3D rendering process, and the processing of each face requires one drawing submission; this process is called a draw call. For example, a draw call is one invocation by the CPU of a drawing interface of the rendering API, such as DrawPrimitive/DrawIndexedPrimitive in DirectX or glDrawElements/glDrawArrays in OpenGL.
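For illustration only, the following C++ sketch (an editorial addition, not part of the application; the types and scene layout are hypothetical) makes this cost pattern concrete: every face of every object costs one draw submission, so the number of draw calls grows with the product of the object count and the face count.

```cpp
#include <cstdio>
#include <vector>

struct Face { int firstVertex; int vertexCount; };
struct TrackedObject { std::vector<Face> faces; };

// Stand-in for one invocation of a rendering-API draw interface such as
// DrawPrimitive/DrawIndexedPrimitive or glDrawElements/glDrawArrays.
void submitDrawCall(const Face& f) {
    std::printf("draw call: first=%d count=%d\n", f.firstVertex, f.vertexCount);
}

int main() {
    // Two objects with two faces and one face respectively: three draw calls.
    std::vector<TrackedObject> scene{{{{0, 300}, {300, 420}}}, {{{720, 90}}}};
    for (const TrackedObject& obj : scene)
        for (const Face& face : obj.faces)
            submitDrawCall(face);  // one submission per face of every object
}
```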
Since 3D scenes usually require virtual objects to be drawn with high precision, more than one face is often needed to draw a single virtual object. Moreover, since multiple virtual objects in the same scene usually await rendering and drawing, a corresponding projection plane has to be used to track and record the trajectory of each virtual object. Therefore, directly drawing and tracking every face required by every virtual object in the scene increases the number of draw calls. Even if the virtual objects are specially culled, the culling algorithm itself occupies considerable computing resources.
Therefore, drawing/tracking the virtual objects themselves with a direct-drawing scheme occupies excessive computing resources and degrades performance; in some high-performance 3D rendering programs it even amounts to performing a massive culling and redundant drawing pass over the scene.
To this end, the embodiments of this application provide a method for capturing the motion trajectory of a virtual object to be rendered, a method for rendering a virtual object, a method for determining tracking primitives, an apparatus for determining tracking primitives, an apparatus for capturing the motion trajectory of a virtual object to be rendered, an apparatus for rendering a virtual object, an electronic device, and a computer-readable storage medium. The embodiments of this application use tracking primitives to replace the original virtual objects in the scene, enabling the 3D rendering program to achieve higher controllability and precision in motion tracking and reducing the amount of computation, thereby reducing the occupation of computer resources. In addition, some embodiments of this application also use modern multi-instance drawing techniques to reduce the number of hardware draw submissions, thereby optimizing 3D rendering performance and improving 3D rendering efficiency.
The methods involved in the various embodiments of this application are further described below with reference to FIG. 4 to FIG. 6. FIG. 4 is a flowchart of a method 400 for capturing the motion trajectory of a virtual object to be rendered according to an embodiment of this application. FIG. 5A and FIG. 5B are schematic diagrams of examples of tracking primitives according to various embodiments of this application. FIG. 6 is a schematic diagram of a method for capturing the motion trajectory of a virtual object to be rendered according to various embodiments of this application.
Optionally, the embodiments corresponding to the method 400 include step 401 to step 403. The computation corresponding to step 401 is also called the initialization phase of the method 400, and the computation corresponding to steps 402 and 403 is also called the drawing phase of the method 400. A person skilled in the art should understand that the embodiments of this application may include other possible steps, and the scope of this application is not limited to the examples in FIG. 4 to FIG. 5B.
For example, before step 401, the method 400 may further include one or more steps related to the creation/determination of the virtual scene and the virtual object to be rendered. For example, the virtual environment may be the virtual environment displayed (or provided) when an application program runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, or a three-dimensional virtual environment, which is not limited in this application; the following embodiments take a three-dimensional virtual environment as an example. A virtual object may be a movable object in the virtual environment. The movable object may be a virtual character, a virtual animal, an anime character, or the like, such as a person, animal, plant, oil drum, wall, or stone displayed in the three-dimensional virtual environment. Optionally, a virtual object is a three-dimensional model created based on animation skeleton technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in it. Through the method 400, the virtual object to be rendered is presented on a display screen, for example, the display screen of the terminal 120. When the virtual object to be rendered moves, the display screen displays the motion animation corresponding to it.
In addition, the various embodiments of this application also involve computer vision (CV) technology. Computer vision is a science that studies how to make machines "see"; more specifically, it refers to machine vision in which cameras and computers replace human eyes to identify, track, and measure targets, with further graphics processing so that the computer produces images more suitable for human observation or for transmission to instruments for inspection. For example, when a player using the terminal 120 controls a controlled virtual object, the virtual scene may present the view "seen" from the perspective of that controlled virtual object. In this case, the controlled virtual object may be bound to the camera model in the 3D rendering process. The camera model is equivalent to the eye of the 3D game world; through the camera, the controlled virtual object can "see" the three-dimensional world in the game and thus see the other virtual objects in the scene with which it may interact. Of course, other perspectives may also be set, with one or more camera models set based on those perspectives, and the corresponding scene pictures presented on the display screen of the terminal 120.
In some embodiments, a method for capturing the motion trajectory of a virtual object to be rendered is provided, performed by an electronic device, which may be a terminal or a server. The method includes:
Step 401: Determine, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered.
In some embodiments, the virtual object to be rendered is a controlled virtual object that is currently moving, for example, a walking virtual character. A tracking primitive is a kind of primitive. Specifically, a primitive is a type of graphics data corresponding to a visible entity on the drawing interface, for example, simple shapes in 3D or 2D space. Primitives can be simply described and drawn in software. Common primitives in 3D space include cubes, spheres, ellipsoids, and cylinders; common primitives in 2D space include squares, circles, ellipses, and rectangles. For example, in OpenGL a primitive is the basic unit that can be rendered, that is, a geometric body that can be drawn directly with glDrawArrays or glDrawElements. In some embodiments, a primitive can be described by a series of vertices (Vertex), each vertex containing multiple attributes (Attribute), such as the primitive's vertex coordinates, normal vector, color, and depth value. This application only illustrates the concept of a primitive by way of example and is not limited thereto.
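As a purely illustrative sketch of the vertex/attribute description above (the struct names are editorial inventions, not defined by this application), a primitive might be represented as follows:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Per-vertex attributes of the kind listed above:
// vertex coordinates, normal vector, color, and depth value.
struct Vertex {
    std::array<float, 3> position;
    std::array<float, 3> normal;
    std::array<float, 4> color;  // RGBA
    float depth;
};

// A primitive is described by a series of vertices, optionally indexed.
struct Primitive {
    std::vector<Vertex> vertices;
    std::vector<std::uint32_t> indices;
};
```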
Referring to FIG. 5A, the perspective corresponding to the camera model is the left front of the virtual character. FIG. 5A schematically shows two tracking primitives corresponding to the virtual character: its head and right forearm. As another example, referring to FIG. 5B, the virtual objects to be rendered include not only the walking virtual character but also the house and trees behind it. The perspective corresponding to the camera model is the left front of the controlled virtual object. FIG. 5B schematically shows the two tracking primitives corresponding to the controlled virtual object: the head and the right forearm of the virtual character. As shown in FIG. 5B, apart from the movable controlled virtual object, no tracking primitives are set for the static virtual objects, so as to reduce the number of drawn faces.
In some embodiments, the tracking primitives corresponding to the virtual object to be rendered follow the motion and posture of the mesh model corresponding to that virtual object. Continuing with FIG. 5A, tracking primitives can cover the complex mesh surfaces of the virtual character's head and right forearm. For example, the head of the virtual character may correspond to a sphere-type tracking primitive, while the right forearm may correspond to a box-type tracking primitive. Further, referring to FIG. 5B, the head may correspond to a sphere-type tracking primitive, shown in FIG. 5B as a shape with black outline and gray fill, while the right forearm may correspond to a box-type tracking primitive, likewise shown with black outline and gray fill. FIG. 5B also shows more than one virtual object, for example, the virtual objects corresponding to the house and the trees. These virtual objects do not move; therefore, to reduce the amount of computation, no tracking primitives need to be set for them, although this application is not limited thereto. In this embodiment, since the tracking primitives follow the mesh model of the virtual object to be rendered, tracking primitives can be set specifically for moving virtual objects, which reduces the amount of computation.
In some embodiments, tracking primitives are automatically generated/created according to primitive flags. These primitive flags may be marked in advance by developers for each virtual object serving as a game character. For example, in some embodiments developers may set primitive flags on virtual objects in advance for collision detection, to avoid conflicts or overlaps between elements of different virtual objects. The tracking primitives of this application can reuse the primitive flags used for collision detection. It is worth noting that although collision-detection primitives corresponding to those flags may also be generated for collision detection, collision-detection primitives usually fit tightly against the mesh surface of the virtual object rather than covering its movable part. That is, in such a case, although this application reuses the primitive flags used for collision detection, the method 400 generates, based on those flags, tracking primitives that are different from the collision-detection primitives. Of course, this application is not limited thereto.
In some embodiments, step 401, as the initialization phase, further includes a scene-management substep and a primitive-creation substep, which determine the tracking primitives based on the primitive flags. For example, referring to FIG. 6, the virtual objects collected in the 3D virtual scene may be diverse, with various geometric shapes and mesh models; FIG. 6 simply represents some complex virtual objects in a 3D scene. Unlike the traditional scheme of drawing the virtual objects in FIG. 6 directly, step 401 covers the mesh surfaces of these complex virtual objects with simple tracking primitives. To keep the tracking primitives as simple as possible and further reduce computation, step 401 may use only three to five types of primitives to express the geometric features and motion trajectories of all complex virtual objects. These primitive types optionally include: sphere, (rounded) box, quasi-cylinder, ellipsoid, triangular pyramid, and so on. Although only the above primitive types are listed here, other kinds of primitives may exist in practice. For example, in a 2D scene, the primitive types optionally include circle, (rounded) rectangle, ellipse, triangle, square, and so on. Whatever primitive types are used, even types not listed above, they fall within the scope covered by this application.
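A hypothetical sketch (an editorial addition, with invented names) of such a closed set of primitive types, where each tracking primitive carries only a type plus transform attributes and is therefore cheap to describe and draw:

```cpp
#include <array>
#include <cstdint>

// The small closed set of 3D primitive types suggested above (three to five kinds).
enum class PrimitiveType : std::uint8_t {
    Sphere, Box, QuasiCylinder, Ellipsoid, TriangularPyramid
};

// One tracking primitive: a unit shape plus the transform that makes it
// cover and follow the movable part of the mesh it stands in for.
struct TrackingPrimitive {
    PrimitiveType type;
    std::array<float, 3> position;
    std::array<float, 3> scale;
    std::array<float, 4> rotation;  // quaternion (x, y, z, w)
};
```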
In some embodiments, the scene-management substep may be performed by the scene management module 701 detailed later. Specifically, in the scene-management substep, the electronic device may obtain the motion state of the virtual object to be rendered in the virtual scene and, based on that motion state, determine at least one primitive flag corresponding to the virtual object to be rendered. In the primitive-creation substep, the electronic device determines, based on the at least one primitive flag, at least one tracking primitive corresponding to the virtual object to be rendered. After the virtual scene is created or determined, the electronic device may also provide the state of the controlled virtual object and the parameters collected from its environment to the motion tracking system of the cloud game, so as to determine which virtual objects in the virtual scene are moving or static. In this embodiment, primitive flags are determined from the motion state, and tracking primitives are determined from the primitive flags, thereby achieving the goal of determining tracking primitives from the motion state. To further reduce computation, only moving virtual objects may be tracked. The at least one primitive flag corresponding to the virtual object to be rendered is then added to the primitive flag list; specifically, the primitive flag list is shown in Table 2, detailed later.
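A minimal sketch of the scene-management substep, assuming a hypothetical object record that carries a motion flag and pre-assigned primitive flags (none of these identifiers come from the application):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical record for one virtual object in the tracking list.
struct SceneObject {
    bool moving;                              // current motion state
    std::vector<std::string> primitiveFlags;  // e.g. "QB", "HB" (cf. Table 2)
};

// Collect the primitive flags of moving objects into the primitive flag list;
// static objects are skipped so they never cost a tracking primitive.
std::unordered_set<std::string> collectPrimitiveFlags(
        const std::vector<SceneObject>& trackingList) {
    std::unordered_set<std::string> flagList;
    for (const SceneObject& obj : trackingList) {
        if (!obj.moving) continue;             // do not track static objects
        for (const std::string& flag : obj.primitiveFlags)
            flagList.insert(flag);             // duplicate flags collapse into one entry
    }
    return flagList;
}
```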
In some embodiments, to further reduce computation, the electronic device may also use larger tracking primitives to cover movable parts, such as the torso, whose motion is simple, changes infrequently, or changes little. To improve the realism of the virtual scene, smaller, refined tracking primitives may be used to cover movable parts, such as the face, arms, and hands, whose motion is complex, changes frequently, or changes greatly.
In some embodiments, the primitive-creation substep may be performed by the primitive creation module 702 detailed later. Specifically, in the primitive-creation substep, the electronic device determines, based on the at least one primitive flag corresponding to the virtual object to be rendered, whether the tracking primitive corresponding to the virtual object to be rendered has been created. If it has been created, the corresponding tracking primitive in the to-be-drawn primitive list is updated; if it has not been created, the corresponding tracking primitive is created and added to the to-be-drawn primitive list. Specifically, the to-be-drawn primitive list is shown in Table 3, detailed later. In this embodiment, tracking primitives are created and updated, which improves the efficiency of tracking primitive management.
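A minimal sketch of the create-or-update logic of the primitive-creation substep, keyed by primitive flag (the record layout is an editorial assumption for illustration):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical entry of the to-be-drawn primitive list (cf. Table 3).
struct TrackingPrimitiveRecord {
    std::string flag;  // the primitive flag this tracking primitive was created from
    int version = 0;   // bumped whenever the primitive's attributes are refreshed
};

// For each flag: update the tracking primitive if it was already created,
// otherwise create it and add it to the to-be-drawn primitive list.
void createOrUpdatePrimitives(
        const std::vector<std::string>& flagList,
        std::unordered_map<std::string, TrackingPrimitiveRecord>& toDrawList) {
    for (const std::string& flag : flagList) {
        auto it = toDrawList.find(flag);
        if (it != toDrawList.end())
            ++it->second.version;  // already created: update in place
        else
            toDrawList.emplace(flag, TrackingPrimitiveRecord{flag, 0});  // create and add
    }
}
```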
Step 402: Draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered.
The drawn tracking primitive is not displayed on the display screen.
In some embodiments, the electronic device may draw the tracking primitives determined above through a draw call. Besides the attributes of the tracking primitives described in detail above, values related to the motion trajectory of each tracking primitive may also be added to the draw call. For example, these trajectory-related values may be determined by the motion tracking system based on the tracking primitives created above. In the virtual scene, the virtual item controlled by the controlled virtual object and the surrounding virtual interactive items change according to the instructions issued by the end user. The CPU of the cloud server 110 determines the trajectory-related values of each tracking primitive according to the various instructions triggered by the user of the terminal 120, and then submits these values to the GPU of the cloud server 110 through a draw call for subsequent rendering.
In some embodiments, drawing the tracking primitive corresponding to the virtual object to be rendered further includes: determining attribute changes of the tracking primitive corresponding to the virtual object to be rendered based on the interaction data of each virtual object in the virtual scene and the operation data of the controlled virtual object; and drawing the tracking primitive corresponding to the virtual object to be rendered based on those attribute changes. In this embodiment, the tracking primitive can follow the motion and posture of the mesh model of the original virtual object, performing appropriate translation, scaling, and rotation, which improves its tracking accuracy. Optionally, although these tracking primitives are not actually displayed on the display screen of the terminal 120, the embodiments of this application can use them to track the motion trajectories of virtual objects. In game debugging/development scenarios, these tracking primitives can also be displayed on the screens of debuggers or developers so that they can fine-tune the game.
In some embodiments, step 402 may also use instancing to draw the tracking primitives. For example, step 402 further includes: classifying the tracking primitives by primitive type to obtain a tracking primitive set corresponding to each primitive type, and then, for each set, submitting a draw submission that draws all the tracking primitives in the set at once. For example, the electronic device may submit such a one-shot draw through the glDrawArraysInstanced or glDrawElementsInstanced command in OpenGL; as another example, the electronic device may submit/call a draw call for drawing all tracking primitives in the set at once through the DrawIndexedPrimitive command in Direct3D. This application does not limit this. In this embodiment, all tracking primitives of the same type can be drawn in a single draw call, which improves computational efficiency. For example, all triangular-pyramid-type primitives are completed in one rendering submission, and the amount of drawing used to capture the motion trajectories of virtual objects is compressed to three to five submissions.
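A hedged, OpenGL-flavored sketch of this one-draw-call-per-type idea (an editorial illustration that assumes vertex array objects and per-instance transform buffers were prepared elsewhere; glDrawArraysInstanced is the call named above):

```cpp
#include <vector>
#include <glad/glad.h>  // assumption: any loader providing OpenGL 3.3+ symbols

// One batch per primitive type: all spheres together, all boxes together, etc.
struct TypeBatch {
    GLuint vao;             // unit mesh plus per-instance transform attributes
    GLsizei vertexCount;    // vertices in the unit mesh
    GLsizei instanceCount;  // tracking primitives of this type in this frame
};

// With three to five primitive types, the whole tracking pass costs
// three to five draw calls regardless of how many primitives exist.
void drawTrackingPrimitives(const std::vector<TypeBatch>& batches) {
    for (const TypeBatch& b : batches) {
        glBindVertexArray(b.vao);
        glDrawArraysInstanced(GL_TRIANGLES, 0, b.vertexCount, b.instanceCount);
    }
    glBindVertexArray(0);
}
```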
作为又一示例,步骤402还可以采用间接绘制(Indirect Draw)的技术来绘制各个追踪图元。利用间接绘制\计算(IndirectDraw\IndirectCompute),可实现渲染时更少的CPU/GPU之间的切换。例如可以在CPU中准备虚拟场景数据,然后一次性提交到GPU中。本申请对此不进行限制。
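A corresponding hedged sketch of indirect drawing (again an editorial OpenGL illustration, here assuming 4.3+): the CPU fills all draw parameters once, uploads them, and a single glMultiDrawArraysIndirect call lets the GPU consume them.

```cpp
#include <vector>
#include <glad/glad.h>  // assumption: loader providing OpenGL 4.3+ symbols

// Layout expected by OpenGL in GL_DRAW_INDIRECT_BUFFER.
struct DrawArraysIndirectCommand {
    GLuint count;          // vertices per instance
    GLuint instanceCount;  // number of instances
    GLuint first;          // first vertex
    GLuint baseInstance;   // first instance
};

// Prepare the scene's draw parameters on the CPU in one go, then hand the
// whole list to the GPU with a single submission.
void submitIndirect(GLuint indirectBuffer,
                    const std::vector<DrawArraysIndirectCommand>& commands) {
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 commands.size() * sizeof(DrawArraysIndirectCommand),
                 commands.data(), GL_DYNAMIC_DRAW);
    glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr,
                              static_cast<GLsizei>(commands.size()), 0);
}
```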
Step 403: Capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
In some embodiments, the electronic device may determine, based on the drawn tracking primitives, the interaction data between the drawn tracking primitives; then determine the motion trajectories corresponding to the drawn tracking primitives based on the interaction data between the drawn transparent tracking primitives; and capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectories corresponding to the drawn tracking primitives.
Specifically, the electronic device may use fast distance-field drawing techniques to further realize the mapping from the motion trajectory of a tracking primitive to the motion trajectory of the object to be rendered. For example, the electronic device may draw the motion trajectories corresponding to the drawn tracking primitives into a target texture or buffer. The electronic device may then use a distance field function to quickly capture the tracking primitives that have been batch-drawn by primitive type, thereby determining the motion trajectory of the object to be rendered. For example, the electronic device may use a signed distance field (SDF) function to realize the mapping from the motion trajectory of a tracking primitive to the motion trajectory of the object to be rendered. An SDF function can be represented by a scalar field function or a volume texture; through the distance from a point in space to the nearest triangle face, it realizes the mapping from tracking primitives to the object to be rendered. For example, the electronic device may use the above distance field function to sample the buffer (a process also called trajectory pickup) and then judge from the sampled value whether the sample point lies within the motion trajectory of a tracking primitive, thereby determining the motion trajectory of the object to be rendered.
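For concreteness, here is a minimal sketch of signed distance functions for two of the primitive types and of the "is the sample inside the trajectory" test. These are standard SDF formulas added editorially, not code from this application:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float len(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Signed distance to a sphere of radius r centered at c: negative inside.
float sdfSphere(const Vec3& p, const Vec3& c, float r) {
    return len({p.x - c.x, p.y - c.y, p.z - c.z}) - r;
}

// Signed distance to an origin-centered box with half-extents h.
float sdfBox(const Vec3& p, const Vec3& h) {
    Vec3 q{std::fabs(p.x) - h.x, std::fabs(p.y) - h.y, std::fabs(p.z) - h.z};
    float outside = len({std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f)});
    float inside = std::min(std::max({q.x, q.y, q.z}), 0.0f);
    return outside + inside;
}

// Trajectory pickup: a sampled point lies within a primitive's trajectory
// when the sampled distance is non-positive.
bool insideTrajectory(float sampledDistance) { return sampledDistance <= 0.0f; }
```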
In this embodiment, the motion trajectories corresponding to the drawn tracking primitives are determined based on the interaction data between the drawn transparent tracking primitives, which improves the accuracy of the motion trajectories.
In some embodiments, as shown in FIG. 4, the method 400 further includes step 404. In step 404, the virtual object to be rendered is displayed on the display screen based on its motion trajectory in the virtual scene. Specifically, the electronic device may render the virtual object to be rendered in the virtual scene based on its motion trajectory and display it on the display screen. In this embodiment, since the virtual object to be rendered is displayed according to its motion trajectory, its display position can be located precisely from that trajectory, which improves the display effect.
The steps have thus been described in detail. In other examples, however, the tracking primitives corresponding to the virtual object to be rendered have already been determined in advance. For example, in such examples the cloud game has been running stably for a while, so the tracking primitives of the virtual objects corresponding to the game characters no longer need to be created; instead, the rendering of the virtual objects can proceed directly from these known tracking primitives. To this end, this application also provides a method for rendering a virtual object, performed by an electronic device, which may be a terminal or a server, the method including: drawing, in a virtual scene, the tracking primitive corresponding to the virtual object to be rendered, the drawn tracking primitive covering the movable part of the virtual object to be rendered and not being displayed on the display screen; capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and displaying the virtual object to be rendered on the display screen based on its motion trajectory in the virtual scene. The embodiments of this application use tracking primitives to replace the original virtual objects in the scene (for example, the movable virtual objects), enabling the 3D rendering program to achieve higher controllability and precision in motion tracking.
As yet another example, in other cases there may be no need to actually render the virtual objects; only the tracking primitives need to be determined for quick game testing. To this end, this application also provides a method for determining tracking primitives, the tracking primitives being used to assist the rendering of virtual objects, performed by an electronic device, which may be a terminal or a server, the method including: obtaining the motion state of the virtual object to be rendered in the virtual scene; determining at least one primitive flag corresponding to the virtual object to be rendered based on its motion state; determining at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag, the tracking primitive covering the movable part of the virtual object to be rendered; and drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered. Optionally, for game players the drawn tracking primitives are not displayed on the display screen; in game debugging/development scenarios they can be displayed on the screens of debuggers or developers so that they can fine-tune the game. The embodiments of this application use tracking primitives to replace the original virtual objects in the scene (for example, the movable virtual objects), enabling the 3D rendering program to achieve higher controllability and precision in motion tracking. The embodiments of this application also optionally set tracking primitives only for parts of the moving virtual objects, avoiding drawing and tracking every face of every virtual object in the scene and thereby further reducing the occupation of computing resources.
In addition, some embodiments of this application also use modern multi-instance drawing techniques to reduce the number of hardware draw submissions, thereby optimizing 3D rendering performance and increasing 3D rendering efficiency.
The apparatus 700 involved in the various embodiments of this application is further described below with reference to FIG. 7A to FIG. 7D.
In some embodiments, the apparatus 700 according to the various embodiments of this application includes a scene management module 701, a primitive creation module 702, and a primitive drawing module 703.
Optionally, the scene management module 701 is configured to obtain the motion state of the virtual object to be rendered in the virtual scene; determine at least one primitive flag corresponding to the virtual object to be rendered based on its motion state; and add the at least one primitive flag to the primitive flag list. That is, the scene management module 701 can be used to manage the movable virtual objects in the virtual scene (for example, the virtual robot dog in FIG. 2), observe their motion states, and record those that are moving.
In some embodiments, the scene management module 701 may perform the operations shown in FIG. 7B. During the game, when the terminal 120 or the server 110 determines that the virtual objects in the scene need to be rendered, the scene management module 701 traverses the virtual object tracking list. An example of the virtual object tracking list is shown in Table 1.
[Table 1, the virtual object tracking list, appears in the source only as an image (PCTCN2022130798-appb-000001) and is not recoverable as text.]
While traversing the virtual object tracking list, the scene management module 701 first determines the motion state of the current virtual object to decide whether to track it. For example, if the virtual object is static (for example, ground or grass), or the presentation of the game picture would not be affected even if its rendering result is not updated, the scene management module 701 determines that it does not need to be tracked. For a virtual object that may move, the scene management module 701 judges whether it carries primitive flags, identifies the primitive flags on the object, and generates the primitive flag list. An example of the primitive flag list is shown in Table 2.
Table 2 - Primitive flag list
Primitive flag
QB
HB
...
For example, if the primitive flag list does not contain a primitive flag of this virtual object, the scene management module 701 may add the object's primitive flag to the list. If the list already contains the primitive flag corresponding to this object, the scene management module 701 updates the data of that flag. Optionally, in some embodiments, if the scene management module 701 determines that a primitive no longer needs to be tracked, it may also delete that primitive from the primitive flag list. In this example, the scene management module 701 performs the above process in a loop until all virtual objects in the virtual object tracking list have been traversed.
Optionally, the primitive creation module 702 is configured to determine, based on the at least one primitive flag corresponding to the virtual object to be rendered, whether the tracking primitive corresponding to the virtual object has been created; if it has been created, update the corresponding tracking primitive in the to-be-drawn primitive list; if it has not been created, create the corresponding tracking primitive and add it to the to-be-drawn primitive list. Thus, the primitive creation module 702 can be used to match the moving virtual objects in the scene and generate for them tracking primitives with the corresponding transform attributes (position, scale, and rotation).
In some embodiments, the primitive creation module 702 performs the operations shown in FIG. 7C. The primitive creation module 702 traverses the above primitive flag list. During the traversal, the primitive creation module 702 may determine the information corresponding to the current primitive in order to create the corresponding tracking primitive. For example, the primitive creation module 702 determines the attributes of the tracking primitive based on the primitive flag and creates the tracking primitive based on those attributes. In this example, the primitive creation module 702 performs the above process in a loop until all primitive flags in the primitive flag list have been traversed. An example of the to-be-drawn primitive list is shown in Table 3.
[Table 3, the to-be-drawn primitive list, appears in the source only as an image (PCTCN2022130798-appb-000002) and is not recoverable as text.]
Optionally, the primitive drawing module 703 is configured to draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered. Thus, the primitive drawing module 703 can be used to collect the generated primitives, record their information, and finally submit them to the GPU for drawing.
In some embodiments, the primitive drawing module 703 performs the operations shown in FIG. 7D. The primitive drawing module 703 traverses the to-be-drawn primitive list. During the traversal, it classifies and records the primitive flags according to their corresponding primitive types, so as to generate the subsequent instanced draw or indirect draw commands. For example, as shown in FIG. 7D, the primitive drawing module 703 may record a set of drawing transform parameters for each primitive type and then submit a draw call for all the primitive types. As another example, the primitive drawing module 703 may organize and generate instanced draw or indirect draw commands corresponding to the draw instruction list below.
[Table 4, the draw instruction list, appears in the source only as an image (PCTCN2022130798-appb-000003) and is not recoverable as text.]
An example of the draw instruction list is shown in Table 4. In this example, the primitive drawing module 703 performs the above process in a loop until all primitive flags in the to-be-drawn primitive list have been traversed.
Thus, the apparatus 700 uses tracking primitives to replace the original virtual objects in the scene, enabling the 3D rendering program to achieve higher controllability and precision in motion tracking. In addition, some embodiments of this application also use modern multi-instance drawing techniques to reduce the number of hardware draw submissions (draw calls), thereby optimizing 3D rendering performance and increasing 3D rendering efficiency.
In addition, according to yet another aspect of this application, an apparatus for capturing the motion trajectory of a virtual object to be rendered is also provided, including: a tracking primitive determination module, configured to determine, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive covering the movable part of the virtual object to be rendered; a tracking primitive drawing module, configured to draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered; and a motion trajectory capture module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
In some embodiments, the tracking primitive drawing module is further configured to classify the tracking primitives by primitive type to obtain a tracking primitive set corresponding to each primitive type; and, for each set, submit a draw submission that draws all the tracking primitives in the set at once.
In some embodiments, the tracking primitive drawing module is further configured to determine attribute changes of the tracking primitive corresponding to the virtual object to be rendered based on the interaction data of each virtual object in the virtual scene and the operation data of the controlled virtual object; and draw the tracking primitive corresponding to the virtual object to be rendered based on those attribute changes.
In some embodiments, the apparatus for capturing the motion trajectory of a virtual object to be rendered is further configured to determine, based on the drawn tracking primitives, the interaction data between the drawn tracking primitives; and determine the motion trajectories corresponding to the drawn tracking primitives based on the interaction data between the drawn transparent tracking primitives.
In some embodiments, the tracking primitive determination module is further configured to obtain the motion state of the virtual object to be rendered in the virtual scene; determine at least one primitive flag corresponding to the virtual object to be rendered based on its motion state; and determine at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag.
In some embodiments, the tracking primitive determination module includes a primitive creation module configured to determine, based on the at least one primitive flag corresponding to the virtual object to be rendered, whether the tracking primitive corresponding to the virtual object has been created; if it has been created, update the corresponding tracking primitive in the to-be-drawn primitive list; if it has not been created, create the corresponding tracking primitive and add it to the to-be-drawn primitive list.
In some embodiments, the apparatus for capturing the motion trajectory of a virtual object to be rendered is further configured to display the virtual object to be rendered on the display screen based on its motion trajectory in the virtual scene.
According to yet another aspect of this application, an apparatus for rendering a virtual object is also provided, including: a tracking primitive drawing module, configured to draw, in a virtual scene, the tracking primitive corresponding to the virtual object to be rendered, the drawn tracking primitive covering the movable part of the virtual object to be rendered; a motion trajectory capture module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and a display module, configured to display the virtual object to be rendered on the display screen based on its motion trajectory in the virtual scene.
In some embodiments, the tracking primitive drawing module is further configured to classify the tracking primitives by primitive type to obtain a tracking primitive set corresponding to each primitive type; and, for each set, submit a draw submission that draws all the tracking primitives in the set at once.
In addition, according to yet another aspect of this application, an electronic device is also provided for implementing the methods according to some embodiments of this application. FIG. 8 shows a schematic diagram of an electronic device 2000 according to some embodiments of this application.
As shown in FIG. 8, the electronic device 2000 may include one or more processors 2010 and one or more memories 2020. The memory 2020 stores computer-readable code which, when run by the one or more processors 2010, can perform the methods described above.
The processor in some embodiments of this application may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the methods, steps, and logic block diagrams disclosed in some embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like, and may be of the x86 or ARM architecture.
In general, the various example embodiments of this application may be implemented in hardware or special-purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software executable by a controller, microprocessor, or other computing device. Where aspects of some embodiments of this application are illustrated or described as block diagrams, flowcharts, or some other graphical representation, it will be understood that the blocks, apparatuses, systems, techniques, or methods described here may be implemented, as non-limiting examples, in hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware or controllers or other computing devices, or some combination thereof.
For example, the methods or apparatuses according to some embodiments of this application may also be implemented by means of the architecture of the computing device 3000 shown in FIG. 9. As shown in FIG. 9, the computing device 3000 may include a bus 3010, one or more CPUs 3020, a read-only memory (ROM) 3030, a random access memory (RAM) 3040, a communication port 3050 connected to a network, input/output components 3060, a hard disk 3070, and so on. A storage device in the computing device 3000, such as the ROM 3030 or the hard disk 3070, can store various data or files used in the processing and/or communication of the methods provided by this application, as well as the program instructions executed by the CPU. The computing device 3000 may also include a user interface 3080. Of course, the architecture shown in FIG. 9 is only exemplary; when implementing different devices, one or more components of the computing device shown in FIG. 9 may be omitted according to actual needs.
According to yet another aspect of this application, a computer-readable storage medium is also provided. FIG. 10 shows a schematic diagram 4000 of a storage medium according to this application.
As shown in FIG. 10, computer-readable instructions 4010 are stored on the computer storage medium 4020. When the computer-readable instructions 4010 are run by a processor, the methods according to some embodiments of this application described with reference to the above figures can be performed. The computer-readable storage medium in some embodiments of this application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous linked dynamic random access memory (SLDRAM), and direct memory bus random access memory (DR RAM). It should be noted that the memories of the methods described herein are intended to include, without being limited to, these and any other suitable types of memory.
Some embodiments of this application also provide a computer program product including computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods according to some embodiments of this application.
It should be noted that the flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of this application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations the functions noted in the blocks may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The example embodiments of this application described in detail above are merely illustrative and not restrictive. A person skilled in the art should understand that various modifications and combinations may be made to these embodiments or their features without departing from the principles and spirit of this application, and such modifications shall fall within the scope of this application.

Claims (18)

  1. A method for capturing the motion trajectory of a virtual object to be rendered, performed by an electronic device, comprising:
    determining, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered;
    drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered; and
    capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
  2. The method according to claim 1, wherein the tracking primitive corresponding to the virtual object to be rendered follows the motion or posture of the mesh model corresponding to the virtual object to be rendered.
  3. The method according to claim 1, wherein the drawing the tracking primitive corresponding to the virtual object to be rendered comprises:
    classifying the tracking primitives according to the primitive types of the tracking primitives to obtain a tracking primitive set corresponding to each primitive type; and
    for the tracking primitive set corresponding to each primitive type, submitting a draw submission for drawing all the tracking primitives in the tracking primitive set at one time.
  4. The method according to claim 1, wherein the drawing the tracking primitive corresponding to the virtual object to be rendered comprises:
    determining attribute changes of the tracking primitive corresponding to the virtual object to be rendered based on interaction data of each virtual object in the virtual scene and operation data of a controlled virtual object; and
    drawing the tracking primitive corresponding to the virtual object to be rendered based on the attribute changes corresponding to the tracking primitive.
  5. The method according to claim 1, further comprising:
    determining, based on the drawn tracking primitives, interaction data between the drawn tracking primitives; and
    determining the motion trajectories corresponding to the drawn tracking primitives based on the interaction data between the drawn transparent tracking primitives.
  6. The method according to claim 1, wherein the determining, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered comprises:
    obtaining the motion state of the virtual object to be rendered in the virtual scene;
    determining at least one primitive flag corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered; and
    determining the at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag corresponding to the virtual object to be rendered.
  7. The method according to claim 6, wherein the determining the at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag corresponding to the virtual object to be rendered comprises:
    determining, based on the at least one primitive flag corresponding to the virtual object to be rendered, whether the tracking primitive corresponding to the virtual object to be rendered has been created;
    if the tracking primitive corresponding to the virtual object to be rendered has been created, updating the corresponding tracking primitive in a to-be-drawn primitive list; and
    if the tracking primitive corresponding to the virtual object to be rendered has not been created, creating the corresponding tracking primitive and adding the created tracking primitive to the to-be-drawn primitive list.
  8. The method according to claim 1, further comprising:
    displaying the virtual object to be rendered on a display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  9. A method for rendering a virtual object, performed by an electronic device, comprising:
    drawing, in a virtual scene, a tracking primitive corresponding to a virtual object to be rendered, the drawn tracking primitive covering the movable part of the virtual object to be rendered;
    capturing the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and
    displaying the virtual object to be rendered on a display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  10. The method according to claim 9, wherein the tracking primitive corresponding to the virtual object to be rendered follows the motion and posture of the mesh model corresponding to the virtual object to be rendered.
  11. The method according to claim 9, wherein the drawing the tracking primitive corresponding to the virtual object to be rendered comprises:
    classifying the tracking primitives according to the primitive types of the tracking primitives to obtain a tracking primitive set corresponding to each primitive type; and
    for the tracking primitive set corresponding to each primitive type, submitting a draw submission for drawing all the tracking primitives in the tracking primitive set at one time.
  12. A method for drawing tracking primitives, performed by an electronic device, wherein the tracking primitives are used to assist the rendering of virtual objects, the method comprising:
    obtaining the motion state of a virtual object to be rendered in a virtual scene;
    determining at least one primitive flag corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered;
    determining at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered; and
    drawing, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered.
  13. An apparatus for capturing the motion trajectory of a virtual object to be rendered, comprising:
    a tracking primitive determination module, configured to determine, based on a virtual object to be rendered in a virtual scene, at least one tracking primitive corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered;
    a tracking primitive drawing module, configured to draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered; and
    a motion trajectory capture module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive.
  14. An apparatus for rendering a virtual object, comprising:
    a tracking primitive drawing module, configured to draw, in a virtual scene, a tracking primitive corresponding to a virtual object to be rendered, the drawn tracking primitive covering the movable part of the virtual object to be rendered;
    a motion trajectory capture module, configured to capture the motion trajectory of the virtual object to be rendered in the virtual scene based on the motion trajectory corresponding to the drawn tracking primitive; and
    a display module, configured to display the virtual object to be rendered on a display screen based on the motion trajectory of the virtual object to be rendered in the virtual scene.
  15. An apparatus for drawing tracking primitives, wherein the tracking primitives are used to assist the rendering of virtual objects, comprising:
    a scene management module, configured to obtain the motion state of a virtual object to be rendered in a virtual scene, and determine at least one primitive flag corresponding to the virtual object to be rendered based on the motion state of the virtual object to be rendered;
    a primitive creation module, configured to determine at least one tracking primitive corresponding to the virtual object to be rendered based on the at least one primitive flag corresponding to the virtual object to be rendered, the tracking primitive corresponding to the virtual object to be rendered covering the movable part of the virtual object to be rendered; and
    a primitive drawing module, configured to draw, in the virtual scene, the tracking primitive corresponding to the virtual object to be rendered.
  16. An electronic device, comprising:
    a processor; and
    a memory storing computer instructions which, when executed by the processor, implement the method according to any one of claims 1 to 12.
  17. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the method according to any one of claims 1 to 12.
  18. A computer program product comprising computer-readable instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 12.
PCT/CN2022/130798 2022-01-18 2022-11-09 Method, apparatus and electronic device for capturing the motion trajectory of a virtual object to be rendered WO2023138170A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/329,897 US20230316541A1 (en) 2022-01-18 2023-06-06 Method and apparatus for capturing motion trajectory of to-be-rendered virtual object and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210056206.0A 2022-01-18 2022-01-18 Method for capturing motion trajectory of virtual object to be rendered
CN202210056206.0 2022-01-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/329,897 Continuation US20230316541A1 (en) 2022-01-18 2023-06-06 Method and apparatus for capturing motion trajectory of to-be-rendered virtual object and electronic device

Publications (1)

Publication Number Publication Date
WO2023138170A1 true WO2023138170A1 (zh) 2023-07-27

Family

ID=81274392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/130798 WO2023138170A1 (zh) 2022-01-18 2022-11-09 Method, apparatus and electronic device for capturing the motion trajectory of a virtual object to be rendered

Country Status (3)

Country Link
US (1) US20230316541A1 (zh)
CN (1) CN114419099A (zh)
WO (1) WO2023138170A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419099A (zh) * 2022-01-18 2022-04-29 腾讯科技(深圳)有限公司 Method for capturing motion trajectory of virtual object to be rendered
CN117495847B (zh) * 2023-12-27 2024-03-19 安徽蔚来智驾科技有限公司 Intersection detection method, readable storage medium and smart device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819684A (zh) * 2010-04-12 2010-09-01 长春理工大学 Spatial acceleration structure for virtual three-dimensional scenes in animated films and method for creating and updating it
JP2017188041A (ja) * 2016-04-08 2017-10-12 株式会社バンダイナムコエンターテインメント Program and computer system
CN108961376A (zh) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 Method and system for drawing three-dimensional scenes in real time in virtual idol live streaming
CN111383313A (zh) * 2020-03-31 2020-07-07 歌尔股份有限公司 Virtual model rendering method, apparatus, device and readable storage medium
CN113064540A (zh) * 2021-03-23 2021-07-02 网易(杭州)网络有限公司 Game-based drawing method, drawing apparatus, electronic device and storage medium
CN114419099A (zh) * 2022-01-18 2022-04-29 腾讯科技(深圳)有限公司 Method for capturing motion trajectory of virtual object to be rendered

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2307352A1 (en) * 1999-06-30 2000-12-30 International Business Machines Corporation System and method for displaying a three-dimensional object using motion vectors to generate object blur
US9129644B2 (en) * 2009-06-23 2015-09-08 Disney Enterprises, Inc. System and method for rendering in accordance with location of virtual objects in real-time
CN103258338A (zh) * 2012-02-16 2013-08-21 克利特股份有限公司 Method and system for driving a simulated virtual environment with real data
US10303323B2 (en) * 2016-05-18 2019-05-28 Meta Company System and method for facilitating user interaction with a three-dimensional virtual environment in response to user input into a control device having a graphical interface
US11030813B2 (en) * 2018-08-30 2021-06-08 Snap Inc. Video clip object tracking
CN110047124A (zh) * 2019-04-23 2019-07-23 北京字节跳动网络技术有限公司 Method and apparatus for rendering video, electronic device and computer-readable storage medium
US10853994B1 (en) * 2019-05-23 2020-12-01 Nvidia Corporation Rendering scenes using a combination of raytracing and rasterization
US11270492B2 (en) * 2019-06-25 2022-03-08 Arm Limited Graphics processing systems
CN111325822B (zh) * 2020-02-18 2022-09-06 腾讯科技(深圳)有限公司 Heat map display method, apparatus, device and readable storage medium
CN111161391B (zh) * 2020-04-02 2020-06-30 南京芯瞳半导体技术有限公司 Method and apparatus for generating a tracking path, and computer storage medium


Also Published As

Publication number Publication date
US20230316541A1 (en) 2023-10-05
CN114419099A (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
WO2023138170A1 (zh) Method, apparatus and electronic device for capturing the motion trajectory of a virtual object to be rendered
KR102472152B1 (ko) Multi-server cloud virtual reality (VR) streaming
JP6959365B2 (ja) Shadow optimization and mesh skin adaptation in a foveated rendering system
CN110178370A (zh) Rendering using ray marching and a virtual view broadcaster for stereoscopic rendering
JP2022528432A (ja) Hybrid rendering
CN113808245B (zh) Enhanced techniques for traversing ray tracing acceleration structures
US11373358B2 (en) Ray tracing hardware acceleration for supporting motion blur and moving/deforming geometry
US20190355170A1 (en) Virtual reality content display method and apparatus
JP7050883B2 (ja) Deferred lighting optimization, foveated adaptation of particles, and simulation models in a foveated rendering system
US8363051B2 (en) Non-real-time enhanced image snapshot in a virtual world system
CN113781624A (zh) Ray tracing hardware acceleration with alternative world-space transforms
US11450057B2 (en) Hardware acceleration for ray tracing primitives that share vertices
US10818078B2 (en) Reconstruction and detection of occluded portions of 3D human body model using depth data from single viewpoint
CN106780707A (zh) Method and apparatus for simulating global illumination in a scene
TWI780995B (zh) Image processing method, device and computer storage medium
CN114225386A (zh) Method and apparatus for encoding scene pictures, electronic device and storage medium
Li et al. Immersive neural graphics primitives
WO2023202254A1 (zh) Image rendering method and apparatus, electronic device, computer-readable storage medium and computer program product
CN112891940B (zh) Image data processing method and apparatus, storage medium and computer device
JP4754385B2 (ja) Program, information recording medium and image generation system
CN113476835B (zh) Picture display method and apparatus
CN115861500B (zh) Method and apparatus for generating 2D model collision bodies
CN114047998B (zh) Object updating method and apparatus
CN116726501A (zh) Method and apparatus for generating projections in a game scene, storage medium and electronic apparatus
CN115845363A (zh) Rendering method, apparatus and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921594

Country of ref document: EP

Kind code of ref document: A1