CN116645461A - Ray tracing adjustment method and device for virtual three-dimensional model and storage medium - Google Patents

Ray tracing adjustment method and device for virtual three-dimensional model and storage medium

Info

Publication number
CN116645461A
CN116645461A (application CN202310376981.9A)
Authority
CN
China
Prior art keywords
virtual
dimensional model
light
target
target area
Prior art date
Legal status
Pending
Application number
CN202310376981.9A
Other languages
Chinese (zh)
Inventor
邢文武 (Xing Wenwu)
陈东洋 (Chen Dongyang)
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority claimed from CN202310376981.9A
Publication of CN116645461A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/06: Ray-tracing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a ray tracing adjustment method and apparatus for a virtual three-dimensional model, and a storage medium. The method comprises the following steps: creating a texture reference object corresponding to the virtual three-dimensional model, wherein the texture reference object is used to lock a texture matched to the contour of the virtual three-dimensional model; determining a target area based on the texture reference object, wherein the target area is a ray tracing area corresponding to a target part of the virtual three-dimensional model; adjusting the light attributes of the target area to obtain a target adjustment result; and performing layered rendering on the virtual three-dimensional model and the target adjustment result. The application solves the technical problems of high production cost, low efficiency, and poor flexibility that arise in the related art when lighting effects for a virtual three-dimensional model are produced by adding lights in post-production.

Description

Ray tracing adjustment method and device for virtual three-dimensional model and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for ray tracing adjustment of a virtual three-dimensional model, and a storage medium.
Background
Computer image rendering frequently involves producing lighting effects for virtual three-dimensional models, such as eye catch lights, contour (rim) lights, and clothing highlights. In the related art, such effects are typically produced by adding lights to the virtual three-dimensional model and then testing and adjusting the lighting repeatedly. This approach has clear drawbacks: when animation rendering is involved, the lighting must be authored with key frames, the results are difficult to adjust in real time, and the workflow suffers from high testing cost, low efficiency, susceptibility to error, and poor flexibility.
In view of the above problems, no effective solution has been proposed at present.
It should be noted that the information disclosed in the background section above is only intended to enhance understanding of the background of the application, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
At least some embodiments of the present application provide a ray tracing adjustment method and apparatus for a virtual three-dimensional model, and a storage medium, so as to at least solve the technical problems of high production cost, low efficiency, and poor flexibility that arise in the related art when lighting effects for a virtual three-dimensional model are produced by adding lights in post-production.
According to one embodiment of the present application, there is provided a ray tracing adjustment method for a virtual three-dimensional model, including: creating a texture reference object corresponding to the virtual three-dimensional model, where the texture reference object is used to lock a texture matched to the contour of the virtual three-dimensional model; determining a target area based on the texture reference object, where the target area is a ray tracing area corresponding to a target part of the virtual three-dimensional model; adjusting the light attributes of the target area to obtain a target adjustment result; and performing layered rendering on the virtual three-dimensional model and the target adjustment result.
According to an embodiment of the present application, there is further provided a ray tracing adjustment apparatus for a virtual three-dimensional model, including: a creating module, configured to create a texture reference object corresponding to the virtual three-dimensional model, where the texture reference object is used to lock a texture matched to the contour of the virtual three-dimensional model; a determining module, configured to determine a target area based on the texture reference object, where the target area is a ray tracing area corresponding to a target part of the virtual three-dimensional model; an adjusting module, configured to adjust the light attributes of the target area to obtain a target adjustment result; and a rendering module, configured to perform layered rendering on the virtual three-dimensional model and the target adjustment result.
According to one embodiment of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured, when run, to perform the ray tracing adjustment method of the virtual three-dimensional model described in any one of the above embodiments.
According to one embodiment of the present application, there is also provided an electronic apparatus, including a memory in which a computer program is stored, and a processor arranged to run the computer program so as to perform the ray tracing adjustment method of the virtual three-dimensional model described in any one of the above embodiments.
In at least some embodiments of the present application, a texture reference object corresponding to a virtual three-dimensional model is created, where the texture reference object is used to lock a texture matched to the contour of the virtual three-dimensional model. A target area is determined based on the texture reference object, where the target area is a ray tracing area corresponding to a target part of the virtual three-dimensional model. The light attributes of the target area are adjusted to obtain a target adjustment result, and the virtual three-dimensional model is rendered in layers together with the target adjustment result. This achieves the aim of adjusting the light attributes of the ray tracing area based on the texture reference object corresponding to the virtual three-dimensional model so as to obtain a real-time rendering result, improves the efficiency and flexibility of producing lighting effects for the virtual three-dimensional model while reducing cost, and thereby solves the technical problems of high production cost, low efficiency, and poor flexibility that arise in the related art when lighting effects are produced by adding lights in post-production.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a mobile terminal of a ray tracing adjustment method of a virtual three-dimensional model according to one embodiment of the application;
FIG. 2 is a flow chart of a ray trace adjustment method for a virtual three-dimensional model according to one embodiment of the application;
FIG. 3 is a schematic diagram of an alternative light attribute adjustment state of a virtual three-dimensional model according to one embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative light attribute adjustment state of a virtual three-dimensional model according to one embodiment of the present application;
FIG. 5 is a schematic diagram of a light attribute adjustment state of yet another alternative virtual three-dimensional model according to one embodiment of the present application;
FIG. 6 is a schematic diagram of ray tracing results of an alternative virtual three-dimensional model according to one embodiment of the application;
FIG. 7 is a block diagram of a ray tracing adjustment apparatus for a virtual three-dimensional model according to one embodiment of the application;
FIG. 8 is a block diagram of an alternative ray tracing tuning apparatus for a virtual three-dimensional model according to one embodiment of the application;
fig. 9 is a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the description of the present application, the term "for example" is used to mean "serving as an example, illustration, or description". Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In describing embodiments of the present application, partial terms or terms that appear are used in the following explanation:
arnold renderer: a renderer employing physically implemented Monte Carlo (Monte Carlo) ray tracing while employing a physical sensitivity model is capable of rapidly providing a smooth animation effect at high resolution. The Arnold renderer can be applied to integrated various special effects tools (e.g., maya, houdini, 3ds Max, katana, etc.).
Computer graphics (Computer Graphics, CG) refers to imagery created with computer software rather than live-action photographs or video. Computer-generated imagery may include three-dimensional animation, two-dimensional graphics, and the like.
Key frames (key frames) refer to frames of particular interest in a photographic or animation authoring process; key frames can be used to control the speed of an animation and to gauge its progress.
In one possible implementation of the present application, the inventors, after careful practice and study of the background approach in CG production in which lights are added to a virtual three-dimensional model to produce lighting effects, found that the technical problems of high production cost, low efficiency, and poor flexibility remain.
The embodiments of the present application provide a ray tracing adjustment method for a virtual three-dimensional model. The technical concept is to adjust the light attributes of a ray tracing area based on a texture reference object corresponding to the virtual three-dimensional model so as to obtain a real-time rendering result. This improves the efficiency and flexibility of producing lighting effects for the virtual three-dimensional model and reduces cost, thereby solving the technical problems of high production cost, low efficiency, and poor flexibility that arise in the related art when lighting effects are produced by adding lights in post-production.
The above-described method embodiments to which the present application relates may be performed in a terminal device (e.g. a mobile terminal, a computer terminal or similar computing means). Taking the mobile terminal as an example, the mobile terminal can be a terminal device such as a smart phone, a tablet computer, a palm computer, a mobile internet device, a game machine and the like.
Fig. 1 is a block diagram of the hardware structure of a mobile terminal for a ray tracing adjustment method of a virtual three-dimensional model according to an embodiment of the present application. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102, a memory 104, a transmission device 106, input/output devices 108, and a display device 110. Taking as an example the case where the ray tracing adjustment method of the virtual three-dimensional model is applied to an electronic game scene through the mobile terminal: the processor 102 invokes and runs the computer program stored in the memory 104 to execute the ray tracing adjustment method, and the rendering result obtained by performing layered rendering on the virtual three-dimensional model and the target adjustment result in the electronic game scene is transmitted through the transmission device 106 to the input/output device 108 and/or the display device 110, so that the rendering result is presented to the player.
As also shown in fig. 1, the processor 102 may include, but is not limited to: a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural-network processing unit (NPU), a tensor processing unit (TPU), an artificial intelligence (AI) processor, and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
In some optional embodiments based on game scenes, the terminal device may further provide a human-machine interaction interface with a touch-sensitive surface, which can sense finger contacts and/or gestures for interacting with a graphical user interface (GUI). The human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, producing electronic documents, playing games, video conferencing, instant messaging, sending and receiving e-mail, making calls, playing digital video, playing digital music, and/or web browsing. Executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
The above-mentioned method embodiments of the present application may also be executed on a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), big data, and artificial intelligence platforms. Taking as an example the case where the ray tracing adjustment method of the virtual three-dimensional model is applied to an electronic game scene through an electronic game server: the electronic game server may perform layered rendering on the virtual three-dimensional model and the target adjustment result in the electronic game scene based on the ray tracing adjustment method, and provide the resulting rendering result to the player (for example, the result may be rendered and displayed on the display screen of the player's terminal, or provided to the player through holographic projection, etc.).
According to one embodiment of the present application, an embodiment of a ray tracing adjustment method for a virtual three-dimensional model is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown.
In this embodiment, a method for adjusting ray tracing of a virtual three-dimensional model running on the mobile terminal is provided, and fig. 2 is a flowchart of a method for adjusting ray tracing of a virtual three-dimensional model according to one embodiment of the present application, as shown in fig. 2, the method includes the following steps:
step S21, creating a texture reference object corresponding to the virtual three-dimensional model, wherein the texture reference object is used for locking textures matched with the outline of the virtual three-dimensional model;
step S22, determining a target area based on the texture reference object, wherein the target area is a ray tracing area corresponding to a target part in the virtual three-dimensional model;
step S23, adjusting the light ray attribute of the target area to obtain a target adjustment result;
and step S24, performing layered rendering on the virtual three-dimensional model and the target adjustment result.
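As a non-authoritative illustration of the data flow through steps S21 to S24 (all class, function, and parameter names below are hypothetical, not taken from the patent), the four steps can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class TextureReferenceObject:
    """Step S21: locks a contour-matched texture to the model."""
    model_name: str
    locked_texture: str

@dataclass
class LightAttributes:
    """Step S23: the adjustable optical parameters of the target area."""
    range_: float = 1.0
    brightness: float = 1.0
    position: tuple = (0.0, 0.0, 0.0)

def create_texture_reference(model_name, texture):
    # Step S21: bind the contour-matched texture to the model.
    return TextureReferenceObject(model_name, texture)

def determine_target_area(ref, target_part):
    # Step S22: the ray tracing region that follows the target part.
    return {"ref": ref.model_name, "part": target_part}

def adjust_light(area, **attrs):
    # Step S23: overwrite selected optical parameters in real time.
    light = LightAttributes()
    for name, value in attrs.items():
        setattr(light, name, value)
    return {"area": area, "light": light}

def layered_render(model_name, adjustment):
    # Step S24: keep the model and the light effect as separate layers,
    # so the adjustment can be re-rendered without re-rendering the model.
    return [("model_layer", model_name), ("light_layer", adjustment)]
```

A possible call sequence: `ref = create_texture_reference("hero", "contour_tex")`, then `area = determine_target_area(ref, "eye")`, then `layered_render("hero", adjust_light(area, brightness=2.0))`. The separation into layers is what permits the real-time adjustment the patent emphasizes.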
The ray tracing adjustment method for a virtual three-dimensional model provided by the embodiments of the present application can be applied to, but is not limited to, application scenes in fields such as virtual reality/augmented reality (e.g., three-dimensional model production for virtual reality), electronic games (e.g., virtual game character model production), film and television promotion (e.g., virtual three-dimensional animated character production), industrial design, and product development (e.g., virtual three-dimensional product model production). The virtual three-dimensional model may be the model whose lighting effects are to be rendered in the application scene. The texture reference object (Texture Reference Object) is associated with the texture resources corresponding to the virtual three-dimensional model; illustratively, the texture reference object locks a texture matched to the contour of the virtual three-dimensional model.
In particular, the game types corresponding to the relevant application scenes in the electronic game field may be: action games (e.g., first- or third-person shooters, two- or three-dimensional fighting games, war action games, sports action games, etc.), adventure games (e.g., exploration games, collection games, puzzle games, etc.), simulation games (e.g., sandbox games, life-simulation games, strategy games, city-building games, business-simulation games, etc.), role-playing games, and casual games (e.g., chess and card games, party games, music rhythm games, trading and nurturing games, etc.).
The ray tracing area is used to perform ray tracing on the target part, so that the texture corresponding to the texture reference object follows the dynamic changes of the target part. The target part of the virtual three-dimensional model may be designated in advance or determined in real time according to the requirements of the application scene. For example, the target part may be the eyes, head, hands, limbs, or torso of a virtual three-dimensional character model. The target part carries visual effects (VFX) attributes.
Adjusting the light attributes of the target area may mean adjusting, in real time and according to the requirements of the application scene, some or all of a plurality of optical parameters corresponding to the light attributes of the target area, so as to obtain the target adjustment result. The target adjustment result may be stored as a data file readable by the renderer. Layered rendering is then performed on the virtual three-dimensional model and the target adjustment result to obtain a rendering result. The target adjustment result is the real-time adjustment result of the light attributes, and the rendering result is the display image or video of the virtual three-dimensional model after its lighting effect has been adjusted in real time.
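The layered rendering described above can be pictured as compositing a light-effect layer over the base render inside the target-area mask. A minimal sketch (the function name and the additive blend mode are illustrative assumptions, not the patent's actual compositing operator):

```python
def composite_layers(base, light_layer, mask, intensity=1.0):
    """Additively composite a light-effect layer over the base render,
    restricted to the target-area mask (mask values in [0, 1]).

    base / light_layer / mask: row-major lists of grayscale pixel values.
    Because the light effect lives in its own layer, changing `intensity`
    re-composites instantly without re-rendering the base model."""
    out = []
    for b, l, m in zip(base, light_layer, mask):
        # Clamp to the displayable range after the additive blend.
        out.append(min(1.0, b + intensity * l * m))
    return out
```

For example, with a mask that is 1 inside the target area and 0 outside, only the masked pixels receive the light contribution, which mirrors how the target adjustment result affects only the ray tracing area.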
Specifically, a texture reference object corresponding to the virtual three-dimensional model is created, where the texture reference object is used to lock a texture matched to the contour of the virtual three-dimensional model, and a target area is determined based on the texture reference object, where the target area is the ray tracing area corresponding to the target part of the virtual three-dimensional model. The method may further include other steps, for which reference may be made to the further description of the embodiments of the present application below; details are not repeated here.
In at least some embodiments of the present application, a texture reference object corresponding to a virtual three-dimensional model is created, where the texture reference object is used to lock a texture matched to the contour of the virtual three-dimensional model. A target area is determined based on the texture reference object, where the target area is a ray tracing area corresponding to a target part of the virtual three-dimensional model. The light attributes of the target area are adjusted to obtain a target adjustment result, and the virtual three-dimensional model is rendered in layers together with the target adjustment result. This achieves the aim of adjusting the light attributes of the ray tracing area based on the texture reference object corresponding to the virtual three-dimensional model so as to obtain a real-time rendering result, improves the efficiency and flexibility of producing lighting effects for the virtual three-dimensional model while reducing cost, and thereby solves the technical problems of high production cost, low efficiency, and poor flexibility that arise in the related art when lighting effects are produced by adding lights in post-production.
The above-described methods of embodiments of the present application are further described below.
Optionally, in step S21, creating the texture reference object corresponding to the virtual three-dimensional model may include the following steps:
step S211, determining a target surface corresponding to the virtual three-dimensional model based on the display view angle of the first image frame;
step S212, creating a texture reference object for the target surface, where the texture reference object is used to lock a two-dimensional texture matched to the contour of the virtual three-dimensional model, and the two-dimensional texture is obtained by projecting a three-dimensional texture matched to the contour of the virtual three-dimensional model onto the target surface.
The method provided by the embodiments of the present application can be run on a client. The first image frame is the current image frame displayed on the graphical user interface of the client, and the display content of the current image frame includes at least the virtual three-dimensional model.
As an exemplary embodiment, a visual surface of the virtual three-dimensional model corresponding to the display view angle is determined as the above-described target surface according to the display view angle of the first image frame.
Taking ray tracing adjustment of a virtual three-dimensional character model in an electronic game scene as an example: after the target surface of the virtual three-dimensional character model is determined, a functional node corresponding to the texture reference object (Texture Reference Object) is created for the target surface using an image editing tool. Further, a preset three-dimensional texture matched to the contour of the virtual three-dimensional model is obtained and projected onto the target surface to obtain a two-dimensional texture; the two-dimensional texture is then locked and associated to the functional node corresponding to the texture reference object, thereby completing the creation (and configuration) of the texture reference object for the target surface.
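The projection of a contour-matched three-dimensional texture onto the two-dimensional target surface can be sketched with a simple pinhole projection (an illustrative stand-in; the patent does not specify the projector actually used by the engine):

```python
def project_to_target_surface(points3d, focal=1.0):
    """Project 3-D texture sample positions onto a 2-D target surface
    using a pinhole camera at the origin looking down +z.

    points3d: list of (x, y, z) tuples with z > 0.
    Returns the corresponding (u, v) coordinates on the image plane,
    i.e. the two-dimensional texture lying on the target surface."""
    projected = []
    for x, y, z in points3d:
        if z <= 0:
            raise ValueError("point behind the projection plane")
        # Perspective divide: farther points land closer to the center.
        projected.append((focal * x / z, focal * y / z))
    return projected
```

The resulting 2-D coordinates are what gets locked to the texture reference object's functional node: because the projection depends only on the model's geometry, the locked texture stays matched to the model contour as seen from the display view angle.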
As an exemplary embodiment, when multiple models correspond to the display view angle, determining the target surface corresponding to the virtual three-dimensional model based on the display view angle of the first image frame includes: determining a plurality of candidate models based on the display view angle of the first image frame; and selecting a target model from the plurality of candidate models according to a demand parameter, where the demand parameter is acquired in real time and is used to determine the model to be rendered for the application scene at the current moment.
Optionally, in step S22, determining the target area based on the texture reference object may include the following steps:
step S221, setting a rendering image layer corresponding to the texture reference object to be in a layered rendering start state;
step S222, in a layered rendering start state, acquiring first coordinate information corresponding to a first image frame through a rendering image layer, wherein the first coordinate information is coordinate information of a texture reference object;
step S223, determining a target area based on the first coordinate information.
The steps of the above optional embodiment may be executed using a renderer. Taking ray tracing adjustment of a virtual three-dimensional character model in an electronic game scene as an example, where the renderer is an Arnold renderer integrated in Maya, setting the rendered image layer corresponding to the texture reference object to the layered rendering start state includes: determining the rendered image layer corresponding to the texture reference object using the Arnold renderer and marking it as the Pref layer; and setting the rendering state of the Pref layer to the layered rendering start state, that is, setting the data state of the Pref layer's AOV (Arbitrary Output Variable) to the available (Active) state, where the AOV is used to determine information such as color, position, light intensity, and scattering corresponding to the model to be rendered. AOVs help to process and adjust complex image layering.
As an exemplary embodiment, in the layered rendering start state, acquiring the first coordinate information corresponding to the first image frame through the rendering image layer includes: determining the coordinate information of the texture reference object in the first image frame based on the AOV of the rendered image layer (Pref layer).
As an exemplary embodiment, determining the target area based on the first coordinate information includes: determining, as the target area, the ray tracing area corresponding to the target part of the virtual three-dimensional model, based on the coordinate information of the texture reference object in the first image frame and the preset target part.
Optionally, in step S222, acquiring the first coordinate information through the rendering image layer may include the following steps:
step S2221, obtaining second coordinate information corresponding to the first image frame, wherein the second coordinate information is the coordinate information of the virtual three-dimensional model;
in step S2222, the first coordinate information is determined by the second coordinate information.
In the layered rendering start state, the coordinate information of the virtual three-dimensional model in the first image frame (i.e., the second coordinate information) is acquired based on the AOV of the rendering image layer (the Pref layer); then, from the coordinate information of the virtual three-dimensional model, the coordinate information of the texture reference object (i.e., the first coordinate information) is determined according to the position, on the virtual three-dimensional model, of the target surface corresponding to the texture reference object.
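Steps S2221 and S2222 can be sketched as a simple lookup, assuming the model's per-surface coordinate table for the first image frame is available (all names here are hypothetical, for illustration only):

```python
# Illustrative sketch: derive the first coordinate information (the texture
# reference object's coordinate) from the second coordinate information
# (the model's per-surface coordinates in the first image frame).
def texture_ref_coords(model_coords, target_surface):
    """model_coords: mapping surface-id -> model-space coordinate
    (the second coordinate information). Returns the coordinate of the
    texture reference object (the first coordinate information)."""
    try:
        return model_coords[target_surface]
    except KeyError:
        raise KeyError(f"surface {target_surface!r} not present in this frame")

# Toy per-surface coordinate table for the first image frame.
frame_coords = {"face": (0.0, 1.6, 0.1), "left_eye": (0.03, 1.62, 0.12)}
print(texture_ref_coords(frame_coords, "left_eye"))
```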
Optionally, in the above ray tracing adjustment method for a virtual three-dimensional model, adjusting the ray attribute of the target area to obtain the target adjustment result includes at least one of the following method steps:
step S251, in response to the light adjustment operation executed on the light range control corresponding to the target area, adjusting the light range of the target area based on the first coordinate information to obtain a light range adjustment result;
step S252, in response to the light adjustment operation executed on the light brightness control corresponding to the target area, adjusting the light brightness of the target area based on the first coordinate information to obtain a light brightness adjustment result;
step S253, in response to the ray adjustment operation executed on the ray position control corresponding to the target area, adjusting the ray tracing position of the target area based on the first coordinate information to obtain a ray position adjustment result;
step S254, in response to the light adjustment operation executed on the light color control corresponding to the target area, adjusting the light color change of the target area based on the first coordinate information to obtain a light color adjustment result;
step S255, in response to the light adjustment operation executed on the light shape control corresponding to the target area, adjusting the light shape change of the target area based on the first coordinate information to obtain a light shape adjustment result;
In step S256, a target adjustment result is determined based on at least one of the light range adjustment result, the light brightness adjustment result, the light position adjustment result, the light color adjustment result, and the light shape adjustment result.
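The branching steps S251 to S256 can be sketched as follows, assuming each control simply overwrites one light attribute of the target area and the target adjustment result combines whichever attributes were touched (the class and attribute names are illustrative, not the patented API):

```python
# Illustrative sketch of steps S251-S256: each control writes one light
# attribute of the target area; S256 merges them into the target result.
class TargetArea:
    def __init__(self):
        # Default catch-light attributes of the target area.
        self.radius = 1.0             # light range       (S251)
        self.brightness = 1.0         # light brightness  (S252)
        self.position = (0.0, 0.0)    # ray tracing pos.  (S253)
        self.color = (255, 255, 255)  # light color, RGB  (S254)
        self.shape = "circle"         # light shape       (S255)

    def apply(self, **adjustments):
        """S256: merge the individual adjustment results into one target result."""
        for attr, value in adjustments.items():
            if not hasattr(self, attr):
                raise ValueError(f"unknown light attribute: {attr}")
            setattr(self, attr, value)
        return {a: getattr(self, a)
                for a in ("radius", "brightness", "position", "color", "shape")}

area = TargetArea()
result = area.apply(radius=2.5, shape="pentagram")
print(result)  # untouched attributes keep their defaults
```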
The ray tracing adjustment method for the virtual three-dimensional model provided by the embodiment of the application can be run on a client, and the display content of the graphical user interface corresponding to the client at least includes the virtual three-dimensional model and a plurality of adjustment controls corresponding to the target area. For example, the client may be a client of digital compositing software (e.g., Nuke).
When the plurality of adjustment controls include the light range control, the client receives the light adjustment operation performed on the light range control and responds as follows: adjusting the size of the light range of the target area based on the first coordinate information to obtain the light range adjustment result. Specifically, a target size is determined according to the light adjustment operation, and the size parameter of the target area is set to the target size.
Fig. 3 is a schematic diagram of an optional light attribute adjustment state of a virtual three-dimensional model according to an embodiment of the present application. As shown in Fig. 3, the method provided by the embodiment of the present application is applied to adjust the light attributes of the catch light of a virtual character model in an electronic game scene; the circular eye area of the virtual character model shown in Fig. 3 is the target area, and when a light adjustment operation performed on the light range control is detected, the size of the circular eye area is adjusted. Fig. 4 is a schematic diagram of another optional light attribute adjustment state of a virtual three-dimensional model according to an embodiment of the present application. When the above light adjustment operation is an operation of adjusting the target area (for example, an operation of inputting a target radius value), the target radius value of the circular area is determined according to that operation, and the radius of the circular eye area of the virtual character model shown in Fig. 3 is set to the target radius value, thereby obtaining the virtual character model with the adjusted light range shown in Fig. 4.
When the plurality of adjustment controls include the light brightness control, the client receives the light adjustment operation performed on the light brightness control and responds as follows: adjusting the light brightness of the target area based on the first coordinate information to obtain the light brightness adjustment result. Specifically, a target brightness value is determined according to the light adjustment operation, and the brightness parameter of the target area is set to the target brightness value.
When the plurality of adjustment controls include the light position control, the client receives the light adjustment operation performed on the light position control and responds as follows: adjusting the ray tracing position of the target area based on the first coordinate information to obtain the light position adjustment result. Specifically, a target coordinate position is determined according to the light adjustment operation, and the target area is moved to the target coordinate position.
Fig. 5 is a schematic diagram of a light attribute adjustment state of another alternative virtual three-dimensional model according to an embodiment of the present application. When a light adjustment operation (e.g., a click operation for selecting a position on the virtual character model) performed on the light position control corresponding to the virtual character model shown in Fig. 3 is detected, a target coordinate position (e.g., a position on the forehead of the virtual character model shown in Fig. 5) is determined according to the light adjustment operation, and the target area shown in Fig. 3 is moved to the target coordinate position, thereby obtaining the virtual character model with the adjusted light position shown in Fig. 5.
When the plurality of adjustment controls include the light color control, the client receives the light adjustment operation performed on the light color control and responds as follows: adjusting the light color change of the target area based on the first coordinate information to obtain the light color adjustment result. Specifically, a target color value (e.g., a target RGB value) is determined according to the light adjustment operation, and the light color corresponding to the target area is set to the target color value.
When the plurality of adjustment controls include the light shape control, the client receives the light adjustment operation performed on the light shape control and responds as follows: adjusting the light shape change of the target area based on the first coordinate information to obtain the light shape adjustment result. Specifically, a target shape identifier is determined according to the light adjustment operation, and the light shape corresponding to the target area is adjusted to the shape corresponding to the target shape identifier. For example, the display shape corresponding to the catch light of the virtual character model is adjusted from a circle to a five-pointed star.
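A minimal sketch of the shape-identifier mapping described above, assuming shape identifiers are plain strings (the identifiers and the lookup table are illustrative, not part of the patented implementation):

```python
# Illustrative sketch: map a shape identifier from the light shape control
# to the catch-light shape applied to the target area.
KNOWN_SHAPES = {"circle", "pentagram", "square"}

def set_light_shape(area, shape_id):
    """Adjust the light shape of the target area to the given identifier."""
    if shape_id not in KNOWN_SHAPES:
        raise ValueError(f"unknown shape identifier: {shape_id}")
    area["shape"] = shape_id
    return area

# Adjust the catch light from a circle to a five-pointed star.
area = set_light_shape({"shape": "circle"}, "pentagram")
print(area["shape"])
```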
Further, the client of the digital compositing software (such as Nuke) composites the adjusted virtual character model (i.e., the target adjustment result) based on at least one of the above real-time light range, light brightness, light position, light color, and light shape adjustment results.
It is easy to understand that the ray tracing adjustment method for the virtual three-dimensional model provided by the embodiment of the application enables real-time adjustment of the light attributes of the target area of the virtual three-dimensional model in the post-production stage of the CG pipeline, thereby improving CG production efficiency and reducing rendering cost.
Optionally, in the ray tracing adjustment method for a virtual three-dimensional model, the virtual three-dimensional model is a virtual character model, the target portion is an eye of the virtual character model, and the target area is an eye tracing area in the rendered image layer.
The virtual three-dimensional model can be a virtual character model in fields such as virtual reality/augmented reality, electronic games, and video promotion. When the target part is determined to be an eye of the virtual character model and the target area is the eye tracing area in the rendered image layer, the ray tracing adjustment method for the virtual three-dimensional model can realize catch-light tracing for the virtual character model. It is easy to understand that, in application scenarios in the above fields, when the virtual character model moves (e.g., running, fighting, dancing) or deforms (e.g., expressions, special effects), the ray tracing adjustment method provided by the embodiment of the application ensures that the catch light of the virtual character model changes along with the model. This update is automatic, requiring neither manual adjustment nor interaction among multiple applications or terminals, so the rendering production cost is low, the efficiency is high, and the flexibility is strong.
Optionally, in the above ray tracing adjustment method for a virtual three-dimensional model, the first image frame is obtained by a preset ray tracing renderer and output to a preset special effect synthesis software, where the preset special effect synthesis software is used for performing a ray adjustment operation on a ray adjustment control corresponding to an eye tracing area, and adjusting a catch light attribute of the eye tracing area to obtain a catch light adjustment result.
Illustratively, the preset ray tracing renderer is an Arnold renderer integrated into Maya, and the preset special effect synthesis software is the Nuke compositing software. The method is used to perform ray tracing adjustment on the eye tracing area of a virtual three-dimensional model in the field of electronic games, thereby generating the catch light adjustment result of the virtual three-dimensional model. With the ray tracing adjustment method for the virtual three-dimensional model, game developers need neither manual adjustment and repeated testing nor interaction among multiple game development tools or terminals, which reduces game development cost and improves development efficiency and flexibility.
Optionally, the ray tracing adjustment method of the virtual three-dimensional model may further include the following execution steps:
Step S261, obtaining model vertexes corresponding to the target areas in the first image frame;
in step S262, in response to the current display screen being switched from the first image frame to the second image frame, the tracking position of the target area is determined using the display position of the model vertex in the second image frame.
The second image frame is the image frame to be switched to. For example, the first image frame is the 1st frame of the image sequence displaying the virtual three-dimensional model, and the second image frame is the 11th frame of that sequence. When the display screen of the client's graphical user interface is switched from the first image frame to the second image frame, the texture of the target area of the virtual three-dimensional model (i.e., the light effect texture) needs to be adjusted synchronously, so that the texture change of the target area follows the change of the display screen, ensuring the dynamic realism and fluency of the virtual three-dimensional model.
Taking the ray tracing adjustment made to the virtual character model shown in Fig. 5 as an example, Fig. 6 is a schematic diagram of a ray tracing result of an alternative virtual three-dimensional model according to an embodiment of the application. The virtual character model shown in Fig. 5 corresponds to the first image frame, and the virtual character model shown in Fig. 6 corresponds to the second image frame. First, the model vertex corresponding to the circular target area (i.e., the circular area on the forehead) of the virtual character model shown in Fig. 5 is obtained, where the model vertex may be the vertex on the virtual character model closest to the center of the circular target area. Then, the display position (in screen space) of the model vertex in the second image frame shown in Fig. 6 is determined from the coordinate position (in model space) of the model vertex, and that display position is taken as the center of the circular target area of the virtual character model in the second image frame. That is, the ray tracing adjustment method for the virtual three-dimensional model provided by the application tracks model vertices of the virtual three-dimensional model, which enables three-dimensional tracking: a model vertex that becomes occluded on the two-dimensional display interface after the model rotates can still be tracked. This solves the problems in the prior art that, with two-dimensional viewport tracking, occluded points cannot be tracked and the tracking effect is poor.
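The three-dimensional tracking described above can be sketched with a minimal pinhole projection, assuming a camera at the origin looking along +z (the projection model and the per-frame translation below are simplifying assumptions, not the patented renderer): the tracked vertex is stored in model space, and its screen position is recomputed for each new frame to serve as the new center of the target area.

```python
# Illustrative sketch of steps S261-S262: track a model vertex across frames
# by re-projecting its model-space position into screen space per frame.
def project_vertex(vertex, focal_length=1.0):
    """Perspective-project a vertex onto the image plane; camera at the
    origin looking along +z."""
    x, y, z = vertex
    if z <= 0:
        raise ValueError("vertex is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

def retarget(area_center_vertex, frame_translation):
    """Apply the second frame's model motion (a pure translation here, for
    simplicity) and re-project to get the new tracking position on screen."""
    tx, ty, tz = frame_translation
    x, y, z = area_center_vertex
    return project_vertex((x + tx, y + ty, z + tz))

# Frame 1: vertex straight ahead at depth 2 projects to the image center.
print(project_vertex((0.0, 0.0, 2.0)))
# Frame 2: the model moved right and closer; the target area follows the vertex.
print(retarget((0.0, 0.0, 2.0), (1.0, 0.0, -1.0)))
```

Because the tracked point is a 3D vertex rather than a 2D viewport feature, its screen position can be recomputed even after the vertex was temporarily occluded.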
Further, the virtual three-dimensional model and the target adjustment result are rendered in layers to obtain a display picture or display video corresponding to the virtual three-dimensional model after real-time ray tracing adjustment. During real-time ray tracing adjustment, the texture reference object of the virtual three-dimensional model ensures that the texture of the target area moves along with the movement or deformation of the virtual three-dimensional object, so that the rendered display picture or display video has stronger physical realism and fluency, and the user experience is better.
According to the ray tracing adjustment method for the virtual three-dimensional model provided by the embodiment of the application, attributes such as the position, size, and brightness of the light effect of the target area of the virtual three-dimensional model are adjusted in real time, which improves CG production efficiency. The method can also track the image sequence of the virtual three-dimensional model with high tracking-point stability, saving the time otherwise spent on interaction among multiple applications or terminals during CG production, which further improves CG production efficiency, reduces cost, and provides high production flexibility.
For CG production of a virtual three-dimensional model involving light effects, the embodiment of the application proposes the following technical concept: a texture reference object is created for a selected surface of the virtual three-dimensional model, and the three-dimensional texture, or the two-dimensional texture obtained by projection, is locked to that selected surface. As a result, when the selected surface of the virtual three-dimensional model moves or deforms, the texture changes synchronously with the surface, ensuring the natural appearance of the virtual three-dimensional model.
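The texture-locking idea can be sketched as follows, assuming each vertex carries both its animated position and a fixed reference (Pref) coordinate used for texture lookup (the data layout and names are illustrative, not the patented implementation):

```python
# Illustrative sketch: the texture is looked up with the vertex's fixed
# reference coordinate rather than its current, deformed position, so the
# pattern stays locked to the surface when the model moves or deforms.
def sample_locked_texture(texture, vertex):
    u, v = vertex["pref"]  # reference coordinate never changes
    return texture[(u, v)]

texture = {(0, 0): "iris", (1, 0): "sclera"}
vertex = {"pos": (0.0, 0.0, 0.0), "pref": (0, 0)}

before = sample_locked_texture(texture, vertex)
vertex["pos"] = (3.2, -1.0, 0.5)  # the model deforms; pref is unchanged
after = sample_locked_texture(texture, vertex)
print(before == after)  # the texture follows the surface
```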
From the description of the above embodiments, it will be clear to those skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, though in many cases the former is preferred. Based on this understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., a magnetic disk or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiment also provides a ray tracing adjustment device for a virtual three-dimensional model, which is used to implement the above embodiments and preferred implementations; details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 7 is a block diagram of a ray tracing adjustment apparatus for a virtual three-dimensional model according to one embodiment of the application, as shown in FIG. 7, the apparatus comprises: a creating module 701, configured to create a texture reference object corresponding to the virtual three-dimensional model, where the texture reference object is used to lock a texture adapted to a contour of the virtual three-dimensional model; a determining module 702, configured to determine a target area based on the texture referencing object, where the target area is a ray tracing area corresponding to a target location in the virtual three-dimensional model; the adjusting module 703 is configured to adjust a light attribute of the target area to obtain a target adjustment result; and the rendering module 704 is used for performing hierarchical rendering on the virtual three-dimensional model and the target adjustment result.
Optionally, the creating module 701 is further configured to: determining a target surface corresponding to the virtual three-dimensional model based on a display view angle of the first image frame; and creating a texture reference object for the target surface, wherein the texture reference object is used for locking a two-dimensional texture matched with the outline of the virtual three-dimensional model, and the two-dimensional texture is obtained by projecting the three-dimensional texture matched with the outline of the virtual three-dimensional model to the target surface.
Optionally, the determining module 702 is further configured to: setting a rendering image layer corresponding to the texture reference object to be in a layered rendering starting state; in a layered rendering start state, first coordinate information corresponding to a first image frame is obtained through a rendering image layer, wherein the first coordinate information is coordinate information of a texture reference object; the target area is determined based on the first coordinate information.
Optionally, the determining module 702 is further configured to: acquiring second coordinate information corresponding to the first image frame, wherein the second coordinate information is the coordinate information of the virtual three-dimensional model; the first coordinate information is determined by the second coordinate information.
Optionally, the above-mentioned adjusting module 703 is further used for at least one of the following: responding to the light ray adjusting operation executed on the light ray range control corresponding to the target area, and adjusting the light ray range of the target area based on the first coordinate information to obtain a light ray range adjusting result; responding to the light adjustment operation executed on the light brightness control corresponding to the target area, and adjusting the light brightness of the target area based on the first coordinate information to obtain a light brightness adjustment result; responding to the light ray adjustment operation executed on the light ray position control corresponding to the target area, and adjusting the light ray tracing position of the target area based on the first coordinate information to obtain a light ray position adjustment result; responding to the light adjustment operation executed on the light color control corresponding to the target area, and adjusting the light color change of the target area based on the first coordinate information to obtain a light color adjustment result; responding to the light ray regulation operation executed on the light ray shape control corresponding to the target area, and regulating the light ray shape change of the target area based on the first coordinate information to obtain a light ray shape regulation result; the target adjustment result is determined based on at least one of the light range adjustment result, the light brightness adjustment result, the light position adjustment result, the light color adjustment result, and the light shape adjustment result.
Optionally, in the ray tracing adjustment apparatus for a virtual three-dimensional model, the virtual three-dimensional model is a virtual character model, the target portion is an eye of the virtual character model, and the target area is an eye tracing area in the rendered image layer.
Optionally, in the ray tracing adjustment device for a virtual three-dimensional model, the first image frame is obtained through a preset ray tracing renderer and output to a preset special effect synthesis software, wherein the preset special effect synthesis software is used for performing a ray adjustment operation on a ray adjustment control corresponding to an eye tracing area, and adjusting a catch light attribute of the eye tracing area to obtain a catch light adjustment result.
Alternatively, fig. 8 is a block diagram of an optional ray tracing adjustment apparatus for a virtual three-dimensional model according to an embodiment of the present application, as shown in fig. 8, which includes, in addition to all the modules shown in fig. 7: a switching module 705, configured to obtain, in a first image frame, a model vertex corresponding to a target area; responsive to the current display screen switching from the first image frame to the second image frame, a tracking position of the target region is determined using a display position of the model vertices in the second image frame.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium that can store a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
S1, creating a texture reference object corresponding to a virtual three-dimensional model, wherein the texture reference object is used for locking textures matched with the outline of the virtual three-dimensional model;
s2, determining a target area based on the texture reference object, wherein the target area is a ray tracing area corresponding to a target part in the virtual three-dimensional model;
s3, adjusting the light ray attribute of the target area to obtain a target adjustment result;
and S4, performing layered rendering on the virtual three-dimensional model and the target adjustment result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a target surface corresponding to the virtual three-dimensional model based on a display view angle of the first image frame; and creating a texture reference object for the target surface, wherein the texture reference object is used for locking a two-dimensional texture matched with the outline of the virtual three-dimensional model, and the two-dimensional texture is obtained by projecting the three-dimensional texture matched with the outline of the virtual three-dimensional model to the target surface.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: setting a rendering image layer corresponding to the texture reference object to be in a layered rendering starting state; in a layered rendering start state, first coordinate information corresponding to a first image frame is obtained through a rendering image layer, wherein the first coordinate information is coordinate information of a texture reference object; the target area is determined based on the first coordinate information.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: acquiring second coordinate information corresponding to the first image frame, wherein the second coordinate information is the coordinate information of the virtual three-dimensional model; the first coordinate information is determined by the second coordinate information.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: responding to the light ray adjusting operation executed on the light ray range control corresponding to the target area, and adjusting the light ray range of the target area based on the first coordinate information to obtain a light ray range adjusting result; responding to the light adjustment operation executed on the light brightness control corresponding to the target area, and adjusting the light brightness of the target area based on the first coordinate information to obtain a light brightness adjustment result; responding to the light ray adjustment operation executed on the light ray position control corresponding to the target area, and adjusting the light ray tracing position of the target area based on the first coordinate information to obtain a light ray position adjustment result; responding to the light adjustment operation executed on the light color control corresponding to the target area, and adjusting the light color change of the target area based on the first coordinate information to obtain a light color adjustment result; responding to the light ray regulation operation executed on the light ray shape control corresponding to the target area, and regulating the light ray shape change of the target area based on the first coordinate information to obtain a light ray shape regulation result; the target adjustment result is determined based on at least one of the light range adjustment result, the light brightness adjustment result, the light position adjustment result, the light color adjustment result, and the light shape adjustment result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: the virtual three-dimensional model is a virtual character model, the target part is an eye of the virtual character model, and the target area is the eye tracing area in the rendered image layer.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: the first image frame is obtained through a preset ray tracing renderer and is output to preset special effect synthesis software, wherein the preset special effect synthesis software is used for responding to the ray adjustment operation executed by the ray adjustment control corresponding to the eye tracing area, adjusting the eye light attribute of the eye tracing area and obtaining an eye light adjustment result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: obtaining model vertexes corresponding to the target areas in a first image frame; responsive to the current display screen switching from the first image frame to the second image frame, a tracking position of the target region is determined using a display position of the model vertices in the second image frame.
In the computer-readable storage medium of the above embodiment, a technical solution for implementing the ray tracing adjustment method for a virtual three-dimensional model is provided: creating a texture reference object corresponding to the virtual three-dimensional model, where the texture reference object is used to lock textures adapted to the contour of the virtual three-dimensional model; determining a target area based on the texture reference object, where the target area is the ray tracing area corresponding to the target part in the virtual three-dimensional model; adjusting the light attributes of the target area to obtain a target adjustment result; and performing layered rendering on the virtual three-dimensional model and the target adjustment result. This achieves the aim of adjusting the light attributes of the ray tracing area based on the texture reference object corresponding to the virtual three-dimensional model to obtain a real-time rendering result, attains the technical effects of improving the efficiency and flexibility of producing the light effect expression of the virtual three-dimensional model while reducing cost, and thereby solves the technical problems of high production cost, low efficiency, and poor flexibility caused by producing the light effect expression of the virtual three-dimensional model by adding light in post-production in the related art.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, a computer-readable storage medium stores a program product capable of implementing the method described above. In some possible implementations, the various aspects of the embodiments of the application may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the application described in the "exemplary methods" section of this embodiment.
A program product for implementing the above-described method according to an embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer. However, the program product of the embodiments of the present application is not limited thereto; in the embodiments of the present application, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer-readable media may be employed by the program product described above. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in this embodiment, the above processor may be configured to execute the following steps by a computer program:
S1, creating a texture reference object corresponding to a virtual three-dimensional model, wherein the texture reference object is used for locking a texture matched with the contour of the virtual three-dimensional model;
S2, determining a target area based on the texture reference object, wherein the target area is a ray tracing area corresponding to a target part in the virtual three-dimensional model;
S3, adjusting the light attribute of the target area to obtain a target adjustment result;
S4, performing layered rendering on the virtual three-dimensional model and the target adjustment result.
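Expressed as a minimal sketch, steps S1 to S4 might be composed as follows. All names and data structures here are illustrative assumptions for clarity; the patent does not prescribe an implementation.

```python
def ray_tracing_adjustment(model_contour, target_part_uv, light_overrides):
    """Illustrative composition of steps S1-S4 (hypothetical names)."""
    # S1: create a texture reference object that "locks" the texture
    # matched to the model's contour (represented here as stored coordinates)
    texture_ref = {"contour": list(model_contour)}
    # S2: determine the target area, i.e. the ray tracing region of the
    # target part, located via the texture reference object
    target_area = {"center": target_part_uv, "ref": texture_ref}
    # S3: adjust the light attributes of the target area; overrides from
    # the adjustment controls replace the defaults
    defaults = {"range": 1.0, "brightness": 1.0, "color": (255, 255, 255)}
    target_result = {**defaults, **light_overrides, "area": target_area}
    # S4: layered rendering -- the model layer below, the light adjustment
    # result composited as a separate layer on top
    return [("model", model_contour), ("light", target_result)]
```

Because the adjustment result lives on its own layer, the light attributes can be re-tuned without re-rendering the model layer, which is the efficiency gain the embodiment claims.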
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a target surface corresponding to the virtual three-dimensional model based on a display view angle of the first image frame; and creating a texture reference object for the target surface, wherein the texture reference object is used for locking a two-dimensional texture matched with the outline of the virtual three-dimensional model, and the two-dimensional texture is obtained by projecting the three-dimensional texture matched with the outline of the virtual three-dimensional model to the target surface.
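The projection of the three-dimensional contour texture onto the target surface determined by the display view angle can be sketched with a simple pinhole camera model. The function below is an assumption for illustration only; the patent does not specify the projection used.

```python
def project_contour_to_surface(contour_3d, focal_length=1.0):
    """Project 3D contour points onto a 2D target surface (the view
    plane), assuming a pinhole camera at the origin looking along +z.
    Each (x, y, z) point maps to (f*x/z, f*y/z) on the plane."""
    return [(focal_length * x / z, focal_length * y / z)
            for (x, y, z) in contour_3d]
```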
Optionally, the above processor may be further configured to perform the following steps by a computer program: setting a rendering image layer corresponding to the texture reference object to a layered rendering start state; in the layered rendering start state, obtaining first coordinate information corresponding to a first image frame through the rendering image layer, wherein the first coordinate information is the coordinate information of the texture reference object; and determining the target area based on the first coordinate information.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring second coordinate information corresponding to the first image frame, wherein the second coordinate information is the coordinate information of the virtual three-dimensional model; the first coordinate information is determined by the second coordinate information.
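The derivation of the first coordinate information (texture reference object) from the second coordinate information (model) can be sketched as below. The patent leaves the mapping unspecified, so a fixed offset stands in for it here purely as an assumption.

```python
def first_coordinate_info(model_coords, ref_offset=(0.0, 0.0)):
    """Derive the texture reference object's coordinates (first
    coordinate information) from the model's coordinates (second
    coordinate information). The actual mapping is unspecified in the
    patent; a constant 2D offset is used for illustration."""
    dx, dy = ref_offset
    return [(x + dx, y + dy) for (x, y) in model_coords]
```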
Optionally, the above processor may be further configured to perform the following steps by a computer program: responding to a light adjustment operation performed on a light range control corresponding to the target area, adjusting the light range of the target area based on the first coordinate information to obtain a light range adjustment result; responding to a light adjustment operation performed on a light brightness control corresponding to the target area, adjusting the light brightness of the target area based on the first coordinate information to obtain a light brightness adjustment result; responding to a light adjustment operation performed on a light position control corresponding to the target area, adjusting the ray tracing position of the target area based on the first coordinate information to obtain a light position adjustment result; responding to a light adjustment operation performed on a light color control corresponding to the target area, adjusting the light color change of the target area based on the first coordinate information to obtain a light color adjustment result; responding to a light adjustment operation performed on a light shape control corresponding to the target area, adjusting the light shape change of the target area based on the first coordinate information to obtain a light shape adjustment result; and determining the target adjustment result based on at least one of the light range adjustment result, the light brightness adjustment result, the light position adjustment result, the light color adjustment result, and the light shape adjustment result.
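Folding the per-control operations into a single target adjustment result can be sketched as follows; the attribute names and the override-in-order semantics are illustrative assumptions, since the patent only lists which attributes are adjustable.

```python
VALID_ATTRS = ("range", "brightness", "position", "color", "shape")

def apply_light_adjustments(base_attrs, operations):
    """Fold a sequence of (attribute, value) operations from the light
    adjustment controls into the target adjustment result. A later
    operation on the same attribute overrides an earlier one; the base
    attributes are left unmodified."""
    result = dict(base_attrs)
    for attr, value in operations:
        if attr not in VALID_ATTRS:
            raise ValueError(f"unknown light attribute: {attr}")
        result[attr] = value
    return result
```

Keeping the base attributes immutable means the operator can discard or replay the adjustment sequence, matching the interactive tuning the embodiment describes.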
Optionally, in the above steps, the virtual three-dimensional model is a virtual character model, the target part is an eye of the virtual character model, and the target area is an eye tracking area in the rendered image layer.
Optionally, the above processor may be further configured to perform the following steps by a computer program: obtaining the first image frame through a preset ray tracing renderer and outputting the first image frame to preset special effect synthesis software, wherein the preset special effect synthesis software is configured to adjust a catch light attribute of the eye tracking area in response to a light adjustment operation performed on a light adjustment control corresponding to the eye tracking area, so as to obtain a catch light adjustment result.
Optionally, the above processor may be further configured to perform the following steps by a computer program: obtaining model vertices corresponding to the target area in a first image frame; and, in response to the current display screen switching from the first image frame to a second image frame, determining a tracking position of the target area using the display positions of the model vertices in the second image frame.
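Tracking the target area across a frame switch can be sketched by taking the centroid of the target vertices' display positions in the new frame. The centroid choice is an assumption; the patent says only that the display positions of the model vertices determine the tracking position.

```python
def tracking_position(frame2_vertex_positions, target_vertex_ids):
    """After the display switches to the second image frame, take the
    tracking position of the target area to be the centroid of its
    model vertices' display positions in that frame.

    frame2_vertex_positions: mapping vertex id -> (x, y) in frame 2.
    target_vertex_ids: ids of the vertices bound to the target area."""
    pts = [frame2_vertex_positions[v] for v in target_vertex_ids]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Because the vertices deform with the model, the light adjustment layer follows the target part (e.g. the eye) automatically from frame to frame.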
In the electronic device of the above embodiment, a technical solution for a ray tracing adjustment method for a virtual three-dimensional model is provided. A texture reference object corresponding to the virtual three-dimensional model is created, the texture reference object being used for locking a texture matched with the contour of the virtual three-dimensional model. A target area is determined based on the texture reference object, the target area being a ray tracing area corresponding to a target part of the virtual three-dimensional model. The light attribute of the target area is adjusted to obtain a target adjustment result, and the virtual three-dimensional model and the target adjustment result are rendered in layers. In this way, the light attribute of the ray tracing area is adjusted based on the texture reference object corresponding to the virtual three-dimensional model to obtain a real-time rendering result, which improves the efficiency and flexibility of producing the lighting effect of the virtual three-dimensional model and reduces its cost, thereby solving the technical problems of high production cost, low efficiency, and poor flexibility caused by adding light in post-production to create the lighting effect of the virtual three-dimensional model in the related art.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the application. As shown in fig. 9, the electronic device 900 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general-purpose computing device. Components of the electronic device 900 may include, but are not limited to: at least one processor 910, at least one memory 920, a bus 930 that connects the different system components (including the memory 920 and the processor 910), and a display 940.
The memory 920 stores program code that can be executed by the processor 910, so that the processor 910 performs the steps according to the various exemplary embodiments of the present application described in the method sections above.
The memory 920 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 9201 and/or cache memory 9202, and may further include Read Only Memory (ROM) 9203, and may also include nonvolatile memory such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, the memory 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment. The memory 920 may further include memory located remotely from the processor 910, which may be connected to the electronic device 900 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The bus 930 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The display 940 may be, for example, a touch screen type liquid crystal display (Liquid Crystal Display, LCD) that may enable a user to interact with a user interface of the electronic device 900.
Optionally, the electronic device 900 may also communicate with one or more external devices 1000 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the internet, via a network adapter 960. As shown in fig. 9, the network adapter 960 communicates with the other modules of the electronic device 900 over the bus 930. It should be appreciated that, although not shown in fig. 9, other hardware and/or software modules may be used in connection with the electronic device 900, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, redundant array of independent disks (RAID) systems, tape drives, data backup storage systems, and the like.
The electronic device 900 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those skilled in the art that the configuration shown in fig. 9 is merely illustrative and is not intended to limit the configuration of the electronic device. For example, the electronic device 900 may also include more or fewer components than shown in fig. 9, or have a different configuration than shown in fig. 9. The memory 920 may be used to store a computer program and corresponding data, such as a computer program and corresponding data corresponding to a ray tracing adjustment method for a virtual three-dimensional model in an embodiment of the present application. The processor 910 executes a computer program stored in the memory 920 to perform various functional applications and data processing, i.e., to implement the ray tracing adjustment method of the virtual three-dimensional model described above.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and there may be another division manner in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several modifications and refinements without departing from the principles of the present application, and such modifications and refinements shall also fall within the protection scope of the present application.

Claims (11)

1. A method for ray tracing adjustment of a virtual three-dimensional model, the method comprising:
creating a texture reference object corresponding to a virtual three-dimensional model, wherein the texture reference object is used for locking textures matched with the outline of the virtual three-dimensional model;
determining a target area based on the texture reference object, wherein the target area is a ray tracing area corresponding to a target part in the virtual three-dimensional model;
adjusting the light ray attribute of the target area to obtain a target adjustment result;
and carrying out layered rendering on the virtual three-dimensional model and the target adjustment result.
2. The method of claim 1, wherein creating the texture referencing object corresponding to the virtual three-dimensional model comprises:
determining a target surface corresponding to the virtual three-dimensional model based on a display view angle of the first image frame;
And creating the texture reference object for the target surface, wherein the texture reference object is used for locking a two-dimensional texture matched with the outline of the virtual three-dimensional model, and the two-dimensional texture is obtained by projecting the three-dimensional texture matched with the outline of the virtual three-dimensional model to the target surface.
3. The method of claim 2, wherein determining the target region based on the texture referencing object comprises:
setting a rendering image layer corresponding to the texture reference object to be in a layered rendering starting state;
in the layered rendering start state, acquiring first coordinate information corresponding to the first image frame through the rendering image layer, wherein the first coordinate information is the coordinate information of the texture reference object;
the target area is determined based on the first coordinate information.
4. The method of claim 3, wherein obtaining the first coordinate information by the rendered image layer comprises:
acquiring second coordinate information corresponding to the first image frame, wherein the second coordinate information is the coordinate information of the virtual three-dimensional model;
and determining the first coordinate information through the second coordinate information.
5. The method of claim 3, wherein adjusting the light ray attribute of the target area to obtain the target adjustment result comprises at least one of:
responding to the light ray adjusting operation executed on the light ray range control corresponding to the target area, and adjusting the light ray range of the target area based on the first coordinate information to obtain a light ray range adjusting result;
responding to the light adjustment operation executed on the light brightness control corresponding to the target area, and adjusting the light brightness of the target area based on the first coordinate information to obtain a light brightness adjustment result;
responding to the light ray adjustment operation executed on the light ray position control corresponding to the target area, and adjusting the light ray tracing position of the target area based on the first coordinate information to obtain a light ray position adjustment result;
responding to the light adjustment operation executed on the light color control corresponding to the target area, and adjusting the light color change of the target area based on the first coordinate information to obtain a light color adjustment result;
responding to the light ray adjustment operation executed on the light ray shape control corresponding to the target area, and adjusting the light ray shape change of the target area based on the first coordinate information to obtain a light ray shape adjustment result;
And determining the target adjustment result based on at least one of the light range adjustment result, the light brightness adjustment result, the light position adjustment result, the light color adjustment result, and the light shape adjustment result.
6. The method of claim 3, wherein the virtual three-dimensional model is a virtual character model, the target part is an eye of the virtual character model, and the target area is an eye tracking area in the rendered image layer.
7. The method of claim 6, wherein the first image frame is obtained by a preset ray tracing renderer and output to a preset special effect synthesis software, wherein the preset special effect synthesis software is configured to adjust a catch light attribute of the eye tracking area in response to a ray adjustment operation performed on a ray adjustment control corresponding to the eye tracking area, so as to obtain a catch light adjustment result.
8. The method according to claim 1, wherein the method further comprises:
obtaining model vertices corresponding to the target area in a first image frame;
responsive to a current display screen switching from the first image frame to a second image frame, a tracking position of the target region is determined using a display position of the model vertices in the second image frame.
9. A ray tracing adjustment apparatus for a virtual three-dimensional model, the apparatus comprising:
the creating module is used for creating a texture reference object corresponding to the virtual three-dimensional model, wherein the texture reference object is used for locking textures matched with the outline of the virtual three-dimensional model;
the determining module is used for determining a target area based on the texture reference object, wherein the target area is a ray tracing area corresponding to a target part in the virtual three-dimensional model;
the adjusting module is used for adjusting the light ray attribute of the target area to obtain a target adjusting result;
and the rendering module is used for conducting layered rendering on the virtual three-dimensional model and the target adjusting result.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program, wherein the computer program is arranged, when being executed by a processor, to perform the ray tracing adjustment method of a virtual three-dimensional model as claimed in any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the ray tracing tuning method of a virtual three-dimensional model as claimed in any one of claims 1 to 8.
CN202310376981.9A 2023-04-10 2023-04-10 Ray tracing adjustment method and device for virtual three-dimensional model and storage medium Pending CN116645461A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310376981.9A CN116645461A (en) 2023-04-10 2023-04-10 Ray tracing adjustment method and device for virtual three-dimensional model and storage medium


Publications (1)

Publication Number Publication Date
CN116645461A true CN116645461A (en) 2023-08-25

Family

ID=87638839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310376981.9A Pending CN116645461A (en) 2023-04-10 2023-04-10 Ray tracing adjustment method and device for virtual three-dimensional model and storage medium

Country Status (1)

Country Link
CN (1) CN116645461A (en)

Similar Documents

Publication Publication Date Title
US8411092B2 (en) 2D imposters for simplifying processing of plural animation objects in computer graphics generation
JP7008733B2 (en) Shadow generation for inserted image content
Bénard et al. State‐of‐the‐art report on temporal coherence for stylized animations
JP7050883B2 (en) Foveal rendering optimization, delayed lighting optimization, particle foveal adaptation, and simulation model
US7019742B2 (en) Dynamic 2D imposters of 3D graphic objects
JP2006528395A (en) System and method for providing a real-time three-dimensional interactive environment
KR20230110364A (en) Detection of false virtual objects
WO2022051460A1 (en) 3d asset generation from 2d images
US20230177755A1 (en) Predicting facial expressions using character motion states
CN113826147A (en) Improvements in animated characters
US11645805B2 (en) Animated faces using texture manipulation
US10460497B1 (en) Generating content using a virtual environment
CN115082607A (en) Virtual character hair rendering method and device, electronic equipment and storage medium
Lang et al. Massively multiplayer online worlds as a platform for augmented reality experiences
CN116958344A (en) Animation generation method and device for virtual image, computer equipment and storage medium
CN108986228B (en) Method and device for displaying interface in virtual reality
CN113313796B (en) Scene generation method, device, computer equipment and storage medium
CN116645461A (en) Ray tracing adjustment method and device for virtual three-dimensional model and storage medium
US11983819B2 (en) Methods and systems for deforming a 3D body model based on a 2D image of an adorned subject
Lan Simulation of Animation Character High Precision Design Model Based on 3D Image
CN118247404A (en) Model rendering method and device, storage medium and electronic device
CN117097919A (en) Virtual character rendering method, apparatus, device, storage medium, and program product
Baroya Real-Time Body Tracking and Projection Mapping n the Interactive Arts
Walia et al. A framework for interactive 3D rendering on mobile devices
CN116271814A (en) Scene picture processing method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination