WO2023197860A1 - Highlight rendering method and apparatus, medium, and electronic device - Google Patents


Publication number
WO2023197860A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel point
texture
highlight
coordinate value
pixel
Application number
PCT/CN2023/084540
Other languages
French (fr)
Chinese (zh)
Inventor
罗汉铭
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Publication of WO2023197860A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Definitions

  • the present disclosure relates to the field of image processing, and specifically, to a highlight rendering method, device, medium, electronic equipment, computer program product, and program.
  • the present disclosure provides a highlight rendering method, which includes:
  • acquiring a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;
  • determining a light source vector in a target coordinate space corresponding to a hair model according to a light source direction in world space;
  • determining, for each pixel to be rendered in the hair model, a line of sight vector corresponding to the pixel in the target coordinate space according to a line of sight direction in world space;
  • determining, for each pixel, a texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image based on the light source vector and the line of sight vector corresponding to the pixel;
  • sampling from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain a rendered highlight rendering image.
  • a highlight rendering device which includes:
  • An acquisition module configured to acquire a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image
  • the first processing module is used to determine the light source vector in the target coordinate space corresponding to the hair model based on the light source direction in the world space;
  • the second processing module is used to determine, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in the world space;
  • a determination module configured to determine, for each pixel point, the texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image according to the light source vector and the line of sight vector corresponding to the pixel point;
  • a rendering module configured to sample from the highlight image according to the texture offset corresponding to each pixel point to render the pixel point to obtain a rendered highlight rendering image.
  • the present disclosure provides a computer-readable medium having a computer program stored thereon, which implements the steps of the method described in the first aspect when executed by a processing device.
  • an electronic device including:
  • a processing device configured to execute the computer program in the storage device to implement the steps of the method described in the first aspect.
  • the present disclosure provides a computer program product, including a computer program that implements the steps of the method described in the first aspect when executed by a processing device.
  • the present disclosure provides a computer program that, when executed by a processing device, implements the steps of the method described in the first aspect.
  • Figure 1 is a comparison diagram of highlight images of realistic hair and cartoon hair
  • Figure 2 is a flow chart of a highlight rendering method based on an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of a highlight image provided based on an embodiment of the present disclosure
  • Figures 4 and 5 are schematic diagrams of highlight rendering images provided based on embodiments of the present disclosure.
  • Figure 6 is a block diagram of a highlight rendering device provided based on an embodiment of the present disclosure.
  • FIG. 7 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
  • the term “include” and its variations are open-ended, i.e., “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • Figure 2 shows a flow chart of a highlight rendering method based on an embodiment of the present disclosure.
  • the method may include:
  • in step 11, a highlight image to be rendered is obtained, wherein a highlight shape to be rendered is drawn in the highlight image.
  • the correspondence between the pixels in the hair model and the sampling positions in the highlight image is usually implemented based on UV coordinates.
  • the UV coordinates can be expressed as percentages of the highlight image's width and height.
  • the horizontal direction can be recorded as U
  • the vertical direction can be recorded as V, as shown in the highlight image in Figure 3.
  • the white icon part is the highlight shape to be rendered.
  • in step 12, the light source vector in the target coordinate space corresponding to the hair model is determined according to the light source direction in the world space.
  • in step 13, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space is determined according to the line of sight direction in the world space.
  • rendering can be performed based on the rendering models commonly used in this field, such as Unity Shader for highlight rendering.
  • parameters such as the light source direction and line of sight direction in the world space can be obtained.
  • the above parameters can be converted from the world space to the model space corresponding to the hair model, that is, the target coordinate space, so as to express the impact of changes in the light source direction and the line of sight direction on the highlight position.
  • the execution sequence shown in Figure 2 is an exemplary illustration. Step 12 and step 13 can be executed one after another or in parallel, and this disclosure does not limit this.
  • in step 14, for each pixel, the texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image is determined based on the light source vector and the line of sight vector corresponding to the pixel.
  • when the light source direction or the line of sight direction changes, the position of the highlight usually moves up, down, left, and right accordingly, making it difficult to ensure the smoothness of the highlight changes.
  • in this embodiment, in order to keep the highlight shape fixed on the hair when the light source direction and the line of sight direction change, the texture corresponding to the highlight shape is position-shifted only in the vertical direction, preserving the highlight shape while achieving anisotropic change effects.
  • in step 15, sampling is performed from the highlight image according to the texture offset corresponding to each pixel point to render the pixel points and obtain a rendered highlight rendering image.
  • the offset effect on the position of the highlight shape in the vertical direction can be determined based on changes in the viewing direction and the light source direction, so that the offset position can be sampled from the highlight image based on the texture offset. That is, the color value at the offset position is sampled from the highlight image for rendering, so that the rendered color matches the current light source direction and line of sight direction.
  • the light source direction and line of sight direction in the world space can be converted to the target coordinate space corresponding to the hair model, so that the influence of the line of sight direction and the light source direction on the highlight position can be determined in the same model space.
  • only the offset of the highlight position in the vertical direction corresponding to the horizontal direction is considered, so that the texture sampling position of the same pixel in the hair model in the highlight image can be changed according to the line of sight direction and the light source direction. Changing the texture sampling position changes the rendering color value obtained by sampling the same pixel, so that, while the highlight shape is preserved, the highlight position can be offset to achieve anisotropic highlight rendering effects and simplify the highlight rendering process. This not only improves the efficiency of highlight rendering, but also improves the accuracy of highlight rendering in animation and the user's viewing experience of the rendered animation.
  • an exemplary implementation of determining the light source vector in the target coordinate space corresponding to the hair model based on the light source direction in the world space is as follows, including:
  • lightDir_O = mul((float3x3)unity_WorldToObject, LightDirection.xyz)
  • mul(M,v) is used to represent the matrix multiplication of the calculation matrix M and the vector v for matrix conversion.
  • unity_WorldToObject is used to represent the matrix converted from world space to object space (ie, target coordinate space).
  • LightDirection.xyz is used to represent the coordinates of the light source direction in world space, and lightDir_O is used to represent the light source vector in the target coordinate space.
  • the calculation of the above mul() function and unity_WorldToObject is a conventional calculation method in this field, and will not be described again here.
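  • As a pure illustration, the same transform can be mirrored on the CPU in Python (the patent's snippet is Unity shader code; the matrix below is a made-up example, a 90-degree rotation about the z axis):

```python
import numpy as np

def to_object_space(world_to_object, light_dir_world):
    """Equivalent of mul((float3x3)unity_WorldToObject, LightDirection.xyz):
    apply only the upper-left 3x3 (rotation/scale) part of the 4x4 matrix."""
    return world_to_object[:3, :3] @ light_dir_world

# Hypothetical world-to-object matrix: 90-degree rotation about z.
world_to_object = np.array([
    [0.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
])
light_dir_world = np.array([1.0, 0.0, 0.0])
lightDir_O = to_object_space(world_to_object, light_dir_world)
print(lightDir_O)  # [0. 1. 0.]
```

  The (float3x3) cast in the shader drops the translation column, which is why only the 3x3 block is applied to the direction vector.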
  • This step can include:
  • the camera position in the world space can be obtained through the built-in variable _WorldSpaceCameraPos.
  • the camera position can be converted to the target coordinate space based on the transformation matrix.
  • the formula is as follows: mul(unity_WorldToObject,float4(_WorldSpaceCameraPos.xyz,1))
  • unity_WorldToObject represents the transformation matrix corresponding to the conversion from world space to target coordinate space
  • _WorldSpaceCameraPos.xyz represents the coordinates of the camera position in world space.
  • the vector obtained by subtracting the position coordinates of the pixel point from the camera position coordinates is normalized and the vector is used as the line of sight vector corresponding to the pixel point.
  • vector subtraction can be used to determine the direction from the camera position to the pixel point, that is, the line of sight direction.
  • the standardization process can be a normalization process.
  • v.vertex.xyz represents the position coordinates of the vertex corresponding to the pixel in the hair model, and normalize is used to normalize the vector.
  • viewDir_O is used to represent the line of sight vector corresponding to the pixel.
  • the sight direction of each pixel in the hair model can be determined in the space corresponding to the hair model, thereby converting the representation of the hair model and the representation of the sight direction into the same space.
  • it can provide reliable data support for subsequent highlight texture sampling.
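  • As an illustration only, the line-of-sight computation described above can be sketched in plain Python (positions are made-up values in the hair model's object space):

```python
import numpy as np

def view_vector(camera_pos_obj, vertex_pos_obj):
    """Subtract the vertex position from the camera position (both already in
    the target coordinate space) and normalize, as described above."""
    v = camera_pos_obj - vertex_pos_obj
    return v / np.linalg.norm(v)

camera_pos_obj = np.array([0.0, 0.0, 10.0])  # hypothetical camera position
vertex_pos_obj = np.array([0.0, 0.0, 4.0])   # hypothetical v.vertex.xyz
viewDir_O = view_vector(camera_pos_obj, vertex_pos_obj)
print(viewDir_O)  # [0. 0. 1.]
```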
  • according to the light source vector and the line of sight vector corresponding to the pixel point, the texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image is determined.
  • An exemplary implementation of this is as follows. This step may include:
  • the line of sight component of the line of sight vector corresponding to the pixel point in the vertical direction is determined.
  • the y component of the determined light source vector can be determined as the light source component, and the y component of each line of sight vector is determined as the corresponding line of sight component.
  • the texture offset corresponding to the pixel point is determined.
  • the light source component and the line of sight component can be mapped to the texture offset according to a preset correspondence relationship.
  • speTexUVOffset represents the texture offset
  • lightDir_O.y represents the light source component
  • viewDir_O.y represents the line of sight component
  • the offset influence parameters of the light source direction and the line of sight direction on the highlight position offset can be set according to the actual application scenario, so that the light source component and the line of sight component can be weighted based on their corresponding offset influence parameters to obtain the corresponding texture offset.
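  • A minimal sketch of such a weighted combination follows; the weights w_light and w_view are hypothetical placeholders, since the patent leaves the exact correspondence to the application scenario:

```python
def texture_offset(light_y, view_y, w_light=0.5, w_view=0.5):
    """Weight the vertical (y) components of the light source vector and the
    line of sight vector to obtain the vertical texture offset."""
    return w_light * light_y + w_view * view_y

# y components of lightDir_O and viewDir_O (example values).
speTexUVOffset = texture_offset(light_y=0.25, view_y=0.75)
print(speTexUVOffset)  # 0.5
```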
  • the offset effect of the light source direction on the highlight position and the offset effect of the line of sight direction on the highlight position can be determined respectively, thereby determining the direction and amount by which the highlight should be offset under the current light source direction and line of sight direction, so that the offset highlight shape matches the line of sight direction and the light source direction. Anisotropic effects are achieved by controlling the highlight position, which improves the accuracy of highlight rendering in animation, simplifies the highlight rendering process, and improves highlight rendering efficiency.
  • sampling is performed from the highlight image according to the texture offset corresponding to each pixel point to render the pixel point, and obtain the rendered highlight rendering image.
  • This step may include:
  • the basic texture coordinate value is the coordinate value corresponding to the case where there is no offset between the light source vector and the line of sight vector.
  • the basic texture coordinate value corresponding to each pixel in the highlight image can be obtained in advance.
  • it can be the UV value obtained by sampling based on the texture sampler when the y components of the light source vector and the line of sight vector are 0.
  • the texture sampling coordinate value corresponding to the pixel point is determined.
  • the offset can be performed based on the basic texture coordinate value according to the texture offset, so as to control the offset of the highlight shape.
  • the texture color value corresponding to the texture sampling coordinate value is sampled from the highlight image as the color value corresponding to the pixel point.
  • the sampler can be used to sample from the corresponding position in the highlight image according to the texture sampling coordinate value to obtain the color value corresponding to the texture sampling coordinate value.
  • the sampling method used by the sampler on the highlight image can be set based on the actual application scenario, such as constant (nearest-neighbor) interpolation or linear interpolation, to handle image enlargement and reduction, which is not limited by the present disclosure.
  • Figures 4 and 5 are highlight rendering images rendered based on different sight directions and light source directions, in which the positions of the highlight shapes G are different to achieve an anisotropic rendering effect.
  • the texture sampling coordinate value of the pixel in the hair model sampled from the highlight image can be determined based on the texture offset, so as to obtain the corresponding color value from the highlight image to render the pixel.
  • the texture sampling coordinate value in the highlight image is determined in real time, so that the rendering color value obtained by sampling the same pixel changes, achieving movement of the surface texture of the hair model, that is, of the highlight shape, in rendered scenes with animated highlights.
  • This step can include:
  • the sub-coordinate value in the vertical direction may be the value in the V direction in the basic texture coordinate value (UV coordinate).
  • the coordinate value obtained by subtracting the texture offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point is used as the updated coordinate value in the vertical direction, and the sub-coordinate value corresponding to the pixel point is updated to the updated coordinate value to obtain the texture sampling coordinate value corresponding to the pixel point.
  • the texture offset only adjusts the vertical component of the basic texture coordinate value.
  • the texture offset can be directly superimposed on the sub-coordinate value in the vertical direction of the basic texture coordinate value to move the highlight coordinates in the vertical direction, and the updated coordinate value can replace the vertical coordinate value in the basic texture coordinate value to generate the current corresponding texture sampling coordinate value.
  • the color obtained by sampling based on the texture sampling coordinate value can be determined according to the real-time line of sight direction and light source direction, so that the color of the same pixel in the hair model can be changed by adjusting the sampled texture coordinate value, realizing control of highlight movement through the coloring of the same pixel.
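  • The vertical-only coordinate update can be sketched as follows (illustrative Python; uv is a (U, V) pair expressed as fractions of the highlight image):

```python
def apply_offset(uv, tex_offset):
    """Subtract the texture offset from the V sub-coordinate only;
    U is unchanged, so the highlight shape is preserved horizontally."""
    u, v = uv
    return (u, v - tex_offset)

base_uv = (0.5, 0.75)  # basic texture coordinate value (example)
sample_uv = apply_offset(base_uv, 0.25)
print(sample_uv)  # (0.5, 0.5)
```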
  • this step can include:
  • the sub-coordinate value in the vertical direction may be the value of the V direction in the basic texture coordinate value (UV coordinate).
  • the target offset corresponding to the pixel is determined according to the texture offset corresponding to the pixel and the preset offset adjustment parameter.
  • the preset offset adjustment parameters can be set according to actual application scenarios.
  • the offset adjustment parameters may include an offset degree parameter and an offset position parameter, where the offset degree parameter controls the amplitude of the offset, and the offset position parameter is used to control the readjustment of the offset position, together yielding the target offset.
  • speTexUVOffset' is used to represent the target offset
  • _DisScale is used to represent the offset degree parameter
  • speTexUVOffset is used to represent the texture offset
  • _SpecularShift is used to represent the offset position parameter.
  • the coordinate value obtained by subtracting the target offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point is used as the updated coordinate value in the vertical direction, and the sub-coordinate value corresponding to the pixel point is updated to the updated coordinate value to obtain the texture sampling coordinate value corresponding to the pixel point.
  • the method of generating the texture sampling coordinate value based on the basic texture coordinate value after the target offset is determined is similar to the above, and will not be described again here.
  • the texture sampling syntax SAMPLE_TEXTURE2D (Tex, sampler_Tex, uv)
  • the parameters are the texture (that is, the highlight image), the texture sampler, and the UV corresponding to the sampled texture (that is, the texture sampling coordinate value)
  • i.uv.xy represents the basic texture coordinate value
  • float2(0,_DisScale*speTexUVOffset+_SpecularShift) represents the target offset.
  • that is, the offset is applied only in the V direction, and the offset in the U direction is 0.
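  • Putting the pieces together, the UV passed to the sampler can be sketched in Python as follows (the _DisScale and _SpecularShift values are hypothetical examples):

```python
def sampling_uv(base_uv, spe_tex_uv_offset, dis_scale, specular_shift):
    """Build the texture sampling coordinate: U is untouched, V is shifted
    by the target offset _DisScale * speTexUVOffset + _SpecularShift."""
    u, v = base_uv
    target_offset = dis_scale * spe_tex_uv_offset + specular_shift
    return (u, v - target_offset)

uv = sampling_uv(base_uv=(0.5, 0.75), spe_tex_uv_offset=0.5,
                 dis_scale=0.5, specular_shift=0.25)
print(uv)  # (0.5, 0.25)
```

  The resulting pair plays the role of the third argument of SAMPLE_TEXTURE2D in the shader.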
  • when determining the offset of the highlight position, the offset position can be further controlled based on the offset adjustment parameters, so that the offset movement of the highlight is better suited to the rendering scene to which it is applied. This improves the diversity of highlight rendering and further broadens the application scenarios of the highlight rendering method.
  • the present disclosure also provides a highlight rendering device.
  • the device 10 includes:
  • the acquisition module 100 is used to acquire a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;
  • the first processing module 200 is used to determine the light source vector in the target coordinate space corresponding to the hair model according to the light source direction in the world space;
  • the second processing module 300 is used to determine, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in the world space;
  • the determination module 400 is configured to determine, for each pixel point, the texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image according to the light source vector and the line of sight vector corresponding to the pixel point;
  • the rendering module 500 is configured to sample from the highlight image according to the texture offset corresponding to each pixel point to render the pixel point to obtain a rendered highlight rendering image.
  • the determining module includes:
  • the first determination sub-module is used to determine the light source component of the light source vector in the vertical direction
  • the second determination sub-module is used to determine, for each pixel point, the line of sight component of the line of sight vector corresponding to the pixel point in the vertical direction;
  • the third determination sub-module is used to determine the texture offset corresponding to the pixel point based on the light source component and the line of sight component corresponding to each pixel point.
  • the second processing module includes:
  • the fourth determination sub-module is used to determine the camera position coordinates, in the target coordinate space, of the camera position in the world space;
  • the processing submodule is configured to normalize the vector obtained by subtracting the position coordinates of the pixel from the camera position coordinates for each of the pixels, and use the vector as the line of sight vector corresponding to the pixel.
  • the rendering module includes:
  • an obtaining sub-module used to obtain the basic texture coordinate value corresponding to each pixel point in the highlight image, wherein the basic texture coordinate value is the coordinate value corresponding to the case where there is no offset between the light source vector and the line of sight vector;
  • the fifth determination sub-module is used to determine the texture sampling coordinate value corresponding to each pixel point based on the texture offset corresponding to the pixel point and the basic texture coordinate value;
  • a sampling submodule configured to sample the texture color value corresponding to the texture sampling coordinate value from the highlight image as the color value corresponding to the pixel point;
  • a rendering sub-module is used to render the pixel based on the color value corresponding to each pixel to obtain the highlight rendering image.
  • the fifth determination sub-module includes:
  • the sixth determination sub-module is used to determine the sub-coordinate value of the basic texture coordinate value of each pixel point in the vertical direction;
  • the seventh determination sub-module is used to, for each pixel point, use the coordinate value obtained by subtracting the texture offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as the updated coordinate value in the vertical direction, and update the sub-coordinate value corresponding to the pixel point to the updated coordinate value to obtain the texture sampling coordinate value corresponding to the pixel point.
  • the fifth determination sub-module includes:
  • the eighth determination sub-module is used to determine the sub-coordinate value of the basic texture coordinate value of each pixel point in the vertical direction;
  • the ninth determination sub-module is used for determining, for each pixel point, the target offset amount corresponding to the pixel point according to the texture offset amount corresponding to the pixel point and the preset offset adjustment parameter;
  • the tenth determination sub-module is used to, for each pixel point, use the coordinate value obtained by subtracting the target offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as the updated coordinate value in the vertical direction, and update the sub-coordinate value corresponding to the pixel point to the updated coordinate value to obtain the texture sampling coordinate value corresponding to the pixel point.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDA (Personal Digital Assistant, personal digital assistant), PAD (tablet computer), PMP (Portable Multimedia Player, portable multimedia player), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TV (Television, television), desktop computers, etc.
  • the electronic device shown in FIG. 7 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, ROM 602 and RAM 603 are connected to each other via a bus 604.
  • An input/output (I/O) interface 605 is also connected to bus 604.
  • the following may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, etc.; a storage device 608 including a magnetic tape, a hard disk, etc.; and a communication device 609.
  • Communication device 609 may allow electronic device 600 to communicate wirelessly or wiredly with other devices to exchange data.
  • While FIG. 7 illustrates an electronic device 600 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 609, or from storage device 608, or from ROM 602.
  • When the computer program is executed by the processing device 601, the above functions defined in the method of the embodiment of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more conductors, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program codes contained on computer-readable media can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communications network).
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs.
  • when executed by the electronic device, the one or more programs cause the electronic device to: obtain a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image; determine the light source vector in the target coordinate space corresponding to the hair model according to the light source direction in world space; determine, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in world space; determine, for each pixel, the texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image according to the light source vector and the line of sight vector corresponding to the pixel; and sample from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain a rendered highlight rendering image.
• Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
• the program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
• the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
• each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
• each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
  • the modules involved in the embodiments of the present disclosure can be implemented in software or hardware.
• under certain circumstances, the name of a module does not constitute a limitation on the module itself; for example, the acquisition module can also be described as "a module that acquires the highlight image to be rendered".
• exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
• Example 1 provides a highlight rendering method, wherein the method includes: obtaining a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image; determining the light source vector in the target coordinate space corresponding to the hair model according to the light source direction in world space; for each pixel to be rendered in the hair model, determining the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in world space; and for each pixel, determining, according to the light source vector and the line of sight vector corresponding to the pixel, the texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image.
• sampling is performed from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain a rendered highlight rendering image.
• Example 2 provides the method of Example 1, wherein determining, for each pixel, the texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image according to the light source vector and the line of sight vector corresponding to the pixel includes: determining the light source component of the light source vector in the vertical direction; for each pixel, determining the line of sight component of the line of sight vector corresponding to the pixel in the vertical direction; and determining, according to the light source component and the line of sight component corresponding to each pixel, the texture offset corresponding to the pixel.
• Example 3 provides the method of Example 1, wherein determining, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in world space includes: determining the camera position coordinates, in the target coordinate space, of the camera position in world space; and, for each pixel, normalizing the vector obtained by subtracting the position coordinates of the pixel from the camera position coordinates and using the resulting vector as the line of sight vector corresponding to the pixel.
• Example 4 provides the method of Example 1, wherein sampling from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain the rendered highlight rendering image includes: obtaining the basic texture coordinate value corresponding to each pixel, wherein the basic texture coordinate value is the corresponding coordinate value when there is no offset between the light source vector and the line of sight vector; and determining the texture sampling coordinate value corresponding to the pixel according to the texture offset corresponding to each pixel and the basic texture coordinate value.
• Example 5 provides the method of Example 4, wherein determining the texture sampling coordinate value corresponding to the pixel according to the texture offset corresponding to each pixel and the basic texture coordinate value includes: using the coordinate value obtained by subtracting the texture offset corresponding to the pixel from the sub-coordinate value corresponding to the pixel as the updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel to the updated coordinate value to obtain the texture sampling coordinate value corresponding to the pixel.
• Example 6 provides the method of Example 4, wherein determining the texture sampling coordinate value corresponding to the pixel according to the texture offset corresponding to each pixel and the basic texture coordinate value includes: using the coordinate value obtained by subtracting the target offset corresponding to the pixel from the sub-coordinate value corresponding to the pixel as the updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel to the updated coordinate value to obtain the texture sampling coordinate value corresponding to the pixel.
  • Example 7 provides a highlight rendering device, wherein the device includes:
  • An acquisition module configured to acquire a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image
  • the first processing module is used to determine the light source vector in the target coordinate space corresponding to the hair model based on the light source direction in the world space;
  • the second processing module is used to determine, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in the world space;
• a determination module configured to determine, for each pixel, the texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image according to the light source vector and the line of sight vector corresponding to the pixel;
  • a rendering module configured to sample from the highlight image according to the texture offset corresponding to each pixel point to render the pixel point to obtain a rendered highlight rendering image.
  • Example 8 provides the apparatus of Example 7, wherein the determining module includes:
• the first determination sub-module is used to determine the light source component of the light source vector in the vertical direction;
  • the second determination sub-module is used to determine, for each pixel point, the line of sight component of the line of sight vector corresponding to the pixel point in the vertical direction;
  • the third determination sub-module is used to determine the texture offset corresponding to the pixel point based on the light source component and the line of sight component corresponding to each pixel point.
• Example 9 provides a computer-readable medium having a computer program stored thereon, where the computer program, when executed by a processing device, implements the steps of the method of any one of Examples 1-6.
• Example 10 provides an electronic device, including:
• a storage device having a computer program stored thereon; and
• a processing device configured to execute the computer program in the storage device to implement the steps of the method in any one of Examples 1-6.
  • Example 11 provides a computer program product, including a computer program that, when executed by a processing device, implements the steps of the method in any one of Examples 1-6.
  • Example 12 provides a computer program that, when executed by a processing device, implements the steps of the method in any one of Examples 1-6.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The present invention relates to a highlight rendering method and apparatus, a medium, and an electronic device. The method comprises: obtaining a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image; determining, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model; for each pixel to be rendered in the hair model, determining, according to a sight line direction in world space, a sight line vector corresponding to the pixel in the target coordinate space; for each pixel, determining, according to the light source vector and the sight line vector corresponding to the pixel, a texture offset of the pixel in a vertical direction corresponding to a horizontal direction in the highlight image; and sampling from the highlight image according to the texture offset corresponding to each pixel so as to render the pixels and obtain a rendered highlight rendering image. Therefore, an anisotropic highlight rendering effect can be achieved while the rendered highlight shape is preserved.

Description

Highlight rendering method and apparatus, medium, and electronic device

Cross-Reference to Related Applications

The present disclosure claims priority to Chinese patent application No. 202210386625.0, titled "Highlight rendering method and apparatus, medium and electronic device", filed on April 13, 2022, the entire content of which is incorporated herein by reference.

Technical Field

The present disclosure relates to the field of image processing, and specifically to a highlight rendering method, apparatus, medium, electronic device, computer program product, and program.
Background

In the hair rendering of cartoon characters, unlike the highlight rendering of realistic hair, the highlight usually has a blocky shape. A in Figure 1 shows a highlight image of realistic hair, and B in Figure 1 shows a hair highlight image under cartoon rendering.

In a realistic hair scene, light striking the hair from different angles produces different highlights in the line of sight. In the related art, an anisotropic algorithm is usually used to render hair highlights, so that the hair highlights in the rendered cartoon animation can change as the light source and the line of sight change. However, with this rendering approach the highlight is computed in real time by superimposing a perturbation texture on the lighting model, making it difficult to guarantee the highlight shape in highlight rendering.
Summary

This Summary is provided to introduce concepts in a simplified form that are described in detail in the Detailed Description below. This Summary is not intended to identify key or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.

In a first aspect, the present disclosure provides a highlight rendering method, the method including:

obtaining a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;

determining a light source vector in a target coordinate space corresponding to a hair model according to a light source direction in world space;

for each pixel to be rendered in the hair model, determining a line of sight vector corresponding to the pixel in the target coordinate space according to a line of sight direction in world space;

for each pixel, determining, according to the light source vector and the line of sight vector corresponding to the pixel, a texture offset of the pixel in a vertical direction corresponding to a horizontal direction in the highlight image; and

sampling from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain a rendered highlight rendering image.
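The five steps above can be sketched end to end. The Python emulation below is illustrative only, since the disclosure targets shader pipelines such as Unity Shader; the toy texture, the border clamping, and the nearest-texel lookup are assumptions of this sketch, not part of the disclosure:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def texture_offset(light_dir_o, view_dir_o):
    # Step 4: only the vertical (y) components of the light source
    # vector and the line of sight vector drive the offset, so the
    # highlight shape itself is never distorted.
    return 0.5 * (light_dir_o[1] + view_dir_o[1])

def sample_highlight(highlight_tex, u, v, offset):
    # Step 5: shift only the V coordinate, then sample the texel.
    # highlight_tex is a row-major grid of grayscale values in [0, 1];
    # clamping at the border is an assumption of this sketch.
    h, w = len(highlight_tex), len(highlight_tex[0])
    v_shifted = min(max(v - offset, 0.0), 1.0)
    x = min(int(u * w), w - 1)
    y = min(int(v_shifted * h), h - 1)
    return highlight_tex[y][x]

# A 4x4 toy "highlight image": a bright band on row 1.
tex = [[0, 0, 0, 0],
       [1, 1, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]

light = normalize((0.0, 1.0, 0.0))  # light from above, object space
view = normalize((0.0, 0.0, 1.0))   # camera straight ahead
off = texture_offset(light, view)   # 0.5 * (1.0 + 0.0) = 0.5
print(off)                                     # 0.5
print(sample_highlight(tex, 0.5, 0.85, off))   # band is hit: 1
print(sample_highlight(tex, 0.5, 0.85, 0.0))   # no offset: 0
```

The same pixel samples a different texel as the light and view vectors change, which is the mechanism by which the highlight appears to move while its drawn shape stays fixed.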
In a second aspect, the present disclosure provides a highlight rendering apparatus, the apparatus including:

an acquisition module configured to acquire a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;

a first processing module configured to determine a light source vector in a target coordinate space corresponding to a hair model according to a light source direction in world space;

a second processing module configured to determine, for each pixel to be rendered in the hair model, a line of sight vector corresponding to the pixel in the target coordinate space according to a line of sight direction in world space;

a determination module configured to determine, for each pixel, a texture offset of the pixel in a vertical direction corresponding to a horizontal direction in the highlight image according to the light source vector and the line of sight vector corresponding to the pixel; and

a rendering module configured to sample from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain a rendered highlight rendering image.
In a third aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, where the computer program, when executed by a processing device, implements the steps of the method of the first aspect.

In a fourth aspect, the present disclosure provides an electronic device, including:

a storage device on which a computer program is stored; and

a processing device configured to execute the computer program in the storage device to implement the steps of the method of the first aspect.

In a fifth aspect, the present disclosure provides a computer program product, including a computer program that, when executed by a processing device, implements the steps of the method of the first aspect.

In a sixth aspect, the present disclosure provides a computer program that, when executed by a processing device, implements the steps of the method of the first aspect.

Other features and advantages of the present disclosure will be described in detail in the Detailed Description that follows.
Brief Description of the Drawings

The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:

Figure 1 is a schematic comparison of highlight images of realistic hair and cartoon hair;

Figure 2 is a flowchart of a highlight rendering method according to an embodiment of the present disclosure;

Figure 3 is a schematic diagram of a highlight image according to an embodiment of the present disclosure;

Figures 4 and 5 are schematic diagrams of highlight rendering images according to embodiments of the present disclosure;

Figure 6 is a block diagram of a highlight rendering apparatus according to an embodiment of the present disclosure;

Figure 7 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be understood that the steps described in the method implementations of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method implementations may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.

As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, and are not used to limit the order of the functions performed by these devices, modules, or units or their interdependence.

It should be noted that the modifiers "a" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".

The names of messages or information exchanged between multiple devices in the implementations of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.

All actions in the present disclosure that acquire signals, information, or data are performed in compliance with the corresponding data protection laws and policies of the country where they take place and with authorization from the owner of the corresponding device.
Figure 2 is a flowchart of a highlight rendering method according to an embodiment of the present disclosure. The method may include the following steps.

In step 11, a highlight image to be rendered is obtained, wherein a highlight shape to be rendered is drawn in the highlight image.

For example, when performing highlight rendering on a hair model based on a highlight image, the correspondence between pixels in the hair model and sampling positions in the highlight image is usually established through UV coordinates. The UV coordinates may be percentage coordinates of the highlight image, with the horizontal direction denoted U and the vertical direction denoted V, as in the highlight image shown in Figure 3, in which the white portion is the highlight shape to be rendered. When rendering the hair model based on the highlight image, the highlight image is mapped onto the surface of the hair model to achieve highlight rendering.
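As a sketch of the percentage-coordinate convention described above, a hypothetical nearest-texel lookup could map UV values in [0, 1] to texel indices as follows (the actual sampler behavior, filtering, and wrap mode of the rendering engine are not specified by the disclosure, so border clamping here is an assumption):

```python
def uv_to_texel(u, v, width, height):
    # UV coordinates are percentage coordinates of the highlight image:
    # u runs along the horizontal (U) axis, v along the vertical (V)
    # axis. Nearest-texel lookup with border clamping is an assumption
    # of this sketch; real samplers also offer filtering and wrapping.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

print(uv_to_texel(0.25, 0.75, 256, 256))  # (64, 192)
print(uv_to_texel(1.0, 1.0, 256, 256))    # clamped to (255, 255)
```

Because UV values are percentages, the same mapping works for a highlight image of any resolution.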
In step 12, a light source vector in a target coordinate space corresponding to the hair model is determined according to the light source direction in world space.

In step 13, for each pixel to be rendered in the hair model, a line of sight vector corresponding to the pixel in the target coordinate space is determined according to the line of sight direction in world space.

Rendering may be performed based on a rendering model commonly used in the art, for example a Unity Shader for highlight rendering, from which parameters such as the light source direction and the line of sight direction in world space can be obtained. In this embodiment, these parameters can be converted from world space into the model space corresponding to the hair model, i.e., the target coordinate space, so as to express the influence of changes in the light source direction and the line of sight direction on the highlight position. It should be noted that the execution order shown in Figure 2 is exemplary; step 12 and step 13 may be executed sequentially or in parallel, which is not limited by the present disclosure.
In step 14, for each pixel, a texture offset of the pixel in the vertical direction corresponding to the horizontal direction in the highlight image is determined according to the light source vector and the line of sight vector corresponding to the pixel.

In anisotropic highlight variation, when the light source direction or the line of sight direction moves up, down, left, or right relative to the hair model, the position of the highlight usually moves correspondingly, making it difficult to preserve the highlight shape during the change. Therefore, in this embodiment, to keep the highlight shape on the hair fixed, when the light source direction or the line of sight direction changes, only the texture corresponding to the highlight shape is offset in the vertical direction, achieving an anisotropic variation effect while preserving the highlight shape.
In step 15, sampling is performed from the highlight image according to the texture offset corresponding to each pixel to render the pixel and obtain a rendered highlight rendering image.

As described above, the offset that changes in the line of sight direction and the light source direction impose on the vertical position of the highlight shape can be determined, so that sampling from the highlight image can be performed at the position shifted by the texture offset; that is, the color value at the shifted position is sampled from the highlight image for rendering, so that the rendered color matches the current light source direction and line of sight direction.
Thus, in the above technical solution, the light source direction and the line of sight direction in world space can be converted into the target coordinate space corresponding to the hair model, so that the influence of the line of sight direction and the light source direction on the highlight position can be determined in a single model space. Moreover, in the embodiments of the present disclosure, only the offset of the highlight position in the vertical direction corresponding to the horizontal direction is considered: for the same pixel in the hair model, the texture sampling position in the highlight image is changed based on the line of sight direction and the light source direction, so that the rendered color value sampled for that pixel changes. The highlight position can therefore be offset while the highlight shape is preserved, achieving an anisotropic highlight rendering effect, simplifying the highlight rendering process, improving highlight rendering efficiency and accuracy, and improving the user's experience of viewing the rendered animation.
In a possible embodiment, an exemplary implementation of determining the light source vector in the target coordinate space corresponding to the hair model according to the light source direction in world space is as follows:

Obtain the light source direction in world space and convert it based on a transformation matrix to obtain the light source vector in the target coordinate space. For example, it can be determined by the following formula:

lightDir_O = mul((float3x3)unity_WorldToObject, LightDirection.xyz)

where mul(M, v) denotes the matrix multiplication of matrix M and vector v used for the space conversion, unity_WorldToObject denotes the matrix that transforms from world space to object space (i.e., the target coordinate space), LightDirection.xyz denotes the coordinates of the light source direction in world space, and lightDir_O denotes the light source vector in the target coordinate space. The mul() function and unity_WorldToObject are conventional in the art and are not described in detail here.
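To make the transform concrete, the mul() call can be emulated with ordinary matrix arithmetic. The Python below is a stand-in for the shader code, and the world-to-object matrix shown is a hypothetical rotation chosen purely for illustration (in Unity it would come from unity_WorldToObject):

```python
def mul3x3(m, v):
    # Emulates mul(M, v) for a 3x3 matrix M (a list of 3 rows) and a
    # 3-component vector v: a plain matrix-times-column-vector product.
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical world-to-object matrix: object space equals world space
# rotated 90 degrees about the y axis (values chosen for illustration).
world_to_object = [[0.0, 0.0, -1.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]]

light_dir_world = (1.0, 0.0, 0.0)   # stand-in for LightDirection.xyz
light_dir_o = mul3x3(world_to_object, light_dir_world)
print(light_dir_o)  # (0.0, 0.0, 1.0)
```

A light pointing along world +x becomes a light along object-space +z under this rotation, which is exactly the per-model reinterpretation the conversion is for.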
In a possible embodiment, an exemplary implementation of determining, for each pixel to be rendered in the hair model, the line of sight vector corresponding to the pixel in the target coordinate space according to the line of sight direction in world space is as follows. This step may include:

determining the camera position coordinates, in the target coordinate space, of the camera position in world space.

The camera position in world space can be obtained through _WorldSpaceCameraPos. Similarly, the camera position can be converted into the target coordinate space based on the transformation matrix, with the formula:

mul(unity_WorldToObject, float4(_WorldSpaceCameraPos.xyz, 1))

where unity_WorldToObject denotes the transformation matrix from world space to the target coordinate space, and _WorldSpaceCameraPos.xyz denotes the coordinates of the camera position in world space.

For each pixel, the vector obtained by subtracting the position coordinates of the pixel from the camera position coordinates is normalized, and the resulting vector is used as the line of sight vector corresponding to the pixel.

For example, vector subtraction determines the direction from the camera position to the pixel, i.e., the line of sight direction. The standardization may be normalization; accordingly, the line of sight vector corresponding to the pixel can be determined by the following formula:

viewDir_O = normalize(mul(unity_WorldToObject, float4(_WorldSpaceCameraPos.xyz, 1)).xyz - v.vertex.xyz)

where v.vertex.xyz denotes the position coordinates of the vertex (pixel) in the hair model v, normalize denotes normalizing the vector, and viewDir_O denotes the line of sight vector corresponding to the pixel.

Thus, through the above technical solution, the line of sight direction of each pixel in the hair model can be determined in the space corresponding to the hair model, so that the representation of the hair model and the representation of the line of sight direction are converted into the same space. The influence of the line of sight direction on specular reflection can then be obtained under the same spatial standard, providing reliable data support for subsequent highlight texture sampling.
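The same computation can be sketched outside the shader. The Python below mirrors the formula above with hypothetical object-space positions chosen for illustration:

```python
import math

def view_dir_object(cam_pos_o, vertex_pos_o):
    # viewDir_O = normalize(cameraPos - vertexPos), with both positions
    # already expressed in the model (object) space of the hair model.
    d = tuple(c - p for c, p in zip(cam_pos_o, vertex_pos_o))
    n = math.sqrt(sum(x * x for x in d))
    return tuple(x / n for x in d)

# Hypothetical positions chosen for illustration.
cam = (0.0, 3.0, 4.0)    # camera position in object space
vert = (0.0, 0.0, 0.0)   # pixel (vertex) position, v.vertex.xyz
print(view_dir_object(cam, vert))  # (0.0, 0.6, 0.8), unit length
```

Normalizing matters because only the direction is used later; the distance between the camera and the vertex must not influence the offset.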
由此,通过上述技术方案,可以针对于头发模型中的每一像素点确定出该像素点在头发模型对应的空间中的视线方向,从而将头发模型的表示和视线方向的表示转换到同一空间下,以便于基于同一空间标准获得该视线方向对高光反射的影响,为后续进行高光纹理采样提供可靠的数据支持。Therefore, through the above technical solution, the sight direction of each pixel in the hair model can be determined in the space corresponding to the hair model, thereby converting the representation of the hair model and the representation of the sight direction into the same space. In order to obtain the influence of the line of sight direction on highlight reflection based on the same spatial standard, it can provide reliable data support for subsequent highlight texture sampling.
在一种可能的实施例中,所述针对每一所述像素点,根据所述光源向量和所述像素点对应的视线向量,确定所述像素点在高光图像中与水平方向对应的垂直方向上的纹理偏移量的示例性实现方式如下,该步骤可以包括:In a possible embodiment, for each pixel point, the vertical direction corresponding to the horizontal direction of the pixel point in the highlight image is determined based on the light source vector and the line of sight vector corresponding to the pixel point. An exemplary implementation of the texture offset on is as follows. This step may include:
确定所述光源向量在所述垂直方向上的光源分量。Determine the light source component of the light source vector in the vertical direction.
针对每一所述像素点,确定所述像素点对应的视线向量在所述垂直方向上的视线分量。For each pixel point, the line of sight component of the line of sight vector corresponding to the pixel point in the vertical direction is determined.
如上文所述,在本公开中在光源方向或视线方向变化时,只会影响高光形状在垂直方向上的偏移,因此,在该实施例中只需要确定出光源向量和视线向量在垂直方向上的分量。根据头发模型对应的目标坐标空间可知,垂直方向上的分量即为该目标坐标空间下的向量对应的y分量。因此,在该实施例中,可以确定出的光源向量的y分量确定为该光源分量,并将每一视线向量的y分量确定为对应的视线分量。As mentioned above, in the present disclosure, when the light source direction or the line of sight direction changes, it will only affect the shift of the highlight shape in the vertical direction. Therefore, in this embodiment, it is only necessary to determine the vertical direction of the light source vector and the line of sight vector. on the weight. According to the target coordinate space corresponding to the hair model, it can be known that the component in the vertical direction is the y component corresponding to the vector in the target coordinate space. Therefore, in this embodiment, the y component of the determined light source vector can be determined as the light source component, and the y component of each line of sight vector is determined as the corresponding line of sight component.
根据所述光源分量和每一所述像素点对应的视线分量,确定所述像素点对应的纹理偏移量。According to the light source component and the line of sight component corresponding to each pixel point, the texture offset corresponding to the pixel point is determined.
The light source component and the line-of-sight component may be mapped to a texture offset according to a preset correspondence. As an example, for each pixel point, the average of the light source component and the line-of-sight component corresponding to the pixel point may be determined as the texture offset corresponding to the pixel point, that is:
speTexUVOffset=0.5*(lightDir_O.y+viewDir_O.y)
Here, speTexUVOffset denotes the texture offset, lightDir_O.y denotes the light source component, and viewDir_O.y denotes the line-of-sight component.
As another example, offset-influence parameters describing how the light source direction and the line-of-sight direction each affect the highlight position may be set according to the actual application scenario, so that the light source component and the line-of-sight component are weighted by their respective offset-influence parameters to obtain the corresponding texture offset.
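Both variants above, the plain average and the weighted combination, reduce to the same form. A minimal Python sketch follows; it is illustrative only and not part of the disclosure, and the weight parameter names are hypothetical:

```python
def texture_uv_offset(light_dir_y, view_dir_y, w_light=0.5, w_view=0.5):
    """speTexUVOffset = w_light * lightDir_O.y + w_view * viewDir_O.y.

    With w_light == w_view == 0.5 this is the plain average
    0.5 * (lightDir_O.y + viewDir_O.y); other weights model the
    per-scenario offset-influence parameters.
    """
    return w_light * light_dir_y + w_view * view_dir_y
```

Because only the y components enter the formula, horizontal changes in the light or view direction leave the offset, and hence the highlight shape, unchanged.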
Thus, through the above technical solution, the influence of the light source direction and of the line-of-sight direction on the highlight position can be determined separately, thereby determining the direction and amount by which the highlight should be shifted under the current light source direction and line-of-sight direction. The highlight shape is shifted so as to match the line-of-sight direction and the light source direction, achieving an anisotropic effect by controlling the highlight position, improving the accuracy of highlight rendering in animation, and simplifying the highlight rendering process so as to improve highlight rendering efficiency.
In a possible embodiment, an exemplary implementation of sampling from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel points and obtain a rendered highlight rendering image, is as follows. This step may include:
obtaining the base texture coordinate value corresponding to each pixel point in the highlight image, where the base texture coordinate value is the coordinate value corresponding to the case in which the light source vector and the line-of-sight vector produce no offset.
The base texture coordinate value corresponding to each pixel point in the highlight image may be obtained in advance; for example, it may be the UV value obtained by sampling with the texture sampler when the y components of the light source vector and the line-of-sight vector are 0.
Then, according to the texture offset corresponding to each pixel point and the base texture coordinate value, the texture sampling coordinate value corresponding to that pixel point is determined.
After the texture offset corresponding to a pixel point is determined, an offset can be applied on top of the base texture coordinate value according to that texture offset, so as to control the shift of the highlight shape.
After that, the texture color value corresponding to the texture sampling coordinate value is sampled from the highlight image as the color value corresponding to the pixel point.
Once the texture sampling coordinate value corresponding to a pixel point is determined, the sampler can sample the corresponding position in the highlight image according to that coordinate value to obtain the color value. The manner in which the sampler samples the highlight image can be set according to the actual application scenario; for example, nearest-neighbor (constant) interpolation or linear interpolation may be used to handle image magnification and minification, which is not limited by the present disclosure.
The pixel points are rendered based on the color value corresponding to each pixel point, so as to obtain the highlight rendering image.
For example, Figures 4 and 5 show highlight rendering images rendered under different line-of-sight directions and light source directions; the positions of the highlight shape G differ, achieving an anisotropic rendering effect.
Thus, through the above technical solution, the texture sampling coordinate value at which a pixel point of the hair model samples the highlight image can be determined based on the texture offset, so that the corresponding color value is obtained from the highlight image to render that pixel point. For the same pixel point in the hair model, the texture sampling coordinate value in the highlight image is determined in real time, so that the rendered color value sampled for that pixel point changes. This gives the surface texture of the hair model, that is, the highlight shape, its mobility, fitting the rendering of animated highlights.
In a possible embodiment, an exemplary implementation of determining the texture sampling coordinate value corresponding to each pixel point according to the texture offset corresponding to the pixel point and the base texture coordinate value is as follows. This step may include:
determining the sub-coordinate value, in the vertical direction, of the base texture coordinate value of each pixel point. For example, the sub-coordinate value in the vertical direction may be the V component of the base texture coordinate value (the UV coordinate);
for each pixel point, using the coordinate value obtained by subtracting the texture offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as the updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
As described above, the embodiments of the present disclosure need to keep the highlight shape fixed, so when the highlight is offset, it is shifted only in the vertical direction. Accordingly, when the texture sampling coordinate value corresponding to a pixel point is determined based on the texture offset, the texture offset only adjusts the vertical component of the base texture coordinate value.
As an example, the texture offset can be applied directly to the vertical sub-coordinate value of the base texture coordinate value, thereby moving the highlight's coordinate in the vertical direction, and the updated coordinate value replaces the vertical coordinate value in the base texture coordinate value to generate the current texture sampling coordinate value. In this way, the color sampled at the texture sampling coordinate value is determined by the real-time line-of-sight direction and light source direction, so that the color rendered at the same pixel point of the hair model changes as the sampling texture coordinate value is adjusted, realizing control over the movement of the highlight.
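The vertical-only update described above amounts to subtracting the texture offset from the V component while leaving U untouched. A minimal Python sketch, illustrative only and not part of the disclosure:

```python
def sample_uv(base_uv, tex_uv_offset):
    """Shift only the V (vertical) component of the base texture coordinate
    by the texture offset; U stays fixed, so the highlight shape itself is
    preserved and only its vertical position moves.
    """
    u, v = base_uv
    return (u, v - tex_uv_offset)
```

Sampling the highlight image at the returned coordinate then yields the shifted highlight color for that pixel point.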
In a possible embodiment, another exemplary implementation of determining the texture sampling coordinate value corresponding to each pixel point according to the texture offset corresponding to the pixel point and the base texture coordinate value is as follows. This step may include:
determining the sub-coordinate value, in the vertical direction, of the base texture coordinate value of each pixel point. Likewise, the sub-coordinate value in the vertical direction may be the V component of the base texture coordinate value (the UV coordinate);
for each pixel point, determining the target offset corresponding to the pixel point according to the texture offset corresponding to the pixel point and a preset offset adjustment parameter.
The preset offset adjustment parameters can be set according to the actual application scenario. For example, the offset adjustment parameters may include an offset degree parameter and an offset position parameter, where the offset degree parameter controls the magnitude of the offset and the offset position parameter is used to re-adjust the offset position. For example, the target offset may be expressed as:
speTexUVOffset'=_DisScale*speTexUVOffset+_SpecularShift
Here, speTexUVOffset' denotes the target offset, _DisScale denotes the offset degree parameter, speTexUVOffset denotes the texture offset, and _SpecularShift denotes the offset position parameter.
For each pixel point, the coordinate value obtained by subtracting the target offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point is used as the updated coordinate value in the vertical direction, and the sub-coordinate value corresponding to the pixel point is updated to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
The manner of generating the texture sampling coordinate value from the base texture coordinate value after the target offset is determined is similar to that described above and will not be repeated here.
After the texture sampling coordinate values are determined, sampling can be performed from the highlight image based on them. For example, sampling can be performed through the following code:
specularTex=
SAMPLE_TEXTURE2D(_ShadingTex,sampler_ShadingTex,
i.uv.xy-float2(0,_DisScale*speTexUVOffset+_SpecularShift))
Here, the texture sampling syntax is SAMPLE_TEXTURE2D(Tex, sampler_Tex, uv), whose parameters are the texture (i.e., the highlight image), the texture sampler, and the UV at which the texture is sampled (i.e., the texture sampling coordinate value);
i.uv.xy denotes the base texture coordinate value;
float2(0,_DisScale*speTexUVOffset+_SpecularShift) denotes the target offset. As described above, only the vertical offset of the highlight is considered, that is, the offset in the V direction; the offset in the U direction is 0.
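The full sampling coordinate, including the offset adjustment parameters, can be sketched the same way. The Python below is an illustrative re-implementation of i.uv.xy - float2(0, _DisScale*speTexUVOffset + _SpecularShift), not shader code from the disclosure:

```python
def highlight_sample_uv(base_uv, spe_tex_uv_offset, dis_scale=1.0, specular_shift=0.0):
    """Apply the target offset _DisScale * speTexUVOffset + _SpecularShift
    to the V component only; the U offset is 0, matching
    i.uv.xy - float2(0, _DisScale * speTexUVOffset + _SpecularShift).
    (How out-of-range coordinates wrap or clamp is left to the sampler.)
    """
    target_offset = dis_scale * spe_tex_uv_offset + specular_shift
    u, v = base_uv
    return (u, v - target_offset)
```

With dis_scale = 1 and specular_shift = 0 this degenerates to the simpler embodiment that subtracts the texture offset directly.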
Thus, through the above technical solution, when the offset of the highlight position is determined, the offset position can be further controlled based on the offset adjustment parameters, so that the movement of the highlight better fits the rendering scenario to which it is applied. This improves the diversity of highlight rendering and further broadens the application scenarios of the highlight rendering method.
The present disclosure further provides a highlight rendering apparatus. As shown in Figure 6, the apparatus 10 includes:
an acquisition module 100, configured to acquire a highlight image to be rendered, where a highlight shape to be rendered is drawn in the highlight image;
a first processing module 200, configured to determine, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model;
a second processing module 300, configured to determine, for each pixel point to be rendered in the hair model, a line-of-sight vector corresponding to the pixel point in the target coordinate space according to a line-of-sight direction in world space;
a determination module 400, configured to determine, for each pixel point, a texture offset of the pixel point in a vertical direction corresponding to a horizontal direction in the highlight image, according to the light source vector and the line-of-sight vector corresponding to the pixel point;
a rendering module 500, configured to sample from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel points and obtain a rendered highlight rendering image.
Optionally, the determination module includes:
a first determination sub-module, configured to determine a light source component of the light source vector in the vertical direction;
a second determination sub-module, configured to determine, for each pixel point, a line-of-sight component, in the vertical direction, of the line-of-sight vector corresponding to the pixel point;
a third determination sub-module, configured to determine the texture offset corresponding to each pixel point according to the light source component and the line-of-sight component corresponding to that pixel point.
Optionally, the second processing module includes:
a fourth determination sub-module, configured to determine camera position coordinates, in the target coordinate space, of the camera position in world space;
a processing sub-module, configured to, for each pixel point, normalize the vector obtained by subtracting the position coordinates of the pixel point from the camera position coordinates, and use the resulting vector as the line-of-sight vector corresponding to the pixel point.
Optionally, the rendering module includes:
an acquisition sub-module, configured to obtain a base texture coordinate value corresponding to each pixel point in the highlight image, where the base texture coordinate value is the coordinate value corresponding to the case in which the light source vector and the line-of-sight vector produce no offset;
a fifth determination sub-module, configured to determine the texture sampling coordinate value corresponding to each pixel point according to the texture offset corresponding to the pixel point and the base texture coordinate value;
a sampling sub-module, configured to sample, from the highlight image, the texture color value corresponding to the texture sampling coordinate value as the color value corresponding to the pixel point;
a rendering sub-module, configured to render the pixel points based on the color value corresponding to each pixel point, so as to obtain the highlight rendering image.
Optionally, the fifth determination sub-module includes:
a sixth determination sub-module, configured to determine a sub-coordinate value, in the vertical direction, of the base texture coordinate value of each pixel point;
a seventh determination sub-module, configured to, for each pixel point, use the coordinate value obtained by subtracting the texture offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as the updated coordinate value in the vertical direction, and update the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
Optionally, the fifth determination sub-module includes:
an eighth determination sub-module, configured to determine a sub-coordinate value, in the vertical direction, of the base texture coordinate value of each pixel point;
a ninth determination sub-module, configured to determine, for each pixel point, a target offset corresponding to the pixel point according to the texture offset corresponding to the pixel point and a preset offset adjustment parameter;
a tenth determination sub-module, configured to, for each pixel point, use the coordinate value obtained by subtracting the target offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as the updated coordinate value in the vertical direction, and update the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
Referring now to Figure 7, a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Multimedia Players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs (televisions) and desktop computers. The electronic device shown in Figure 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Figure 7, the electronic device 600 may include a processing device (for example, a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Figure 7 shows the electronic device 600 with various devices, it should be understood that it is not required to implement or possess all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more conductors, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device: acquires a highlight image to be rendered, where a highlight shape to be rendered is drawn in the highlight image; determines, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model; determines, for each pixel point to be rendered in the hair model, a line-of-sight vector corresponding to the pixel point in the target coordinate space according to a line-of-sight direction in world space; determines, for each pixel point, a texture offset of the pixel point in a vertical direction corresponding to a horizontal direction in the highlight image according to the light source vector and the line-of-sight vector corresponding to the pixel point; and samples from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel points and obtain a rendered highlight rendering image.
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言——诸如“C”语言或类似的程序设计语言。程序 代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)——连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages—such as Java, Smalltalk, C++, and Includes conventional procedural programming languages - such as "C" or similar programming languages. program The code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In situations involving remote computers, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (such as an Internet service provider). connected via the Internet).
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, segment, or portion of code that contains one or more logic functions that implement the specified executable instructions. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagram and/or flowchart illustration, and combinations of blocks in the block diagram and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or operations. , or can be implemented using a combination of specialized hardware and computer instructions.
The modules involved in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a module does not, in some cases, constitute a limitation on the module itself; for example, the acquisition module may also be described as "a module that acquires a highlight image to be rendered."
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
According to one or more embodiments of the present disclosure, Example 1 provides a highlight rendering method, wherein the method comprises:
obtaining a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;
determining, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model;
for each pixel point to be rendered in the hair model, determining, according to a line-of-sight direction in world space, a line-of-sight vector corresponding to the pixel point in the target coordinate space;
for each pixel point, determining, according to the light source vector and the line-of-sight vector corresponding to the pixel point, a texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image; and
sampling from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel point and obtain a rendered highlight rendering image.
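As one illustrative reading of the steps above, the whole method can be sketched in plain Python. The rule combining the vertical components into an offset, the nearest-neighbour texture lookup, and the clamping to [0, 1] are all assumptions here; the disclosure does not fix these details.

```python
def render_highlight(pixels, light_vec, camera_pos, highlight_tex):
    """Sketch of the claimed hair-highlight pipeline (illustrative only).

    pixels: {pixel_id: (position_in_target_space, base_uv)}
    light_vec: light source vector already in the hair model's target space
    highlight_tex: 2-D list of color values (the highlight image)
    """
    rendered = {}
    height, width = len(highlight_tex), len(highlight_tex[0])
    for pid, (pos, (u, v)) in pixels.items():
        # Line-of-sight vector: normalize(camera_position - pixel_position).
        d = [c - p for c, p in zip(camera_pos, pos)]
        norm = sum(x * x for x in d) ** 0.5
        view = [x / norm for x in d]
        # Texture offset from the vertical (index 1) components; summing
        # the two components is an assumed combination rule.
        offset = light_vec[1] + view[1]
        # Shift the base V coordinate by the offset and clamp (assumed).
        v_new = min(max(v - offset, 0.0), 1.0)
        # Nearest-neighbour sample of the highlight texture (assumed).
        x = min(int(u * width), width - 1)
        y = min(int(v_new * height), height - 1)
        rendered[pid] = highlight_tex[y][x]
    return rendered
```

In a real shader this loop would run per-fragment on the GPU; the dictionary of pixels simply stands in for the rasterized fragments of the hair model.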
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, wherein the determining, for each pixel point and according to the light source vector and the line-of-sight vector corresponding to the pixel point, the texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image comprises:
determining a light source component of the light source vector in the vertical direction;
for each pixel point, determining a line-of-sight component, in the vertical direction, of the line-of-sight vector corresponding to the pixel point; and
determining, according to the light source component and the line-of-sight component corresponding to each pixel point, the texture offset corresponding to the pixel point.
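The disclosure leaves the exact combination of the two vertical components unspecified; a minimal sketch, assuming a simple sum of the components, could look like this:

```python
def texture_offset(light_vec, view_vec, vertical_axis=1):
    """Texture offset in the vertical (V) direction of the highlight image.

    Extracts the vertical component of the light source vector and of the
    per-pixel line-of-sight vector, then combines them. Summing the two
    components is an assumption; the disclosure only says the offset is
    determined "according to" both components.
    """
    light_component = light_vec[vertical_axis]
    view_component = view_vec[vertical_axis]
    return light_component + view_component
```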
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 1, wherein the determining, for each pixel point to be rendered in the hair model and according to the line-of-sight direction in world space, the line-of-sight vector corresponding to the pixel point in the target coordinate space comprises:
determining camera position coordinates, in the target coordinate space, of the camera position in world space; and
for each pixel point, normalizing the vector obtained by subtracting the position coordinates of the pixel point from the camera position coordinates, and taking the normalized vector as the line-of-sight vector corresponding to the pixel point.
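The per-pixel line-of-sight computation above is just normalize(camera_position - pixel_position); a self-contained sketch (coordinate values are illustrative):

```python
def view_vector(camera_pos, pixel_pos):
    """Line-of-sight vector in the target coordinate space:
    normalize(camera_position - pixel_position)."""
    d = [c - p for c, p in zip(camera_pos, pixel_pos)]
    length = sum(x * x for x in d) ** 0.5
    return [x / length for x in d]
```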
According to one or more embodiments of the present disclosure, Example 4 provides the method of Example 1, wherein the sampling from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel point and obtain the rendered highlight rendering image, comprises:
obtaining a basic texture coordinate value corresponding to each pixel point in the highlight image, wherein the basic texture coordinate value is the coordinate value corresponding to the case in which there is no offset between the light source vector and the line-of-sight vector;
determining, according to the texture offset corresponding to each pixel point and the basic texture coordinate value, a texture sampling coordinate value corresponding to the pixel point;
sampling, from the highlight image, a texture color value corresponding to the texture sampling coordinate value as the color value corresponding to the pixel point; and
rendering each pixel point on the basis of the color value corresponding to the pixel point, to obtain the highlight rendering image.
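A hedged sketch of the sampling step, assuming normalized UV coordinates, nearest-neighbour lookup, and clamping of the shifted V coordinate (none of which the disclosure mandates):

```python
def sample_highlight(highlight_tex, base_uv, offset):
    """Sample the highlight image at the base UV shifted by the texture
    offset along the vertical (V) axis.

    highlight_tex: 2-D list of color values, indexed [row][col]
    base_uv: (u, v) basic texture coordinate in [0, 1]
    offset: texture offset in the vertical direction
    """
    u, v = base_uv
    v = min(max(v - offset, 0.0), 1.0)  # updated V coordinate, clamped
    height, width = len(highlight_tex), len(highlight_tex[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return highlight_tex[y][x]
```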
According to one or more embodiments of the present disclosure, Example 5 provides the method of Example 4, wherein the determining, according to the texture offset corresponding to each pixel point and the basic texture coordinate value, the texture sampling coordinate value corresponding to the pixel point comprises:
determining a sub-coordinate value, in the vertical direction, of the basic texture coordinate value of each pixel point; and
for each pixel point, taking the coordinate value obtained by subtracting the texture offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as an updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 4, wherein the determining, according to the texture offset corresponding to each pixel point and the basic texture coordinate value, the texture sampling coordinate value corresponding to the pixel point comprises:
determining a sub-coordinate value, in the vertical direction, of the basic texture coordinate value of each pixel point;
for each pixel point, determining, according to the texture offset corresponding to the pixel point and a preset offset adjustment parameter, a target offset corresponding to the pixel point; and
for each pixel point, taking the coordinate value obtained by subtracting the target offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as an updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
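The disclosure does not say how the preset offset adjustment parameter enters the target offset; a linear scale-and-bias form is one plausible assumption:

```python
def texture_sampling_v(base_v, tex_offset, scale=1.0, bias=0.0):
    """Updated vertical sub-coordinate per Example 6 (illustrative).

    target_offset = tex_offset * scale + bias is an assumed form of the
    'preset offset adjustment parameter'; the updated V coordinate is the
    base sub-coordinate minus the target offset, as the text specifies.
    """
    target_offset = tex_offset * scale + bias
    return base_v - target_offset
```

With scale=1.0 and bias=0.0 this degenerates to the unadjusted subtraction of Example 5, which is why the two examples can share one code path.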
According to one or more embodiments of the present disclosure, Example 7 provides a highlight rendering apparatus, wherein the apparatus comprises:
an acquisition module configured to acquire a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;
a first processing module configured to determine, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model;
a second processing module configured to determine, for each pixel point to be rendered in the hair model and according to a line-of-sight direction in world space, a line-of-sight vector corresponding to the pixel point in the target coordinate space;
a determination module configured to determine, for each pixel point and according to the light source vector and the line-of-sight vector corresponding to the pixel point, a texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image; and
a rendering module configured to sample from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel point and obtain a rendered highlight rendering image.
According to one or more embodiments of the present disclosure, Example 8 provides the apparatus of Example 7, wherein the determination module comprises:
a first determination sub-module configured to determine a light source component of the light source vector in the vertical direction;
a second determination sub-module configured to determine, for each pixel point, a line-of-sight component, in the vertical direction, of the line-of-sight vector corresponding to the pixel point; and
a third determination sub-module configured to determine, according to the light source component and the line-of-sight component corresponding to each pixel point, the texture offset corresponding to the pixel point.
According to one or more embodiments of the present disclosure, Example 9 provides a computer-readable medium on which a computer program is stored, wherein, when the computer program is executed by a processing apparatus, the steps of the method of any one of Examples 1-6 are implemented.
According to one or more embodiments of the present disclosure, Example 10 provides an electronic device, comprising:
a storage apparatus on which a computer program is stored; and
a processing apparatus configured to execute the computer program in the storage apparatus to implement the steps of the method of any one of Examples 1-6.
According to one or more embodiments of the present disclosure, Example 11 provides a computer program product, comprising a computer program that, when executed by a processing apparatus, implements the steps of the method of any one of Examples 1-6.
According to one or more embodiments of the present disclosure, Example 12 provides a computer program that, when executed by a processing apparatus, implements the steps of the method of any one of Examples 1-6.
The above description is merely a description of preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims. With regard to the apparatuses in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method, and will not be elaborated here.

Claims (12)

  1. A highlight rendering method, wherein the method comprises:
    obtaining a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;
    determining, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model;
    for each pixel point to be rendered in the hair model, determining, according to a line-of-sight direction in world space, a line-of-sight vector corresponding to the pixel point in the target coordinate space;
    for each pixel point, determining, according to the light source vector and the line-of-sight vector corresponding to the pixel point, a texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image; and
    sampling from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel point and obtain a rendered highlight rendering image.
  2. The method according to claim 1, wherein the determining, for each pixel point and according to the light source vector and the line-of-sight vector corresponding to the pixel point, the texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image comprises:
    determining a light source component of the light source vector in the vertical direction;
    for each pixel point, determining a line-of-sight component, in the vertical direction, of the line-of-sight vector corresponding to the pixel point; and
    determining, according to the light source component and the line-of-sight component corresponding to each pixel point, the texture offset corresponding to the pixel point.
  3. The method according to claim 1 or 2, wherein the determining, for each pixel point to be rendered in the hair model and according to the line-of-sight direction in world space, the line-of-sight vector corresponding to the pixel point in the target coordinate space comprises:
    determining camera position coordinates, in the target coordinate space, of the camera position in world space; and
    for each pixel point, normalizing the vector obtained by subtracting the position coordinates of the pixel point from the camera position coordinates, and taking the normalized vector as the line-of-sight vector corresponding to the pixel point.
  4. The method according to any one of claims 1-3, wherein the sampling from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel point and obtain the rendered highlight rendering image, comprises:
    obtaining a basic texture coordinate value corresponding to each pixel point in the highlight image, wherein the basic texture coordinate value is the coordinate value corresponding to the case in which there is no offset between the light source vector and the line-of-sight vector;
    determining, according to the texture offset corresponding to each pixel point and the basic texture coordinate value, a texture sampling coordinate value corresponding to the pixel point;
    sampling, from the highlight image, a texture color value corresponding to the texture sampling coordinate value as the color value corresponding to the pixel point; and
    rendering each pixel point on the basis of the color value corresponding to the pixel point, to obtain the highlight rendering image.
  5. The method according to claim 4, wherein the determining, according to the texture offset corresponding to each pixel point and the basic texture coordinate value, the texture sampling coordinate value corresponding to the pixel point comprises:
    determining a sub-coordinate value, in the vertical direction, of the basic texture coordinate value of each pixel point; and
    for each pixel point, taking the coordinate value obtained by subtracting the texture offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as an updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
  6. The method according to claim 4, wherein the determining, according to the texture offset corresponding to each pixel point and the basic texture coordinate value, the texture sampling coordinate value corresponding to the pixel point comprises:
    determining a sub-coordinate value, in the vertical direction, of the basic texture coordinate value of each pixel point;
    for each pixel point, determining, according to the texture offset corresponding to the pixel point and a preset offset adjustment parameter, a target offset corresponding to the pixel point; and
    for each pixel point, taking the coordinate value obtained by subtracting the target offset corresponding to the pixel point from the sub-coordinate value corresponding to the pixel point as an updated coordinate value in the vertical direction, and updating the sub-coordinate value corresponding to the pixel point to the updated coordinate value, so as to obtain the texture sampling coordinate value corresponding to the pixel point.
  7. A highlight rendering apparatus, wherein the apparatus comprises:
    an acquisition module configured to acquire a highlight image to be rendered, wherein a highlight shape to be rendered is drawn in the highlight image;
    a first processing module configured to determine, according to a light source direction in world space, a light source vector in a target coordinate space corresponding to a hair model;
    a second processing module configured to determine, for each pixel point to be rendered in the hair model and according to a line-of-sight direction in world space, a line-of-sight vector corresponding to the pixel point in the target coordinate space;
    a determination module configured to determine, for each pixel point and according to the light source vector and the line-of-sight vector corresponding to the pixel point, a texture offset of the pixel point in the vertical direction corresponding to the horizontal direction in the highlight image; and
    a rendering module configured to sample from the highlight image according to the texture offset corresponding to each pixel point, so as to render the pixel point and obtain a rendered highlight rendering image.
  8. The apparatus according to claim 7, wherein the determination module comprises:
    a first determination sub-module configured to determine a light source component of the light source vector in the vertical direction;
    a second determination sub-module configured to determine, for each pixel point, a line-of-sight component, in the vertical direction, of the line-of-sight vector corresponding to the pixel point; and
    a third determination sub-module configured to determine, according to the light source component and the line-of-sight component corresponding to each pixel point, the texture offset corresponding to the pixel point.
  9. A computer-readable medium on which a computer program is stored, wherein, when the computer program is executed by a processing apparatus, the steps of the method of any one of claims 1-6 are implemented.
  10. An electronic device, comprising:
    a storage apparatus on which a computer program is stored; and
    a processing apparatus configured to execute the computer program in the storage apparatus to implement the steps of the method according to any one of claims 1-6.
  11. A computer program product, comprising a computer program that, when executed by a processing apparatus, implements the steps of the method according to any one of claims 1-6.
  12. A computer program, wherein, when the computer program is executed by a processing apparatus, the steps of the method according to any one of claims 1-6 are implemented.
PCT/CN2023/084540 2022-04-13 2023-03-28 Highlight rendering method and apparatus, medium, and electronic device WO2023197860A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210386625.0 2022-04-13
CN202210386625.0A CN114693860A (en) 2022-04-13 2022-04-13 Highlight rendering method, highlight rendering device, highlight rendering medium and electronic equipment

Publications (1)

Publication Number Publication Date
WO2023197860A1 true WO2023197860A1 (en) 2023-10-19

Family

ID=82142686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084540 WO2023197860A1 (en) 2022-04-13 2023-03-28 Highlight rendering method and apparatus, medium, and electronic device

Country Status (2)

Country Link
CN (1) CN114693860A (en)
WO (1) WO2023197860A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693860A (en) * 2022-04-13 2022-07-01 北京字跳网络技术有限公司 Highlight rendering method, highlight rendering device, highlight rendering medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470369A (en) * 2018-03-26 2018-08-31 城市生活(北京)资讯有限公司 A kind of water surface rendering intent and device
WO2019050808A1 (en) * 2017-09-08 2019-03-14 Pinscreen, Inc. Avatar digitization from a single image for real-time rendering
CN113379885A (en) * 2021-06-22 2021-09-10 网易(杭州)网络有限公司 Virtual hair processing method and device, readable storage medium and electronic equipment
CN113763526A (en) * 2020-06-01 2021-12-07 上海米哈游天命科技有限公司 Hair highlight rendering method, device, equipment and storage medium
CN114693860A (en) * 2022-04-13 2022-07-01 北京字跳网络技术有限公司 Highlight rendering method, highlight rendering device, highlight rendering medium and electronic equipment


Also Published As

Publication number Publication date
CN114693860A (en) 2022-07-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787506

Country of ref document: EP

Kind code of ref document: A1