WO2019085838A1 - Object rendering method and apparatus, storage medium, and electronic apparatus - Google Patents

Object rendering method and apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2019085838A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
vector
target object
application client
vector control
Prior art date
Application number
PCT/CN2018/112196
Other languages
English (en)
French (fr)
Inventor
吴东
马瑞
张文光
谢海天
彭亮
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2019085838A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/02: Non-photorealistic rendering
    • G06T15/50: Lighting effects
    • G06T15/503: Blending, e.g. for anti-aliasing
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G06T11/203: Drawing of straight lines or curves
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Definitions

  • the present application relates to the field of computers, and in particular, to an object rendering method and apparatus, a storage medium, and an electronic device.
  • Rendering methods usually fall into the following two types: 1) traditional rendering, which is generally used in game applications for the purpose of realistic simulation, so that the rendered object is closer to reality; and 2) cartoon rendering, which is generally used in game applications for the purpose of stylized, non-photorealistic rendering, so that the rendered object achieves an effect similar to a comic or cartoon.
  • In the cartoon rendering mode, the rendering consumption is too high for the mode to be applied directly to mobile terminal devices with low processing capability. That is to say, cartoon rendering on a mobile terminal device suffers from the problem of poor rendering quality.
  • The embodiments of the present application provide an object rendering method and apparatus, a storage medium, and an electronic device, to solve at least the technical problem of poor rendering quality in the object rendering methods provided by the related art.
  • According to one aspect, an object rendering method is provided, including: an application client running on a mobile terminal acquires a target object to be rendered, where the target object includes multiple vector control vertices for controlling rendering; the application client determines, by using the normal vectors of the multiple vector control vertices and through a predetermined function, a rendering strategy of the target object in each scene of the application client, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy of the target object in the scene; and the application client renders the target object in the scene of the application client according to the rendering strategy.
  • According to another aspect, an object rendering apparatus is provided, where the apparatus runs an application client and includes: a first acquiring unit, configured to acquire, through the application client running on the mobile terminal, a target object to be rendered, where the target object includes a plurality of vector control vertices for controlling rendering; a determining unit, configured to determine, by using the normal vectors of the plurality of vector control vertices and through a predetermined function, a rendering strategy of the target object in each scene of the application client, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy of the target object in the scene; and a rendering unit, configured to render the target object in the scene of the application client according to the rendering strategy.
  • Optionally, the determining unit includes: a third acquiring module, configured to sequentially acquire, through the predetermined function and by using the normal vectors of the plurality of vector control vertices, the boundary vertex vector corresponding to each vector control vertex; and a second determining module, configured to determine the stroke rendering strategy of the target object according to the boundary vertex vectors.
  • Optionally, the third acquiring module includes: a second determining submodule, configured to determine a rendering mode of the target object in the application client; a second acquiring submodule, configured to determine, according to the rendering mode, a calculation space for acquiring the boundary vertex vector, where, when the rendering mode is a first mode, the calculation space is determined to be a first space, and when the rendering mode is a second mode, the calculation space is determined to be a second space; and a third acquiring submodule, configured to acquire the boundary vertex vector corresponding to the vector control vertex through the predetermined function in the calculation space.
  • Optionally, the third acquiring submodule is configured to obtain the boundary vertex vector corresponding to the vector control vertex in the calculation space through the predetermined function in the following manner: acquiring the product of the normal vector of the vector control vertex and a normal control coefficient, and determining the boundary vertex vector according to the sum of the position vector of the vector control vertex and the product.
  • Optionally, the apparatus further includes: a fourth acquiring submodule, configured to acquire, after the boundary vertex vector corresponding to each vector control vertex is sequentially obtained through the predetermined function, a distance between the target object and a reference position in the scene of the application client; and a third determining submodule, configured to determine the normal control coefficient according to the distance, where, when the distance is greater than a first threshold, the normal control coefficient is increased, and when the distance is less than the first threshold, the normal control coefficient is decreased.
  • Optionally, the second determining module includes: a fourth determining submodule, configured to determine the stroke rendering strategy according to the normal control coefficient used to obtain the boundary vertex vector, where the stroke rendering strategy is used to indicate the display width of the stroke of the target object, and the stroke of the target object is determined according to the position vector of the vector control vertex and the boundary vertex vector.
  • According to another aspect, a storage medium storing a computer program is provided, where the computer program, when executed, performs the above method.
  • According to another aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor performs the above method by running the computer program.
  • In the embodiments of the present application, the application client running on the mobile terminal uses the normal vectors of the plurality of vector control vertices included in the acquired target object to be rendered and determines, according to a predetermined function operation, the rendering strategy of the target object in each scene of the application client, so that the target object is rendered in the application client according to the rendering strategy. This overcomes the problem in the related art that cartoon rendering cannot be applied on a mobile terminal and that the rendering quality of cartoon rendering of the target object on the mobile terminal cannot be guaranteed, thereby improving the rendering quality of cartoon rendering of the target object on the mobile terminal and improving the rendering effect.
  • FIG. 1 is a schematic diagram of an application environment of an optional object rendering method according to an embodiment of the present application
  • FIG. 2 is a flow chart of an alternative object rendering method in accordance with an embodiment of the present application.
  • FIG. 3 is a flow chart of another alternative object rendering method in accordance with an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an optional object rendering method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another optional object rendering method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of still another optional object rendering method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an optional object rendering apparatus according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a determining unit in an optional object rendering apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a determining unit in an optional object rendering apparatus according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an alternative electronic device in accordance with an embodiment of the present application.
  • In this embodiment, the object rendering method may be, but is not limited to being, applied to the application environment shown in FIG. 1, in which an application client of a predetermined application runs on the mobile terminal 102. The display interface of the application client may be, but is not limited to, the effect shown at the top of the mobile terminal 102 in FIG. 1, in which the acquired target object is previewed by the method in this embodiment.
  • The application client acquires a target object to be rendered that is displayed on the mobile terminal 102, where the target object includes a plurality of vector control vertices for controlling rendering; the application client determines, by using the normal vectors of the multiple vector control vertices and through a predetermined function, a rendering strategy of the target object in each scene of the application client, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy of the target object in the scene; then, the application client renders the target object in the scene of the application client according to the rendering strategy.
  • In this embodiment, after the application client running on the mobile terminal obtains the target object to be rendered, it uses the normal vectors of the multiple vector control vertices that are included in the target object for controlling rendering to determine, through a predetermined function, a rendering strategy of the target object in each scene of the application client, and renders the target object in the scene of the application client according to the rendering strategy, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy. That is to say, the application client running on the mobile terminal uses the normal vectors of the plurality of vector control vertices included in the acquired target object to be rendered and determines, according to a predetermined function operation, the rendering strategy of the target object in each scene of the application client, so that the target object is rendered in the application client according to the rendering strategy. This overcomes the inability of the related art to apply cartoon rendering on a mobile terminal, which cannot guarantee the rendering quality of cartoon rendering of the target object on the mobile terminal; it thus improves the rendering quality of cartoon rendering of the target object on the mobile terminal and improves the rendering effect.
  • the foregoing mobile terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and other mobile hardware devices that are required to render objects.
  • an object rendering method is provided. As shown in FIG. 2, the method includes:
  • The application client running on the mobile terminal acquires a target object to be rendered, where the target object includes multiple vector control vertices for controlling rendering.
  • The application client uses the normal vectors of the multiple vector control vertices to determine, through a predetermined function, a rendering strategy of the target object in each scene of the application client, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy of the target object in the scene.
  • The application client renders the target object in the scene of the application client according to the rendering strategy.
  • the foregoing object rendering method may be, but is not limited to, being applied to an application that needs to perform real-time rendering on a mobile terminal.
  • the foregoing application may include, but is not limited to, a game application.
  • the game application may include, but is not limited to, at least one of the following: Two Dimension (2D) game application, Three Dimension (3D) game application, Virtual Reality (VR) game application, Augmented Reality (AR) game application, Mixed Reality (MR) game application.
  • Through the above steps, the application client running on the mobile terminal determines, according to a predetermined function operation and by using the normal vectors of the plurality of vector control vertices included in the acquired target object to be rendered, the rendering strategy of the target object in each scene of the application client, so that the target object in the application client is rendered according to the rendering strategy. This overcomes the problem in the related art that cartoon rendering cannot be applied on a mobile terminal and that the rendering quality of cartoon rendering of the target object on the mobile terminal cannot be guaranteed, thereby improving the rendering quality of cartoon rendering of the target object on the mobile terminal and improving the rendering effect.
  • the target object to be rendered may include, but is not limited to, an object to be operated in the application client, and the object to be operated may include, but is not limited to, at least one of the following: a static object and a dynamic object.
  • the target object may include, but is not limited to, at least one of: trees, houses, and the like in a game application, weapon equipment in a game application, and the like, and a character model in a game application.
  • the above is only an example, and is not limited in this embodiment.
  • Optionally, the lighting rendering strategy in the foregoing rendering strategy may be, but is not limited to being, used to indicate the light and dark regions to be rendered in the target object. That is, the application client performs an operation on the plurality of vector control vertices according to a predetermined function to obtain whether each region in the target object belongs to a bright region or a dark region, so that different renderings are applied to the different light and dark regions indicated by the lighting rendering strategy.
  • the predetermined function for determining the illumination rendering strategy may include, but is not limited to, a spherical harmonic function, wherein the spherical harmonic function is an angular portion of a spherical coordinate system form solution of the Laplace equation.
  • the above spherical harmonic function is only an example, and is not limited in this embodiment.
  • Optionally, the stroke rendering strategy in the foregoing rendering strategy may be, but is not limited to being, used to indicate the display width of the stroke of the target object. That is, the application client performs an operation on the plurality of vector control vertices according to a predetermined function to obtain the display width (also referred to as the thickness level) of the stroke of the target object, so that the strokes of different parts of the target object are rendered with different display widths.
  • the predetermined function for determining the stroke rendering strategy may include, but is not limited to, superimposing a control vector on a position vector of the vector control vertex, and the control vector may be, but not limited to, determined according to a normal vector of the vector control vertex.
  • Optionally, the application client determining the lighting rendering strategy of the target object in each scene of the application client includes: the application client uses the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the global illumination brightness of each vector control vertex in the scene; the application client obtains the local illumination brightness of each vector control vertex; and the application client determines, according to the global illumination brightness and the local illumination brightness of each vector control vertex, the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object.
  • That is to say, the application client uses the normal vector of each vector control vertex to obtain its global illumination brightness, and further combines it with the local illumination brightness in a mixing operation to determine the light and dark areas in the region to be rendered of the target object and to render them. In this way, the global illumination brightness is introduced into the rendering process on the mobile terminal, so as to avoid an excessively hard boundary between the light and dark surfaces when the scene displayed by the application client is dark. In addition, by preserving only the normal component of the vertex normal vector in one direction, the gradient effect caused by the smoothing process during rendering is reduced. This ensures that the rendered target object is closer to the cartoon or comic effect and improves the cartoon rendering effect.
  • Optionally, the application client determining the stroke rendering strategy of the target object in each scene of the application client includes: the application client uses the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the boundary vertex vector corresponding to each vector control vertex, and determines the stroke rendering strategy of the target object according to the boundary vertex vectors.
  • That is to say, a shell slightly larger than the target object is rendered with only its back faces drawn; most of the shell is blocked by the body of the target object and only its edge is exposed, thereby providing the stroke rendering for the target object. The boundary vertices constituting the outer edge of the stroke may be, but are not limited to being, obtained by superimposing a control vector on the position vector of each vector control vertex, and the control vector may be, but is not limited to being, determined based on at least the normal vector of the vector control vertex. The control vector can, but is not limited to, control the degree of thickness (i.e., the display width) of the different stroke portions. For example, the control vector can be, but is not limited to, the product of the normal vector of the vector control vertex and a normal control coefficient. By adjusting the normal control coefficient, the control vector can be adjusted, thereby ensuring that the stroke of the target object can be rendered with different display widths for different parts, improving the rendering effect.
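  • As a concrete illustration of the shell approach described above, the following Python sketch (not part of the patent; numpy, the function name, and the winding-flip convention are illustrative assumptions) builds a slightly enlarged copy of a mesh whose faces are wound in reverse, so that only its back faces remain visible around the silhouette:

```python
import numpy as np

def build_stroke_shell(vertices, normals, faces, r_s):
    """Push every vertex outward along its (unit) normal by the normal control
    coefficient r_s and flip the face winding, producing the slightly larger
    shell whose visible back faces form the stroke."""
    vertices = np.asarray(vertices, dtype=float)
    normals = np.asarray(normals, dtype=float)
    shell_vertices = vertices + r_s * normals                 # enlarge along normals
    shell_faces = [tuple(reversed(face)) for face in faces]   # reversed winding -> back faces
    return shell_vertices, shell_faces
```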
  • the pixel display thickness of the above stroke can be maintained by the above-described normal control coefficient. Regardless of the proximity of the reference position in the application (such as the position of the camera used to determine the field of view), the size of the normal control coefficients can be adjusted such that the pixel display thickness of the displayed stroke remains the same.
  • Through this embodiment, the application client running on the mobile terminal determines, according to a predetermined function operation and by using the normal vectors of the plurality of vector control vertices included in the acquired target object to be rendered, the rendering strategy of the target object in each scene of the application client, so that the target object is rendered in the application client according to the rendering strategy. This overcomes the problem in the related art that cartoon rendering cannot be applied on a mobile terminal and that the rendering quality of cartoon rendering of the target object on the mobile terminal cannot be guaranteed, thereby improving the rendering quality of cartoon rendering of the target object on the mobile terminal and improving the rendering display effect.
  • As an optional solution, the application client using the normal vectors of the multiple vector control vertices to determine, through a predetermined function, the rendering strategy of the target object in each scene of the application client includes:
  • S1: The application client uses the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the global illumination brightness of each vector control vertex in the scene;
  • S2: The application client obtains the local illumination brightness of each vector control vertex;
  • S3: The application client determines, according to the global illumination brightness and the local illumination brightness of each vector control vertex, the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object.
  • Optionally, the lighting rendering strategy in the foregoing rendering strategy may be, but is not limited to being, used to indicate the light and dark regions to be rendered in the target object. That is, the application client performs an operation on the plurality of vector control vertices according to a predetermined function to obtain whether each region in the target object belongs to a bright region or a dark region, so that different renderings are applied to the different light and dark regions indicated by the lighting rendering strategy.
  • the predetermined function for determining the illumination rendering strategy may include, but is not limited to, a spherical harmonic function, wherein the spherical harmonic function is an angular portion of a spherical coordinate system form solution of the Laplace equation.
  • the above spherical harmonic function is only an example, and is not limited in this embodiment.
  • Optionally, in step S1, the application client using the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the global illumination brightness of each vector control vertex in the scene includes:
  • the application client performs the following steps on each vector control vertex in the target object:
  • the application client obtains a normal vector of the current vector control vertex
  • the application client retains a normal component of the normal vector of the current vector control vertex in one direction;
  • The application client passes the normal component of the current vector control vertex to the spherical harmonic function in the predetermined function to obtain the global illumination brightness of the current vector control vertex.
  • For example, as shown in FIG. 3, the application client obtains the normal vector of vector control vertex A and retains the normal component of the normal vector in one direction: the y normal component is retained, while the x normal component and the z normal component are set to zero. The y normal component is then passed to the global-illumination spherical harmonic function to obtain the global illumination brightness, where the global illumination brightness can be, but is not limited to being, obtained by calculating the brightness of the color value according to the global illumination color value. The global illumination brightness and the local illumination brightness are then mixed to obtain the color result of the light-and-dark-surface calculation, which is used to identify the light and dark areas in the region to be rendered of the target object. Furthermore, combined with the other color results in the scene, the final result is obtained, and the target object in the scene is rendered according to the final result.
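  • The following Python sketch illustrates this flow under stated assumptions: the global illumination is assumed to be stored as band-0/band-1 real spherical-harmonic coefficients, and the coefficient values, function names, and the simple channel average used to turn a color into a brightness are illustrative rather than taken from the patent:

```python
import numpy as np

# Hypothetical RGB spherical-harmonic coefficients of the scene's global
# illumination, one row per SH basis function (band 0 and band 1).
SH_COEFFS = np.array([
    [0.40, 0.38, 0.35],   # Y_0^0
    [0.05, 0.05, 0.06],   # Y_1^-1 (y)
    [0.02, 0.02, 0.02],   # Y_1^0  (z)
    [0.01, 0.01, 0.01],   # Y_1^1  (x)
])

def global_illumination_brightness(normal):
    """Keep only the y component of the vertex normal (x and z are zeroed),
    evaluate the global-illumination SH, and reduce the color to a brightness."""
    n = np.array([0.0, float(normal[1]), 0.0])
    basis = np.array([            # real SH basis up to band 1 at direction n
        0.282095,
        0.488603 * n[1],
        0.488603 * n[2],
        0.488603 * n[0],
    ])
    color = basis @ SH_COEFFS     # global illumination color value
    return float(color.mean())    # color value -> brightness

def light_dark_color_result(normal, local_brightness, base_color):
    """Mix global and local illumination brightness into the light/dark color result."""
    return np.asarray(base_color, dtype=float) * global_illumination_brightness(normal) * local_brightness
```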
  • For example, the wrist in the box at the bottom left of FIG. 4 has a noticeable gradient effect, while the wrist in the box at the bottom right of FIG. 4 avoids the unwanted gradient, so that the rendered target object is closer to the cartoon or comic effect and the cartoon rendering effect is improved.
  • the above-mentioned control method for removing the gradation effect is more effective when the scene is a darker scene. The above is only an example, and is not limited in this embodiment.
  • In FIG. 3, the solid-line boxes are the executed process steps, the dotted-line boxes show the data corresponding to each step, and the steps with a shaded background are optional steps that may also not be executed by default.
  • In this embodiment, the application client combines the global illumination brightness and the local illumination brightness of each vector control vertex to determine the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object, thereby accurately identifying the light and dark areas in the region to be rendered of the target object and avoiding unnecessary boundaries between the light and dark areas. The gradient effect can also be reduced, so that the rendered target object is closer to the cartoon or comic effect of cartoon rendering.
  • As an optional solution, the application client determining the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object includes:
  • the application client obtains color results of each vector control vertex, wherein the color result of each vector control vertex comprises: a product of a global illumination brightness of the vector control vertex and a local illumination brightness of the vector control vertex;
  • the application client identifies the light and dark area to be rendered in the target object according to the color result of each vector control vertex, and determines a lighting rendering strategy, wherein the lighting rendering strategy is used to indicate the light and dark area to be rendered in the identified target object.
  • That is to say, FIG. 3 shows a multiplication operation used to obtain, for each vector control vertex, the color result of the light-and-dark-surface calculation, which facilitates accurate recognition of the light and dark areas in the region to be rendered of the target object based on the color result. Combined with the other color results in the scene, further mixing operations can be performed to obtain the final result used for rendering the target object in the scene.
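  • The sketch below classifies vertices into light and dark regions from their mixed color results; the threshold value and names are illustrative assumptions, since the patent does not specify how the identification is performed:

```python
def classify_light_dark(color_results, threshold=0.5):
    """Label each vector control vertex's mixed color result as belonging to
    the light or the dark region of the area to be rendered."""
    return ["light" if result >= threshold else "dark" for result in color_results]

# Example: classify_light_dark([0.8, 0.3, 0.55]) -> ['light', 'dark', 'light']
```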
  • In this embodiment, the application client obtains the color result of each vector control vertex to identify the light and dark areas to be rendered in the target object and determines the lighting rendering strategy used to indicate the identified light and dark areas, thereby achieving high-quality rendering of the target object and improving the rendering effect.
  • As an optional solution, the application client using the normal vectors of the multiple vector control vertices to determine, through a predetermined function, the rendering strategy of the target object in each scene of the application client includes:
  • The application client uses the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the boundary vertex vector corresponding to each vector control vertex;
  • The application client determines the stroke rendering strategy of the target object according to the boundary vertex vectors.
  • determining, by the application client, the stroke rendering strategy of the target object according to the boundary vertex vector comprises: determining a stroke rendering strategy according to a normal control coefficient used to obtain the boundary vertex vector, wherein the stroke rendering The strategy is used to indicate the display width of the stroke of the target object, and the stroke of the target object is determined according to the position vector of the vector control vertex and the boundary vertex vector.
  • That is to say, the stroke rendering strategy in the foregoing rendering strategy may be, but is not limited to being, used to indicate the display width of the stroke of the target object: the application client performs an operation on the plurality of vector control vertices according to a predetermined function to obtain the display width (also referred to as the thickness level) of the stroke of the target object, so that the strokes of different parts of the target object are rendered with different display widths.
  • Optionally, the predetermined function for determining the stroke rendering strategy may include, but is not limited to, superimposing a control vector on the position vector of a vector control vertex, where the control vector may be, but is not limited to, the product of the normal vector of the vector control vertex and a normal control coefficient. For example, the product of the normal vector of the vector control vertex and the normal control coefficient is obtained, and the boundary vertex vector is determined according to the sum of the position vector of the vector control vertex and that product.
  • For example, the above calculation process can be, but is not limited to, the following formula (1):
  • P_s = P_m + r_s · n_m    (1)
  • where P_s represents the boundary vertex vector, P_m represents the position vector of the vector control vertex, n_m represents the normal vector of the vector control vertex, and r_s represents the normal control coefficient.
  • For example, as shown in FIG. 5, the solid circles are vector control vertices of the target object and the hatched circles are boundary vertices. Assuming that the normal control coefficient is a preset coefficient, the boundary vertex vectors can be calculated by formula (1), thereby obtaining the positions of the boundary vertices of the target object.
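  • Formula (1) can be evaluated directly per vertex. The numpy-based sketch below is one way to do so; the defensive normalization of the normal is an added assumption that the patent does not mention:

```python
import numpy as np

def boundary_vertex(p_m, n_m, r_s):
    """Formula (1): P_s = P_m + r_s * n_m, i.e. offset the vector control vertex
    along its normal by the normal control coefficient to get the boundary vertex."""
    p_m = np.asarray(p_m, dtype=float)
    n_m = np.asarray(n_m, dtype=float)
    n_m = n_m / np.linalg.norm(n_m)     # assume unit normals; normalize defensively
    return p_m + r_s * n_m

# Example: a vertex at the origin with normal +x and coefficient 0.02
# yields the boundary vertex (0.02, 0.0, 0.0).
print(boundary_vertex([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.02))
```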
  • In this embodiment, the application client acquires the boundary vertex vectors to determine the stroke rendering strategy used to indicate the display width of the stroke, thereby providing strokes of different display widths for different parts of the target object and making the rendered target object closer to the cartoon rendering effect.
  • As an optional solution, the application client using the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the boundary vertex vector corresponding to each vector control vertex includes:
  • The application client determines the rendering mode of the target object in the application client;
  • The application client determines, according to the rendering mode, the calculation space for acquiring the boundary vertex vectors, where, when the rendering mode is the first mode, the calculation space is determined to be the first space, and when the rendering mode is the second mode, the calculation space is determined to be the second space;
  • The application client acquires the boundary vertex vector corresponding to each vector control vertex through the predetermined function in the calculation space.
  • the object rendering process provided in this embodiment includes a model space, a view space (also referred to as a camera space), and a clip space.
  • the first space may be, but is not limited to, a corresponding model space, wherein the model space has the highest computational efficiency.
  • In the first space (the model space), the position vector v of the vector control vertex is input, the output boundary vertex vector o is calculated according to the above formula (1), and the boundary vertex vector is then converted to the clip space to obtain the final boundary vertex vector. In addition, the second space may be, but is not limited to, the corresponding clip space, whose calculation result is more accurate: the position vector v of the vector control vertex is input and first converted to the clip space, and the output boundary vertex vector o is then calculated according to the above formula (1).
  • Optionally, the foregoing rendering modes may include, but are not limited to: 1) an efficiency mode and a precision mode, where the first-space calculation may be used in the efficiency mode and the second-space calculation may be used in the precision mode; and 2) a far field-of-view mode and a near field-of-view mode, where the first-space calculation may be used in the far field-of-view mode and the second-space calculation may be used in the near field-of-view mode.
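  • The sketch below contrasts the two calculation spaces. It assumes a single 4×4 model-view-projection matrix and, for the clip-space path, an offset along the projected normal scaled by the w component so that the offset stays stable in screen space; these details are illustrative assumptions, since the patent only specifies in which space formula (1) is evaluated:

```python
import numpy as np

def outline_vertex_model_space(v, n, r_s, mvp):
    """Efficiency path (first space): apply formula (1) in model space,
    then convert the boundary vertex to clip space."""
    p = np.asarray(v, dtype=float) + r_s * np.asarray(n, dtype=float)
    return mvp @ np.append(p, 1.0)

def outline_vertex_clip_space(v, n, r_s, mvp):
    """Precision path (second space): convert the position to clip space first,
    then apply the normal offset there."""
    o = mvp @ np.append(np.asarray(v, dtype=float), 1.0)
    n_clip = (mvp @ np.append(np.asarray(n, dtype=float), 0.0))[:3]
    n_clip = n_clip / (np.linalg.norm(n_clip) + 1e-8)
    o[:3] += r_s * n_clip * o[3]    # scale by w to keep the screen-space width stable
    return o
```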
  • the application client uses different computing spaces according to different rendering modes to obtain boundary vertex vectors, thereby achieving the purpose of satisfying different rendering requirements and improving rendering flexibility.
  • the method further includes:
  • the application client obtains a distance between the target object and a reference position in a scenario of the application client.
  • the application client determines a normal control coefficient according to the distance, wherein, when the distance is greater than the first threshold, the normal control coefficient is increased; and when the distance is less than the first threshold, the normal control coefficient is decreased.
  • the reference position may be, but is not limited to, a position of a camera in an application.
  • the thickness of the stroke is not affected by the distance between the camera and the target object, thereby ensuring that the thickness of the stroke is kept constant.
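  • One simple way to realize this behaviour (an illustrative assumption; the patent only states that the coefficient is increased above the first threshold and decreased below it) is to scale the normal control coefficient linearly with the camera distance, which roughly cancels the 1/distance shrinkage of the projected stroke:

```python
def normal_control_coefficient(distance, base_coefficient, first_threshold):
    """Scale the normal control coefficient with the distance between the target
    object and the reference position (e.g. the camera position), so the stroke
    keeps a roughly constant pixel width: larger than the base value when the
    object is farther than the first threshold, smaller when it is nearer."""
    return base_coefficient * (distance / first_threshold)

# Example: with base 0.02 and threshold 10, distance 20 gives 0.04 (thicker in
# world units, same on screen) and distance 5 gives 0.01.
```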
  • the rendering effect of the stroke is as shown in Figure 6.
  • the application client ensures that the pixel thickness of the stroke is not affected by the change of the visual field distance through the normal control coefficient, thereby ensuring the accurate and true rendering effect.
  • an object rendering apparatus for implementing the object rendering method described above, wherein an application client is run in the device.
  • the device includes:
  • the first obtaining unit 702 is configured to acquire a target object to be rendered by using an application client running on the mobile terminal, where the target object includes a plurality of vector control vertices for controlling rendering;
  • The determining unit 704 is configured to determine, by using the normal vectors of the multiple vector control vertices and through a predetermined function, a rendering strategy of the target object in each scene of the application client, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy of the target object in the scene;
  • a rendering unit 706, configured to render the target object in the scene of the application client according to the rendering strategy.
  • the foregoing object rendering apparatus may be, but is not limited to, applied to an application that needs to perform real-time rendering on a mobile terminal.
  • the foregoing application may include, but is not limited to, a gaming application.
  • the game application may include, but is not limited to, at least one of the following: Two Dimension (2D) game application, Three Dimension (3D) game application, Virtual Reality (VR) game application, Augmented Reality (AR) game application, Mixed Reality (MR) game application.
  • Through the above apparatus, the application client running on the mobile terminal determines, according to a predetermined function operation and by using the normal vectors of the plurality of vector control vertices included in the acquired target object to be rendered, the rendering strategy of the target object in each scene of the application client, so that the target object in the application client is rendered according to the rendering strategy. This overcomes the problem in the related art that cartoon rendering cannot be applied on a mobile terminal and that the rendering quality of cartoon rendering of the target object on the mobile terminal cannot be guaranteed, thereby improving the rendering quality of cartoon rendering of the target object on the mobile terminal and improving the rendering effect.
  • the target object to be rendered may include, but is not limited to, an object to be operated in the application, and the object to be operated may include, but is not limited to, at least one of the following: a static object and a dynamic object.
  • the target object may include, but is not limited to, at least one of: trees, houses, and the like in a game application, weapon equipment in a game application, and the like, and a character model in a game application. The above is only an example, and is not limited in this embodiment.
  • the illumination rendering strategy in the foregoing rendering strategy may be, but is not limited to, indicating a light and dark region to be rendered by the target object, that is, using the plurality of vector control vertices according to a predetermined function, Obtain whether each area in the target object belongs to a bright area or a dark area, so as to implement different renderings for different light and dark areas indicated by the lighting rendering strategy.
  • the predetermined function for determining the illumination rendering strategy may include, but is not limited to, a spherical harmonic function, wherein the spherical harmonic function is an angular portion of a spherical coordinate system form solution of the Laplace equation.
  • the above spherical harmonic function is only an example, and is not limited in this embodiment.
  • Optionally, the stroke rendering strategy in the foregoing rendering strategy may be, but is not limited to being, used to indicate the display width of the stroke of the target object. That is, the plurality of vector control vertices are used to perform an operation according to a predetermined function to obtain the display width (also referred to as the thickness level) of the stroke of the target object, so that the strokes of different parts of the target object are rendered with different display widths.
  • the predetermined function for determining the stroke rendering strategy may include, but is not limited to, superimposing a control vector on a position vector of the vector control vertex, and the control vector may be, but not limited to, determined according to a normal vector of the vector control vertex.
  • Optionally, determining the lighting rendering strategy of the target object in each scene of the application client includes: using the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the global illumination brightness of each vector control vertex in the scene; obtaining the local illumination brightness of each vector control vertex; and determining, according to the global illumination brightness and the local illumination brightness of each vector control vertex, the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object.
  • the global illumination brightness of each vector control vertex is obtained by using a normal vector of the vector control vertex, and the local illumination brightness is further combined with the local illumination brightness to determine the area to be rendered in the target object. Light and dark areas and render. Thereby, the global illumination brightness is introduced by the rendering process in the mobile terminal, so as to avoid an excessive boundary between the light and dark surfaces in the case where the scene displayed by the application client is dark.
  • In addition, by preserving only the normal component of the vertex normal vector in one direction, the gradient effect caused by the smoothing process during rendering is reduced, thereby ensuring that the rendered target object is closer to the cartoon or comic effect and improving the cartoon rendering effect.
  • Optionally, determining the stroke rendering strategy of the target object in each scene of the application client includes: using the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the boundary vertex vector corresponding to each vector control vertex, and determining the stroke rendering strategy of the target object according to the boundary vertex vectors.
  • That is to say, a shell slightly larger than the target object is rendered with only its back faces drawn; most of the shell is blocked by the body of the target object and only its edge is exposed, thereby providing the stroke rendering for the target object. The boundary vertices constituting the outer edge of the stroke may be, but are not limited to being, obtained by superimposing a control vector on the position vector of each vector control vertex, and the control vector may be, but is not limited to being, determined based on at least the normal vector of the vector control vertex. The control vector can, but is not limited to, control the degree of thickness (i.e., the display width) of the different stroke portions. For example, the control vector can be, but is not limited to, the product of the normal vector of the vector control vertex and a normal control coefficient. By adjusting the normal control coefficient, the control vector can be adjusted, thereby ensuring that the stroke of the target object can be rendered with different display widths for different parts, improving the rendering effect.
  • the pixel display thickness of the above stroke can be maintained by the above-described normal control coefficient. Regardless of the proximity of the reference position in the application (such as the position of the camera used to determine the field of view), the size of the normal control coefficients can be adjusted such that the pixel display thickness of the displayed stroke remains the same.
  • Through this embodiment, the application client running on the mobile terminal determines, according to a predetermined function operation and by using the normal vectors of the plurality of vector control vertices included in the acquired target object to be rendered, the rendering strategy of the target object in each scene of the application client, so that the target object in the application client is rendered according to the rendering strategy. This overcomes the problem in the related art that cartoon rendering cannot be applied on a mobile terminal and that the rendering quality of cartoon rendering of the target object on the mobile terminal cannot be guaranteed, thereby improving the rendering quality of cartoon rendering of the target object on the mobile terminal and improving the rendering effect.
  • the determining unit 704 includes:
  • 1) The first obtaining module 802 is configured to use the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the global illumination brightness of each vector control vertex in the scene;
  • 2) The second obtaining module 804 is configured to obtain the local illumination brightness of each vector control vertex;
  • 3) The first determining module 806 is configured to determine, according to the global illumination brightness and the local illumination brightness of each vector control vertex, the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object.
  • the illumination rendering strategy in the foregoing rendering strategy may be, but is not limited to, indicating a light and dark region to be rendered by the target object, that is, using the plurality of vector control vertices according to a predetermined function, Obtain whether each area in the target object belongs to a bright area or a dark area, so as to implement different renderings for different light and dark areas indicated by the lighting rendering strategy.
  • the predetermined function for determining the illumination rendering strategy may include, but is not limited to, a spherical harmonic function, wherein the spherical harmonic function is an angular portion of a spherical coordinate system form solution of the Laplace equation.
  • the above spherical harmonic function is only an example, and is not limited in this embodiment.
  • the first obtaining module 802 includes:
  • A processing submodule, configured to perform the following steps on each vector control vertex in the target object: acquiring the normal vector of the current vector control vertex; retaining the normal component of the normal vector of the current vector control vertex in one direction; and passing the normal component of the current vector control vertex to the spherical harmonic function in the predetermined function to obtain the global illumination brightness of the current vector control vertex.
  • For example, as shown in FIG. 3, the y normal component is retained while the x normal component and the z normal component are set to zero. The y normal component is then passed to the global-illumination spherical harmonic function to obtain the global illumination brightness, where the global illumination brightness can be, but is not limited to being, obtained by calculating the brightness of the color value according to the global illumination color value. The global illumination brightness and the local illumination brightness are then mixed to obtain the color result of the light-and-dark-surface calculation, which is used to identify the light and dark areas in the region to be rendered of the target object. Furthermore, combined with the other color results in the scene, the final result is obtained, and the target object in the scene is rendered according to the final result.
  • For example, the wrist in the box at the bottom left of FIG. 4 has a noticeable gradient effect, while the wrist in the box at the bottom right of FIG. 4 avoids the unwanted gradient, so that the rendered target object is closer to the cartoon or comic effect and the cartoon rendering effect is improved.
  • the above-mentioned control method for removing the gradation effect is more effective when the scene is a darker scene. The above is only an example, and is not limited in this embodiment.
  • In FIG. 3, the solid-line boxes are the executed process steps, the dotted-line boxes show the data corresponding to each step, and the steps with a shaded background are optional steps that may also not be executed by default.
  • In this embodiment, the lighting rendering strategy used for rendering the light and dark areas to be rendered in the target object is determined, thereby accurately identifying the light and dark areas in the region to be rendered of the target object and avoiding unnecessary boundaries between the light and dark areas. The gradient effect can also be reduced, so that the rendered target object is closer to the cartoon or comic effect of cartoon rendering.
  • the first determining module 806 includes:
  • a first acquisition sub-module configured to obtain a color result of each vector control vertex, wherein the color result of each vector control vertex comprises: a product of a global illumination brightness of the vector control vertex and a local illumination brightness of the vector control vertex;
  • identifying a sub-module configured to identify a light-dark region to be rendered in the target object according to a color result of each vector control vertex
  • the first determining sub-module is configured to determine a lighting rendering strategy, wherein the lighting rendering strategy is used to indicate the light and dark regions to be rendered in the identified target object.
  • That is to say, FIG. 3 shows a multiplication operation used to obtain, for each vector control vertex, the color result of the light-and-dark-surface calculation, which facilitates accurate recognition of the light and dark areas in the region to be rendered of the target object based on the color result. Combined with the other color results in the scene, further mixing operations can be performed to obtain the final result used for rendering the target object in the scene.
  • In this embodiment, the light and dark areas to be rendered in the target object are identified, and the lighting rendering strategy used to indicate the identified light and dark areas is determined, thereby achieving high-quality rendering of the target object and improving the rendering effect.
  • the determining unit 704 includes:
  • The third obtaining module 902 is configured to use the normal vectors of the multiple vector control vertices to sequentially acquire, through a predetermined function, the boundary vertex vector corresponding to each vector control vertex;
  • the second determining module 904 is configured to determine a stroke rendering strategy of the target object according to the boundary vertex vector.
  • the second determining module 904 includes: a fourth determining submodule, configured to determine a stroke rendering strategy according to a normal control coefficient used to obtain a boundary vertex vector, where the stroke rendering strategy is used To indicate the display width of the stroke of the target object, the stroke of the target object is determined according to the position vector of the vector control vertex and the boundary vertex vector.
  • That is to say, the stroke rendering strategy in the foregoing rendering strategy may be, but is not limited to being, used to indicate the display width of the stroke of the target object: the plurality of vector control vertices are used to perform an operation according to a predetermined function to obtain the display width (also referred to as the thickness level) of the stroke of the target object, so that the strokes of different parts of the target object are rendered with different display widths.
  • Optionally, the predetermined function for determining the stroke rendering strategy may include, but is not limited to, superimposing a control vector on the position vector of a vector control vertex, where the control vector may be, but is not limited to, the product of the normal vector of the vector control vertex and a normal control coefficient. For example, the product of the normal vector of the vector control vertex and the normal control coefficient is obtained, and the boundary vertex vector is determined according to the sum of the position vector of the vector control vertex and that product.
  • For example, the above calculation process can be, but is not limited to, the following formula (2):
  • P_s = P_m + r_s · n_m    (2)
  • where P_s represents the boundary vertex vector, P_m represents the position vector of the vector control vertex, n_m represents the normal vector of the vector control vertex, and r_s represents the normal control coefficient.
  • For example, as shown in FIG. 5, the solid circles are vector control vertices of the target object and the hatched circles are boundary vertices. Assuming that the normal control coefficient is a preset coefficient, the boundary vertex vectors can be calculated by formula (2), thereby obtaining the positions of the boundary vertices of the target object.
  • In this embodiment, the stroke rendering strategy used to indicate the display width of the stroke is determined by acquiring the boundary vertex vectors, thereby providing strokes of different display widths for different parts of the target object, so that the rendered target object is closer to the cartoon rendering effect.
  • the third obtaining module 902 includes:
  • a second determining submodule configured to determine a rendering mode of the target object in the application client
  • a second obtaining submodule configured to determine a computing space for acquiring a boundary vertex vector according to a rendering mode, wherein, in a case where the rendering mode is the first mode, determining that the computing space is the first space; In the case of the second mode, the calculation space is determined to be the second space;
  • the third acquisition sub-module is configured to acquire a boundary vertex vector corresponding to the vector control vertex by a predetermined function in the calculation space.
  • the object rendering process provided in this embodiment includes a model space, a view space (also referred to as a camera space), and a clip space.
  • the first space may be, but is not limited to, a corresponding model space, wherein the model space has the highest computational efficiency.
  • In the first space (the model space), the position vector v of the vector control vertex is input, the output boundary vertex vector o is calculated according to the above formula (2), and the boundary vertex vector is then converted to the clip space to obtain the final boundary vertex vector. In addition, the second space may be, but is not limited to, the corresponding clip space, whose calculation result is more accurate: the position vector v of the vector control vertex is input and first converted to the clip space, and the output boundary vertex vector o is then calculated according to the above formula (2).
  • Optionally, the foregoing rendering modes may include, but are not limited to: 1) an efficiency mode and a precision mode, where the first-space calculation may be used in the efficiency mode and the second-space calculation may be used in the precision mode; and 2) a far field-of-view mode and a near field-of-view mode, where the first-space calculation may be used in the far field-of-view mode and the second-space calculation may be used in the near field-of-view mode.
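The sketch below illustrates the two computation orders described above (offset in model space and then convert, versus convert to clip space and then offset). The placeholder transform matrix, the homogeneous-coordinate handling, and all names are assumptions for illustration; a real pipeline would also transform the normal with the appropriate matrix rather than reuse the position transform.

```python
# Sketch only: first-space (model space) vs. second-space (clip space)
# evaluation of the boundary vertex.
import numpy as np

def offset_along_normal(p, n, r_s):
    return p + (n / np.linalg.norm(n)) * r_s   # P_s = P_m + n_m * r_s

def boundary_first_space(p_model, n_model, mvp, r_s=0.02):
    # cheapest: offset in model space, then convert the result to clip space
    p_s = offset_along_normal(p_model, n_model, r_s)
    return mvp @ np.append(p_s, 1.0)

def boundary_second_space(p_clip, n_clip, r_s=0.02):
    # more precise: the position was converted to clip space first,
    # and the offset is applied there
    return offset_along_normal(p_clip, n_clip, r_s)

mvp = np.eye(4)                                         # placeholder transform
p_m, n_m = np.array([1.0, 2.0, 0.5]), np.array([0.0, 1.0, 0.0])
print(boundary_first_space(p_m, n_m, mvp))
print(boundary_second_space(mvp @ np.append(p_m, 1.0), np.append(n_m, 0.0)))
```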
  • the fourth obtaining submodule is configured to acquire a distance between the target object and a reference position in the scene of the application client after sequentially acquiring the boundary vertex vectors corresponding to the respective vector control vertices by using a predetermined function;
  • the third determining submodule is configured to determine the normal control coefficient according to the distance, wherein, in a case where the distance is greater than a first threshold, the normal control coefficient is increased, and in a case where the distance is less than the first threshold, the normal control coefficient is decreased.
  • the reference position may be, but is not limited to, a position of a camera in an application.
  • by adjusting the normal control coefficient, the thickness of the stroke is not affected by the distance between the camera and the target object, thereby ensuring that the pixel thickness of the stroke is kept constant.
  • the rendering effect of the stroke is shown by the bold lines in FIG. 6.
  • in this way the normal control coefficient ensures that the pixel thickness of the stroke is unaffected by changes in viewing distance, keeping the rendering effect accurate and true; a minimal sketch of this adjustment follows.
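A minimal sketch of the distance-based adjustment follows; the base coefficient, the threshold value, and the linear scaling rule are illustrative assumptions (the application only states that the coefficient is increased above a first threshold and decreased below it).

```python
# Sketch only: scale the normal control coefficient with the distance between
# the target object and the reference position (camera) so the stroke keeps
# roughly the same pixel width on screen.
import numpy as np

def normal_control_coefficient(object_pos, camera_pos,
                               base_r_s=0.02, first_threshold=10.0):
    distance = np.linalg.norm(np.asarray(object_pos) - np.asarray(camera_pos))
    return base_r_s * (distance / first_threshold)

print(normal_control_coefficient([0, 0, 30], [0, 0, 0]))  # far  -> larger r_s
print(normal_control_coefficient([0, 0, 5],  [0, 0, 0]))  # near -> smaller r_s
```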
  • an electronic device for implementing the object rendering method described above is further provided.
  • the electronic device includes: one or more (only one is shown in the figure) processors 1002, a memory 1004, a user interface 1006, a display 1008, and a transmission device 1010.
  • the memory 1004 can be used to store software programs and modules, such as the program instructions/modules corresponding to the object rendering method and apparatus in the embodiments of the present application.
  • the processor 1002 runs the software programs and modules stored in the memory 1004 to execute various functional applications and data processing, that is, to implement the object rendering described above.
  • Memory 1004 can include high speed random access memory, and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • the memory 1004 may further include memory remotely located relative to the processor 1002, and the remote memory may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the display 1008 is configured to display a target object to be rendered by the application client according to the object rendering method, and the user interface 1006 is configured to send an operation instruction generated by an operation panel on the display to the processor 1002 for processing.
  • the above described transmission device 1010 is for receiving or transmitting data via a network.
  • Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 1010 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
  • the transmission device 1010 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 1004 is configured to store a target object to be rendered, a predetermined function, a rendering policy, and the like.
  • a storage medium is also provided.
  • the foregoing storage medium may be located in at least one of the plurality of network devices in the network.
  • the storage medium is arranged to store a computer program for performing the following steps:
  • the application client running on the mobile terminal acquires a target object to be rendered, where the target object includes multiple vector control vertices for controlling rendering.
  • the application client uses the normal vectors of the multiple vector control vertices to determine, by a predetermined function, a rendering strategy of the target object in each scene of the application client, where the rendering strategy includes a lighting rendering strategy and a stroke rendering strategy of the target object in the scene;
  • the application client renders the target object in the scene of the application client according to the rendering strategy; a minimal end-to-end sketch of these three steps follows.
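The following is a minimal end-to-end sketch of the three stored-program steps (acquire the target object, determine the rendering strategy from vertex normals, render). Every function name, data field, and constant here is an illustrative assumption and not the application's actual API.

```python
# Sketch only: S1 acquire vertices, S2 derive lighting + stroke strategies
# from their normal vectors, S3 render according to the strategy.
import numpy as np

def determine_rendering_strategy(vertices):
    normals = np.array([v["normal"] for v in vertices])
    lighting = {"light_mask": normals[:, 1] > 0.0}                # crude light/dark split
    stroke = {"boundary": [v["position"] + v["normal"] * 0.02     # P_s = P_m + n_m * r_s
                           for v in vertices]}
    return {"lighting": lighting, "stroke": stroke}

def render(vertices, strategy):
    # stand-in for drawing the target object in the scene
    print(len(vertices), "vertices,",
          int(strategy["lighting"]["light_mask"].sum()), "lit")

target_object = [  # S1: target object with vector control vertices
    {"position": np.array([0.0, 0.0, 0.0]), "normal": np.array([0.0, 1.0, 0.0])},
    {"position": np.array([1.0, 0.0, 0.0]), "normal": np.array([0.0, -1.0, 0.0])},
]
render(target_object, determine_rendering_strategy(target_object))   # S3 using S2
```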
  • the storage medium is further arranged to store a computer program for performing the following steps:
  • the application client uses multiple vectors to control the normal vector of the vertex, and sequentially obtains the global illumination brightness of each vector control vertex in the scene by a predetermined function;
  • the application client obtains local illumination brightness of each vector control vertex
  • the application client determines, according to the global illumination brightness and the local illumination brightness of each vector control vertex, the lighting rendering strategy used to render the light and dark areas of the target object; a minimal sketch of this mixing step follows.
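A minimal sketch of that lighting step follows, feeding a single normal component (y) into a tiny spherical-harmonics ambient term and mixing it multiplicatively with a local Lambert term before a two-tone light/dark decision. The SH coefficients, the threshold, and the tone values are illustrative assumptions.

```python
# Sketch only: per-vertex light/dark classification using only the y component
# of the vertex normal, as described for the global illumination step.
import numpy as np

def global_brightness_from_y(normal, sh_l0=1.5, sh_l1y=0.3):
    n = normal / np.linalg.norm(normal)
    ny = n[1]                                  # keep y; x and z are treated as zero
    # order-0 term plus the y term of the order-1 real spherical harmonics
    return sh_l0 * 0.282095 + sh_l1y * 0.488603 * ny

def shade_vertex(normal, light_dir, dark_tone=0.35, light_tone=1.0, threshold=0.5):
    local = max(np.dot(normal / np.linalg.norm(normal),
                       light_dir / np.linalg.norm(light_dir)), 0.0)
    mixed = global_brightness_from_y(normal) * local   # global x local mix
    return light_tone if mixed > threshold else dark_tone

print(shade_vertex(np.array([0.0, 1.0, 0.0]), np.array([0.3, 1.0, 0.2])))  # lit side
```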
  • the storage medium is further arranged to store a computer program for performing the following steps:
  • the application client uses multiple vectors to control the normal vector of the vertex, and sequentially acquires the boundary vertex vector corresponding to each vector control vertex by a predetermined function;
  • the application client determines a stroke rendering strategy of the target object according to the boundary vertex vector.
  • the foregoing storage medium may include, but is not limited to, a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a mobile hard disk, a magnetic disk, an optical disc, or other media capable of storing a computer program.
  • the integrated unit in the above embodiment if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above-described computer readable storage medium.
  • the technical solution of the present application, in essence or in the part contributing to the related art, or the whole or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in the storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods in the embodiments of the present application.
  • the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division, and there may be another division manner in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the application client running on the mobile terminal uses the normal vectors of the multiple vector control vertices included in the acquired target object to be rendered and, by operating according to a predetermined function, determines the rendering strategy of the target object in each scene of the application client, so as to render the target object in the application client according to that rendering strategy; this overcomes the problem in the related art that cartoon rendering cannot be applied on a mobile terminal and that the quality of cartoon rendering of the target object on the mobile terminal cannot be guaranteed, thereby improving the rendering quality of cartoon rendering of the target object on the mobile terminal and improving the display effect of the rendering.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

一种对象渲染方法和装置、存储介质及电子装置。其中,该方法包括:移动终端上运行的应用客户端获取待渲染的目标对象,其中,目标对象中包括多个用于控制渲染的矢量控制顶点(S202);应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略,其中,渲染策略包括目标对象在场景中的光照渲染策略和描边渲染策略(S204);应用客户端按照渲染策略在应用客户端的场景中渲染目标对象(S206)。所述方法解决了相关技术提供的对象渲染方法所存在的渲染质量较差的技术问题。

Description

对象渲染方法和装置、存储介质及电子装置
本申请要求于2017年11月3日提交中国专利局、优先权号为2017110811961、申请名称为“对象渲染方法和装置、存储介质及电子装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,具体而言,涉及一种对象渲染方法和装置、存储介质及电子装置。
背景技术
如今,终端设备常常会通过渲染的方式,来实现在终端设备屏幕上显示游戏应用里的特定画面的目的。其中,渲染方式通常包括以下两种:1)传统渲染。往往是在游戏应用中以真实地模拟为目的所进行的渲染,以使渲染后的对象更加贴近现实;2)卡通渲染。通常是在游戏应用中以去真实感地模拟为目的所进行的渲染,以使渲染后的对象与漫画或卡通达到形似的效果。
然而,目前针对卡通渲染方式,由于渲染消耗过高,因而还难以直接应用于处理能力较低的移动终端设备上。也就是说,在移动终端设备上进行卡通渲染存在渲染质量较差的问题。
针对上述的问题,目前尚未提出有效的解决方案。
发明内容
本申请实施例提供一种对象渲染方法和装置、存储介质及电子装置,以至少解决相关技术提供的对象渲染方法所存在的渲染质量较差的技术问题。
根据本申请实施例的一个方面,提供了一种对象渲染方法,包括:移动终端上运行的应用客户端获取待渲染的目标对象,其中,上述目标对象中包括多个用于控制渲染的矢量控制顶点;上述应用客户端利用多个上述矢量控制顶点的法线矢量,通过预定函数确定上述目标对象在上述应用客户端的每个场景中的渲染策略,其中,上述渲染策略包括上述目标对象在上述场景中的光照渲染策略和描边渲染策略;上述应用客户端按照上述渲染策略在上述应用客户端的上述场景中渲染上述目标对象。
根据本申请实施例的另一方面,还提供了一种对象渲染装置,上述装置中运行有应用客户端,上述装置包括:第一获取单元,设置为通过移动终端上运行的应用客户端获取待渲染的目标对象,其中,上述目标对象中包括多个用于控制渲染的矢量控制顶点;确定单元,设置为利用多个上述矢量控制顶点的法线矢量,通过预定函数确定上述目标对象在上述应用客户端的每个场景中的渲染策略,其中,上述渲染策略包括上述目标对象在上述场景中的光照渲染策略和描边渲染策略;渲染单元,设置为按照上述渲染策略在上述应用客户端的上述场景中渲染上述目标对象。
作为一种可选的示例,在本实施例中,上述确定单元包括:第三获取模块,设置为利用多个上述矢量控制顶点的法线矢量,通过上述预定函数依次获取各个上述矢量控制顶点对应的边界顶点矢量;第二确定模块,设置为根据上述边界顶点矢量确定上述目标对象的上述描边渲染策略。
作为一种可选的示例,在本实施例中,上述第三获取模块包括:第二确定子模块,设置为确定上述目标对象在上述应用客户端的渲染模式;第二获取子模块,设置为根据上述渲染模式确定用于获取上述边界顶点矢量的计算空间,其中,在上述渲染模式为第一模式的情况下,确定上述计算空间为第一空间;在上述渲染模式为第二模式的情况下,确定上述计算空间为第二空间;第三获取子模块,设置为在上述计算空间通过上述预定函数获取上述矢量控制顶点对应的上述边界顶点矢量。
作为一种可选的示例,在本实施例中,上述第三获取子模块通过以下 步骤实现在上述计算空间通过上述预定函数获取上述矢量控制顶点对应的上述边界顶点矢量:获取上述矢量控制顶点的法线矢量与法线控制系数的乘积;根据上述矢量控制顶点的位置矢量及上述乘积二者之和确定上述边界顶点矢量。
作为一种可选的示例,在本实施例中,上述装置还包括:第四获取子模块,设置为在上述通过上述预定函数依次获取各个上述矢量控制顶点对应的边界顶点矢量之后,获取上述目标对象与上述应用客户端的上述场景中的参考位置之间的距离;第三确定子模块,设置为根据上述距离确定上述法线控制系数,其中,在上述距离大于第一阈值的情况下,调整增大上述法线控制系数;在上述距离小于上述第一阈值的情况下,调整减小上述法线控制系数。
作为一种可选的示例,在本实施例中,上述第二确定模块包括:第四确定子模块,设置为根据用于获取上述边界顶点矢量的上述法线控制系数确定上述描边渲染策略,其中,上述描边渲染策略用于指示上述目标对象的描边的显示宽度,上述目标对象的上述描边根据上述矢量控制顶点的位置矢量与上述边界顶点矢量确定得到。
根据本申请实施例的又一方面,还提供了一种存储介质,上述存储介质存储有计算机程序,其中,上述计算机程序运行时执行上述方法。
根据本申请实施例的又一方面,还提供了一种电子装置,包括存储器、处理器及存储在上述存储器上并可在上述处理器上运行的计算机程序,上述处理器通过运行上述计算机程序执行上述方法。
在本申请实施例中,在移动终端上运行的应用客户端,应用客户端利用获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量,按照预定函数运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲染的渲染质量的问题,进而实现提高移动终端上目标对 象进行卡通渲染的渲染质量,改善渲染的显示效果。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的一种可选的对象渲染方法的应用环境示意图;
图2是根据本申请实施例的一种可选的对象渲染方法的流程图;
图3是根据本申请实施例的另一种可选的对象渲染方法的流程图;
图4是根据本申请实施例的一种可选的对象渲染方法的示意图;
图5是根据本申请实施例的另一种可选的对象渲染方法的示意图;
图6是根据本申请实施例的又一种可选的对象渲染方法的示意图;
图7是根据本申请实施例的一种可选的对象渲染装置的示意图;
图8是根据本申请实施例的一种可选的对象渲染装置中确定单元的示意图;
图9是根据本申请实施例的一种可选的对象渲染装置中确定单元的示意图;
图10是根据本申请实施例的一种可选的电子装置的示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动 前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
在本申请实施例中,提供了一种上述对象渲染方法的实施例。作为一种可选的实施方式,该对象渲染方法可以但不限于应用于如图1所示的应用环境中,在移动终端102上运行有预定应用的应用客户端,该应用客户端的显示界面可以但不限于如图1中移动终端102上方所示效果,实现通过本实施例中的方法预览获取到的目标对象。该应用客户端获取在移动终端102所要显示的待渲染的目标对象,其中,该目标对象中包括多个用于控制渲染的矢量控制顶点;应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略,其中,渲染策略包括目标对象在场景中的光照渲染策略和描边渲染策略;然后,应用客户端按照渲染策略在应用客户端的场景中渲染目标对象。
在本实施例中,在应用客户端通过移动终端上运行的应用客户端获取待渲染的目标对象后,应用客户端利用目标对象中包括的多个用于控制渲染的矢量控制顶点的法线矢量,通过预定函数来确定目标对象在上述应用客户端的每个场景中的渲染策略,并在移动设备按照上述渲染策略在上述应用客户端的场景中渲染上述目标对象,其中,上述渲染策略包括光照渲染策略和描边渲染策略。也就是说,在移动终端上运行的应用客户端,利用获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量,按照预定函数运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服 相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲染的渲染质量的问题,进而提高了对移动终端上目标对象进行卡通渲染的渲染质量,改善了渲染的显示效果。
可选地,在本实施例中,上述移动终端可以包括但不限于以下至少之一:手机、平板电脑、笔记本电脑及其他需要用于渲染对象的移动硬件设备。上述只是一种示例,本实施例对此不做任何限定。
根据本申请实施例,提供了一种对象渲染方法,如图2所示,该方法包括:
S202,应用客户端通过移动终端上运行的应用客户端获取待渲染的目标对象,其中,目标对象中包括多个用于控制渲染的矢量控制顶点;
S204,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略,其中,渲染策略包括目标对象在场景中的光照渲染策略和描边渲染策略;
S206,应用客户端按照渲染策略在应用客户端的场景中渲染目标对象。
可选地,在本实施例中,上述对象渲染方法可以但不限于应用于需要在移动终端上进行实时渲染的应用中,例如,上述应用可以包括但不限于游戏应用。其中,该游戏应用可以包括但不限于以下至少之一:二维(Two Dimension,简称2D)游戏应用、三维(Three Dimension,简称3D)游戏应用、虚拟现实(Virtual Reality,简称VR)游戏应用、增强现实(Augmented Reality,简称AR)游戏应用、混合现实(Mixed Reality,简称MR)游戏应用。以上只是一种示例,本实施例对此不作任何限定。
需要说明的是,在移动终端上运行的应用客户端,利用获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量,按照预定函数运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲 染的渲染质量的问题,进而实现提高移动终端上目标对象进行卡通渲染的渲染质量,改善渲染的显示效果。
可选地,在本实施例中,上述待渲染的目标对象可以包括但不限于应用客户端中待操作的对象,如待操作的对象可以包括但不限于以下至少之一:静态对象和动态对象。例如,以游戏应用为例,上述目标对象可以包括但不限于以下至少之一:游戏应用中的树木、房屋等,游戏应用中的武器装备等,游戏应用中的人物模型。上述仅是一种示例,本实施例中对此不做任何限定。
可选地,在本实施例中,上述渲染策略中的光照渲染策略可以但不限于用于指示目标对象所要渲染的明暗区域,也就是说,应用客户端利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象中各个区域所属为明区域还是暗区域,以实现针对光照渲染策略所指示的不同明暗区域,分别进行渲染。其中,用于确定上述光照渲染策略的预定函数可以包括但不限于球谐函数,其中,该球谐函数是拉普拉斯方程的球坐标系形式解的角度部分。上述球谐函数仅是一种示例,本实施例中对此不做任何限定。
可选地,在本实施例中,上述渲染策略中的描边渲染策略可以但不限于用于指示目标对象的描边的显示宽度,也就是说,应用客户端利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象的描边的显示宽度(也可称作粗细程度),以实现针对目标对象的不同部分按照不同的显示宽度渲染描边。其中,用于确定上述描边渲染策略的预定函数可以包括但不限于在矢量控制顶点的位置矢量上叠加控制矢量,该控制矢量可以但不限于根据矢量控制顶点的法线矢量确定。
可选地,在本实施例中,应用客户端确定目标对象在应用客户端的每个场景中的光照渲染策略包括:应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点在场景中的全局光照亮度;获取各个矢量控制顶点的局部光照亮度;应用客户端根据各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区 域进行渲染所使用的光照渲染策略。
需要说明的是,在本实施例中,应用客户端利用矢量控制顶点的法线矢量获取每个矢量控制顶点的全局光照亮度,进一步结合局部光照亮度进行混合运算,以确定目标对象中所要渲染的区域中的明暗区域,并进行渲染。从而实现通过在移动终端的渲染过程引入全局光照亮度,以避免在应用客户端所显示的场景较暗的情况下,出现多余的明暗面交界。
此外,在本实施例中,应用客户端通过预定函数进行计算的过程中,通过保留矢量控制顶点的法线矢量在一个方向的法线分量,将达到降低渲染时平滑处理带来的渐变效果,从而保证渲染出的目标对象更加贴近卡通或漫画效果,改善卡通渲染效果。
可选地,在本实施例中,应用客户端确定目标对象在应用客户端的每个场景中的渲染策略包括:应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量;根据边界顶点矢量确定目标对象的描边渲染策略。
需要说明的是,在本实施例中,渲染一个比目标对象的本体大,且只有背面被渲染的外壳,并使得该外壳的大部分被本体挡住,只露出边缘,从而达到为目标对象提供渲染描边的效果。其中,构成上述描边的外沿的边界顶点可以但不限于通过在矢量控制顶点的位置矢量上叠加控制矢量得到,该控制矢量可以但不限于至少根据矢量控制顶点的法线矢量确定。
此外,在本实施例中,通过上述控制矢量可以但不限于控制不同描边部位的粗细程度(即显示宽度)。其中,该控制矢量可以但不限于为矢量控制顶点的法线矢量和法线控制系数的乘积。通过调整上述法线控制系数可以达到调整控制矢量的目的,进而保证上述目标对象的描边可以根据不同部位渲染出不同的显示宽度,以改善渲染效果。此外,通过上述法线控制系数还可以保持上述描边的像素显示厚度。如不管应用中的参考位置(如用于确定显示视野的摄像机的位置)的远近,可以通过调整法线控制系数的大小,以使得所显示的描边的像素显示厚度保持一致。
需要说明的是,在本实施例中,还可以但不限于通过对上述外壳进行前后偏移,如调整外壳的边界顶点矢量的偏移,以达到剔除目标对象中所显示的多余的描边。
通过本申请提供的实施例,通过移动终端上运行的应用客户端利用获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量,按照预定函数运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲染的渲染质量的问题,进而实现提高移动终端上目标对象进行卡通渲染的渲染质量,改善渲染的显示效果。
作为一种可选的方案,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略包括:
S1,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点在场景中的全局光照亮度;
S2,应用客户端获取各个矢量控制顶点的局部光照亮度;
S3,应用客户端根据各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略。
可选地,在本实施例中,上述渲染策略中的光照渲染策略可以但不限于用于指示目标对象所要渲染的明暗区域,也就是说,应用客户端利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象中各个区域所属为明区域还是暗区域,以实现针对光照渲染策略所指示的不同明暗区域,分别进行渲染。其中,用于确定上述光照渲染策略的预定函数可以包括但不限于球谐函数,其中,该球谐函数是拉普拉斯方程的球坐标系形式解的角度部分。上述球谐函数仅是一种示例,本实施例中对此不做任何限定。
可选地,在本实施例中,步骤S1,应用客户端利用多个矢量控制顶 点的法线矢量,通过预定函数依次获取各个矢量控制顶点在场景中的全局光照亮度包括:
S12,应用客户端对目标对象中的每一个矢量控制顶点执行以下步骤:
S12-1,应用客户端获取当前矢量控制顶点的法线矢量;
S12-2,应用客户端保留当前矢量控制顶点的法线矢量在一个方向的法线分量;
S12-3,应用客户端将当前矢量控制顶点的法线分量传入预定函数中的球谐函数,得到当前矢量控制顶点的全局光照亮度。
具体结合图3所示示例进行说明,以一个矢量控制顶点A为例,应用客户端获取矢量控制顶点A的法线矢量,保留该法线矢量在一个方向的法线分量,例如,如图3所示保留y法线分量,将x法线分量和z法线分量置零。然后,将上述y法线分量传入全局光照球谐函数,以获取全局光照亮度,其中,上述全局光照亮度可以但不限于根据全局光照颜色值计算颜色值亮度得到;将上述全局光照亮度与局部光照亮度进行混合运算,以得到明暗面计算的颜色结果,用于识别目标对象所要渲染的区域中的明暗区域。再者,结合场景中的其他颜色结果,来获取最终结果,然后根据得到的最终结果对场景中的目标对象进行渲染。
此外,在本实施例中,通过保留一个方向的法线分量,将达到降低渲染时平滑处理带来的渐变效果。例如,如图4所示,图4左侧下方的方框内的手腕处有明显的渐变效果,而图4右侧下方的方框内的手腕处则避免了多余的渐变效果,从而使得渲染出的目标对象更加贴近卡通或漫画效果,改善卡通渲染效果。其中,需要说明那个的是,上述去除渐变效果的控制方式,在场景为较暗场景下效果更为明显。上述仅是一种示例,本实施例中对此不做任何限定。
需要说明的是,图3所示实线框内为所执行的流程步骤,虚线框内所示为该步骤对应得到的数据,此外,有阴影背景的为可选的执行步骤,也 可缺省不执行。
通过本申请提供的实施例,通过应用客户端结合各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略,从而实现准确识别出目标对象中所要渲染的区域中的明暗区域,以避免不必要的明暗区域的边界,通过保留单独的法线分量,还可以减少渐变效果,使得渲染后的目标对象更加贴近卡通渲染后的卡通或漫画效果。
作为一种可选的方案,应用客户端确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略包括:
S1,应用客户端获取各个矢量控制顶点的颜色结果,其中,每个矢量控制顶点的颜色结果包括:矢量控制顶点的全局光照亮度与矢量控制顶点的局部光照亮度的乘积;
S2,应用客户端根据各个矢量控制顶点的颜色结果识别目标对象中所要渲染的明暗区域,并确定光照渲染策略,其中,光照渲染策略用于指示识别出的目标对象中所要渲染的明暗区域。
具体的仍以图3所示为例进行说明,对矢量控制顶点的全局光照亮度与矢量控制顶点的局部光照亮度进行混合运算,例如,图3所示为乘法运算,以得到该矢量控制顶点进行明暗面计算的颜色结果,从而便于根据该颜色结果准确识别出目标对象所要渲染的区域中的明暗区域。进一步结合场景中其他颜色结果,可以进一步混合运算,得到用于对场景中的目标对象进行渲染的最终结果。
通过本申请提供的实施例,通过应用客户端获取各个矢量控制顶点的颜色结果,以识别目标对象中所要渲染的明暗区域,并确定用于指示识别出的目标对象中所要渲染的明暗区域的光照渲染策略,从而实现对目标对象进行高质量的渲染,改善渲染效果。
作为一种可选的方案,应用客户端利用多个矢量控制顶点的法线矢量, 通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略包括:
S1,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量;
S2,应用客户端根据边界顶点矢量确定目标对象的描边渲染策略。
可选地,在本实施例中,应用客户端根据边界顶点矢量确定目标对象的描边渲染策略包括:根据用于获取边界顶点矢量的法线控制系数确定描边渲染策略,其中,描边渲染策略用于指示目标对象的描边的显示宽度,目标对象的描边根据矢量控制顶点的位置矢量与边界顶点矢量确定得到。
可选地,在本实施例中,上述渲染策略中的描边渲染策略可以但不限于用于指示目标对象的描边的显示宽度。也就是说,应用客户端利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象的描边的显示宽度(也可称作粗细程度),以实现针对目标对象的不同部位按照不同的显示宽度渲染描边。其中,用于确定上述描边渲染策略的预定函数可以包括但不限于用于在矢量控制顶点的位置矢量上叠加控制矢量,该控制矢量可以但不限于为矢量控制顶点的法线矢量与法线控制系数的乘积。如获取矢量控制顶点的法线矢量与法线控制系数的乘积;根据矢量控制顶点的位置矢量及乘积二者之和确定边界顶点矢量。
其中,上述计算过程可以但不限于如下:
P s=P m+n m×r s (1)
其中,P s表示边界顶点矢量,P m表示矢量控制顶点的位置矢量,n m表示矢量控制顶点的法线矢量,r s表示法线控制系数。
例如,具体结合图5所示的描边效果进行说明,图5所示实心圆为目标对象中的矢量控制顶点,图5所示网线阴影圆为边界顶点,假设法线控制系数为预设系数。可选地,将矢量控制顶点的位置矢量和法线矢量代入上述公式(1),则可计算得出边界顶点矢量,从而得到目标对象中边界顶点的位置。
需要说明的是,由于边界顶点构成的外壳比目标对象的本体大,且大部分被目标对象的本体覆盖,因而将得到本实施例中提供的目标对象的描边。此外,在本实施例中,通过上述公式(1)可以针对不同部位,得到不同显示宽度(也可称作粗细程度)的描边,如图5所示黑色填充区域。从而使得描边更加精确。
通过本申请提供的实施例,通过应用客户端获取边界顶点矢量,以确定用于指示描边的显示宽度的描边渲染策略,从而实现针对目标对象的不同部位提供不同的显示宽度的描边,使得渲染后的目标对象更加贴近卡通渲染效果。
作为一种可选的方案,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量包括:
S1,应用客户端确定目标对象在应用客户端的渲染模式;
S2,应用客户端根据渲染模式确定用于获取边界顶点矢量的计算空间,其中,在渲染模式为第一模式的情况下,确定计算空间为第一空间;在渲染模式为第二模式的情况下,确定计算空间为第二空间;
S3,应用客户端在计算空间通过预定函数获取矢量控制顶点对应的边界顶点矢量。
需要说明的是,在本实施例中提供的对象渲染过程中包括了模型空间(model space)、观察空间(view space,也称为摄像机空间camera space)、裁剪空间(clip space)。第一空间可以但不限于对应模型空间,其中,模型空间的运算效率最高。例如,输入矢量控制顶点的位置矢量v,通过按照上述公式(1)计算输出边界顶点矢量,然后再将边界顶点矢量转换到裁剪空间,得到最终的边界顶点矢量o;此外,第二空间可以但不限于对应裁剪空间,其中,裁剪空间的运算结果更加精准。例如,输入矢量控制顶点的位置矢量v,先将上述矢量控制顶点的位置矢量v转换到裁剪空间,再按照上述(1)计算输出边界顶点矢量o。
需要说明的是,上述渲染模式可以但不限于包括:1)效率模式和精准模式,其中,在效率模式下可采用第一空间计算,而在精准模式下可采用第二空间计算;2)视野远模式和视野近模式,其中,在视野远模式下可采用第一空间计算,而在视野近模式下可采用第二空间计算。上述仅是一种示例,本实施例中对此不做任何限定
通过本申请提供的实施例,应用客户端根据不同渲染模式采用不同计算空间来获取边界顶点矢量,从而达到满足不同渲染需求的目的,提高渲染的灵活性。
作为一种可选的方案,在应用客户端通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量之后,还包括:
S1,应用客户端获取目标对象与应用客户端的场景中的参考位置之间的距离;
S2,应用客户端根据距离确定法线控制系数,其中,在距离大于第一阈值的情况下,调整增大法线控制系数;在距离小于第一阈值的情况下,调整减小法线控制系数。
可选地,在本实施例中,上述参考位子可以但不限于为应用中摄像机(camera)的位置。通过调整法线控制系数,以实现描边粗细程度不受摄像机(camera)与目标对象之间的距离的影响,从而能够确保描边的粗细像素大小保持不变。例如,描边的渲染效果如图6所示加粗线条。
通过本申请提供的实施例,应用客户端通过法线控制系数保证描边的像素厚度不受视野距离变化影响,保证渲染效果的准确真实。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一方面,还提供了一种用于实施上述对象渲染方法的对象渲染装置,装置中运行有应用客户端。如图7示,该装置包括:
1)第一获取单元702,设置为通过移动终端上运行的应用客户端获取待渲染的目标对象,其中,目标对象中包括多个用于控制渲染的矢量控制顶点;
2)确定单元704,设置为利用多个矢量控制顶点的法线矢量,通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略,其中,渲染策略包括目标对象在场景中的光照渲染策略和描边渲染策略;
3)渲染单元706,设置为按照渲染策略在应用客户端的场景中渲染目标对象。
可选地,在本实施例中,上述对象渲染装置可以但不限于应用于需要在移动终端上进行实时渲染的应用中,例如,上述应用可以包括但不限于游戏应用。其中,该游戏应用可以包括但不限于以下至少之一:二维(Two Dimension,简称2D)游戏应用、三维(Three Dimension,简称3D)游戏应用、虚拟现实(Virtual Reality,简称VR)游戏应用、增强现实(Augmented Reality,简称AR)游戏应用、混合现实(Mixed Reality,简称MR)游戏应用。以上只是一种示例,本实施例对此不作任何限定。
需要说明的是,在移动终端上运行的应用客户端,利用获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量,按照预定函数运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲染的渲染质量的问题,进而实现提高移动终端上目标对象进行卡通渲染的渲染质量,改善渲染的显示效果。
可选地,在本实施例中,上述待渲染的目标对象可以包括但不限于应 用中待操作的对象,如待操作的对象可以包括但不限于以下至少之一:静态对象和动态对象。例如,以游戏应用为例,上述目标对象可以包括但不限于以下至少之一:游戏应用中的树木、房屋等,游戏应用中的武器装备等,游戏应用中的人物模型。上述仅是一种示例,本实施例中对此不做任何限定。
可选地,在本实施例中,上述渲染策略中的光照渲染策略可以但不限于用于指示目标对象所要渲染的明暗区域,也就是说,利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象中各个区域所属为明区域还是暗区域,以实现针对光照渲染策略所指示的不同明暗区域,分别进行渲染。其中,用于确定上述光照渲染策略的预定函数可以包括但不限于球谐函数,其中,该球谐函数是拉普拉斯方程的球坐标系形式解的角度部分。上述球谐函数仅是一种示例,本实施例中对此不做任何限定。
可选地,在本实施例中,上述渲染策略中的描边渲染策略可以但不限于用于指示目标对象的描边的显示宽度,也就是说,利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象的描边的显示宽度(也可称作粗细程度),以实现针对目标对象的不同部分按照不同的显示宽度渲染描边。其中,用于确定上述描边渲染策略的预定函数可以包括但不限于在矢量控制顶点的位置矢量上叠加控制矢量,该控制矢量可以但不限于根据矢量控制顶点的法线矢量确定。
可选地,在本实施例中,确定目标对象在应用客户端的每个场景中的光照渲染策略包括:利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点在场景中的全局光照亮度;获取各个矢量控制顶点的局部光照亮度;根据各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略。
需要说明的是,在本实施例中,利用矢量控制顶点的法线矢量获取每个矢量控制顶点的全局光照亮度,进一步结合局部光照亮度进行混合运算, 以确定目标对象中所要渲染的区域中的明暗区域,并进行渲染。从而实现通过在移动终端的渲染过程引入全局光照亮度,以避免在应用客户端所显示的场景较暗的情况下,出现多余的明暗面交界。
此外,在本实施例中,通过预定函数进行计算的过程中,通过保留矢量控制顶点的法线矢量在一个方向的法线分量,将达到降低渲染时平滑处理带来的渐变效果,从而保证渲染出的目标对象更加贴近卡通或漫画效果,改善卡通渲染效果。
可选地,在本实施例中,确定目标对象在应用客户端的每个场景中的渲染策略包括:利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量;根据边界顶点矢量确定目标对象的描边渲染策略。
需要说明的是,在本实施例中,渲染一个比目标对象的本体大,且只有背面被渲染的外壳,并使得该外壳的大部分被本体挡住,只露出边缘,从而达到为目标对象提供渲染描边的效果。其中,构成上述描边的外沿的边界顶点可以但不限于通过在矢量控制顶点的位置矢量上叠加控制矢量得到,该控制矢量可以但不限于至少根据矢量控制顶点的法线矢量确定。
此外,在本实施例中,通过上述控制矢量可以但不限于控制不同描边部位的粗细程度(即显示宽度)。其中,该控制矢量可以但不限于为矢量控制顶点的法线矢量和法线控制系数的乘积。通过调整上述法线控制系数可以达到调整控制矢量的目的,进而保证上述目标对象的描边可以根据不同部位渲染出不同的显示宽度,以改善渲染效果。此外,通过上述法线控制系数还可以保持上述描边的像素显示厚度。如不管应用中的参考位置(如用于确定显示视野的摄像机的位置)的远近,可以通过调整法线控制系数的大小,以使得所显示的描边的像素显示厚度保持一致。
需要说明的是,在本实施例中,还可以但不限于通过对上述外壳进行前后偏移,如调整外壳的边界顶点矢量的偏移,以达到剔除目标对象中所显示的多余的描边。
通过本申请提供的实施例,在移动终端上运行的应用客户端,利用获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量,按照预定函数运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲染的渲染质量的问题,进而实现提高移动终端上目标对象进行卡通渲染的渲染质量,改善渲染的显示效果。
作为一种可选的方案,如图8所示,确定单元704包括:
1)第一获取模块802,设置为利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点在场景中的全局光照亮度;
2)第二获取模块804,设置为获取各个矢量控制顶点的局部光照亮度;3)第一确定模块806,设置为根据各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略。
可选地,在本实施例中,上述渲染策略中的光照渲染策略可以但不限于用于指示目标对象所要渲染的明暗区域,也就是说,利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象中各个区域所属为明区域还是暗区域,以实现针对光照渲染策略所指示的不同明暗区域,分别进行渲染。其中,用于确定上述光照渲染策略的预定函数可以包括但不限于球谐函数,其中,该球谐函数是拉普拉斯方程的球坐标系形式解的角度部分。上述球谐函数仅是一种示例,本实施例中对此不做任何限定。
可选地,在本实施例中,第一获取模块802包括:
(1)处理子模块,设置为对目标对象中的每一个矢量控制顶点执行以下步骤:获取当前矢量控制顶点的法线矢量;保留当前矢量控制顶点的法线矢量在一个方向的法线分量;将当前矢量控制顶点的法线分量传入预定函数中的球谐函数,得到当前矢量控制顶点的全局光照亮度。
具体结合图3所示示例进行说明,以一个矢量控制顶点A为例,获取矢量控制顶点A的法线矢量,保留该法线矢量在一个方向的法线分量,例如,如图3所示保留y法线分量,将x法线分量和z法线分量置零。然后,将上述y法线分量传入全局光照球谐函数,以获取全局光照亮度,其中,上述全局光照亮度可以但不限于根据全局光照颜色值计算颜色值亮度得到;将上述全局光照亮度与局部光照亮度进行混合运算,以得到明暗面计算的颜色结果,用于识别目标对象所要渲染的区域中的明暗区域。再者,结合场景中的其他颜色结果,来获取最终结果,然后根据得到的最终结果对场景中的目标对象进行渲染。
此外,在本实施例中,通过保留一个方向的法线分量,将达到降低渲染时平滑处理带来的渐变效果。例如,如图4所示,图4左侧下方的方框内的手腕处有明显的渐变效果,而图4右侧下方的方框内的手腕处则避免了多余的渐变效果,从而使得渲染出的目标对象更加贴近卡通或漫画效果,改善卡通渲染效果。其中,需要说明那个的是,上述去除渐变效果的控制方式,在场景为较暗场景下效果更为明显。上述仅是一种示例,本实施例中对此不做任何限定。
需要说明的是,图3所示实线框内为所执行的流程步骤,虚线框内所示为该步骤对应得到的数据,此外,有阴影背景的为可选的执行步骤,也可缺省不执行。
通过本申请提供的实施例,通过结合各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略,从而实现准确识别出目标对象中所要渲染的区域中的明暗区域,以避免不必要的明暗区域的边界,通过保留单独的法线分量,还可以减少渐变效果,使得渲染后的目标对象更加贴近卡通渲染后的卡通或漫画效果。
作为一种可选的方案,第一确定模块806包括:
1)第一获取子模块,设置为获取各个矢量控制顶点的颜色结果,其 中,每个矢量控制顶点的颜色结果包括:矢量控制顶点的全局光照亮度与矢量控制顶点的局部光照亮度的乘积;
2)识别子模块,设置为根据各个矢量控制顶点的颜色结果识别目标对象中所要渲染的明暗区域;
3)第一确定子模块,设置为确定光照渲染策略,其中,光照渲染策略用于指示识别出的目标对象中所要渲染的明暗区域。
具体的仍以图3所示为例进行说明,对矢量控制顶点的全局光照亮度与矢量控制顶点的局部光照亮度进行混合运算,例如,图3所示为乘法运算,以得到该矢量控制顶点进行明暗面计算的颜色结果,从而便于根据该颜色结果准确识别出目标对象所要渲染的区域中的明暗区域。进一步结合场景中其他颜色结果,可以进一步混合运算,得到用于对场景中的目标对象进行渲染的最终结果。
通过本申请提供的实施例,通过获取各个矢量控制顶点的颜色结果,以识别目标对象中所要渲染的明暗区域,并确定用于指示识别出的目标对象中所要渲染的明暗区域的光照渲染策略,从而实现对目标对象进行高质量的渲染,改善渲染效果。
作为一种可选的方案,如图9所示,确定单元704包括:
1)第三获取模块902,设置为利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量;
2)第二确定模块904,设置为根据边界顶点矢量确定目标对象的描边渲染策略。
可选地,在本实施例中,第二确定模块904包括:第四确定子模块,用于根据用于获取边界顶点矢量的法线控制系数确定描边渲染策略,其中,描边渲染策略用于指示目标对象的描边的显示宽度,目标对象的描边根据矢量控制顶点的位置矢量与边界顶点矢量确定得到。
可选地,在本实施例中,上述渲染策略中的描边渲染策略可以但不限于用于指示目标对象的描边的显示宽度。也就是说,利用上述多个矢量控制顶点按照预定函数运算,以获取目标对象的描边的显示宽度(也可称作粗细程度),以实现针对目标对象的不同部位按照不同的显示宽度渲染描边。其中,用于确定上述描边渲染策略的预定函数可以包括但不限于用于在矢量控制顶点的位置矢量上叠加控制矢量,该控制矢量可以但不限于为矢量控制顶点的法线矢量与法线控制系数的乘积。如获取矢量控制顶点的法线矢量与法线控制系数的乘积;根据矢量控制顶点的位置矢量及乘积二者之和确定边界顶点矢量。
其中,上述计算过程可以但不限于如下:
P s=P m+n m×r s (2)
其中,P s表示边界顶点矢量,P m表示矢量控制顶点的位置矢量,n m表示矢量控制顶点的法线矢量,r s表示法线控制系数。
例如,具体结合图5所示的描边效果进行说明,图5所示实心圆为目标对象中的矢量控制顶点,图5所示网线阴影圆为边界顶点,假设法线控制系数为预设系数。可选地,将矢量控制顶点的位置矢量和法线矢量代入上述公式(2),则可计算得出边界顶点矢量,从而得到目标对象中边界顶点的位置。
需要说明的是,由于边界顶点构成的外壳比目标对象的本体大,且大部分被目标对象的本体覆盖,因而将得到本实施例中提供的目标对象的描边。此外,在本实施例中,通过上述公式(2)可以针对不同部位,得到不同显示宽度(也可称作粗细程度)的描边,如图5所示黑色填充区域。从而使得描边更加精确。
通过本申请提供的实施例,通过获取边界顶点矢量,以确定用于指示描边的显示宽度的描边渲染策略,从而实现针对目标对象的不同部位提供不同的显示宽度的描边,使得渲染后的目标对象更加贴近卡通渲染效果。
作为一种可选的方案,第三获取模块902包括:
1)第二确定子模块,设置为确定目标对象在应用客户端的渲染模式;
2)第二获取子模块,设置为根据渲染模式确定用于获取边界顶点矢量的计算空间,其中,在渲染模式为第一模式的情况下,确定计算空间为第一空间;在渲染模式为第二模式的情况下,确定计算空间为第二空间;
3)第三获取子模块,设置为在计算空间通过预定函数获取矢量控制顶点对应的边界顶点矢量。
需要说明的是,在本实施例中提供的对象渲染过程中包括了模型空间(model space)、观察空间(view space,也称为摄像机空间camera space)、裁剪空间(clip space)。第一空间可以但不限于对应模型空间,其中,模型空间的运算效率最高。例如,输入矢量控制顶点的位置矢量v,通过按照上述公式(2)计算输出边界顶点矢量,然后再将边界顶点矢量转换到裁剪空间,得到最终的边界顶点矢量o;此外,第二空间可以但不限于对应裁剪空间,其中,裁剪空间的运算结果更加精准。例如,输入矢量控制顶点的位置矢量v,先将上述矢量控制顶点的位置矢量v转换到裁剪空间,再按照上述公式(2)计算输出边界顶点矢量o。
需要说明的是,上述渲染模式可以但不限于包括:1)效率模式和精准模式,其中,在效率模式下可采用第一空间计算,而在精准模式下可采用第二空间计算;2)视野远模式和视野近模式,其中,在视野远模式下可采用第一空间计算,而在视野近模式下可采用第二空间计算。上述仅是一种示例,本实施例中对此不做任何限定
通过本申请提供的实施例,根据不同渲染模式采用不同计算空间来获取边界顶点矢量,从而达到满足不同渲染需求的目的,提高渲染的灵活性。
作为一种可选的方案,还包括:
1)第四获取子模块,设置为在通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量之后,获取目标对象与应用客户端的场景中的参 考位置之间的距离;
2)第三确定子模块,设置为根据距离确定法线控制系数,其中,在距离大于第一阈值的情况下,调整增大法线控制系数;在距离小于第一阈值的情况下,调整减小法线控制系数。
可选地,在本实施例中,上述参考位子可以但不限于为应用中摄像机(camera)的位置。通过调整法线控制系数,以实现描边粗细程度不受摄像机(camera)与目标对象之间的距离的影响,从而能够确保描边的粗细像素大小保持不变。例如,描边的渲染效果如图6所示加粗线条。
通过本申请提供的实施例,通过法线控制系数保证描边的像素厚度不受视野距离变化影响,保证渲染效果的准确真实。
根据本申请实施例的又一方面,还提供了一种用于实施上述对象渲染方法的电子装置,如图10所示,该电子装置包括:一个或多个(图中仅示出一个)处理器1002、存储器1004、用户接口1006、显示器1008以及传输装置1010。
其中,存储器1004可用于存储软件程序以及模块,如本申请实施例中的对象渲染方法和装置对应的程序指令/模块,处理器1002通过运行存储在存储器1004内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的对象渲染。存储器1004可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,1004可可进一步包括相对于处理器1002远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
上述显示器1008用于显示上述应用客户端按照上述对象渲染方法所要渲染的目标对象,上述用户接口1006用于获取通过显示器上的操作面 板所生成的操作指令发送给处理器1002进行处理。
上述的传输装置1010用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1010包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中传输装置1010为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,可选地,存储器1004用于存储待渲染的目标对象、预定的函数与渲染策略等。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
本申请的实施例的又一方面,还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以位于网络中的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的计算机程序:
S1,移动终端上运行的应用客户端获取待渲染的目标对象,其中,目标对象中包括多个用于控制渲染的矢量控制顶点;
S2,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数确定目标对象在应用客户端的每个场景中的渲染策略,其中,渲染策略包括目标对象在场景中的光照渲染策略和描边渲染策略;
S3,应用客户端按照渲染策略在应用客户端的场景中渲染目标对象。
可选地,存储介质还被设置为存储用于执行以下步骤的计算机程序:
S1,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依 次获取各个矢量控制顶点在场景中的全局光照亮度;
S2,应用客户端获取各个矢量控制顶点的局部光照亮度;
S3,应用客户端根据各个矢量控制顶点的全局光照亮度和局部光照亮度,确定对目标对象中所要渲染的明暗区域进行渲染所使用的光照渲染策略。
可选地,存储介质还被设置为存储用于执行以下步骤的计算机程序:
S1,应用客户端利用多个矢量控制顶点的法线矢量,通过预定函数依次获取各个矢量控制顶点对应的边界顶点矢量;
S2,应用客户端根据边界顶点矢量确定目标对象的描边渲染策略。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储计算机程序的介质。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上仅是本申请的可选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。
工业实用性
通过本实施例,通过移动终端上运行的应用客户端对获取到的待渲染的目标对象中包括的多个矢量控制顶点的法线矢量按照预定函数进行运算,来确定应用客户端的每个场景中目标对象的渲染策略,从而实现按照上述渲染策略对应用客户端中的目标对象进行渲染,以克服相关技术中无法在移动终端中应用卡通渲染,无法保证对移动终端上的目标对象进行卡通渲染的渲染质量的问题,进而实现提高移动终端上目标对象进行卡通渲染的渲染质量,改善渲染的显示效果。

Claims (15)

  1. 一种对象渲染方法,包括:
    应用客户端通过移动终端上运行的应用客户端获取待渲染的目标对象,其中,所述目标对象中包括多个用于控制渲染的矢量控制顶点;
    所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过预定函数确定所述目标对象在所述应用客户端的每个场景中的渲染策略,其中,所述渲染策略包括所述目标对象在所述场景中的光照渲染策略和描边渲染策略;
    所述应用客户端按照所述渲染策略在所述应用客户端的所述场景中渲染所述目标对象。
  2. 根据权利要求1所述的方法,其中,所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过预定函数确定所述目标对象在所述应用客户端的每个场景中的渲染策略包括:
    所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过所述预定函数依次获取各个所述矢量控制顶点在所述场景中的全局光照亮度;
    所述应用客户端获取各个所述矢量控制顶点的局部光照亮度;
    所述应用客户端根据各个所述矢量控制顶点的所述全局光照亮度和所述局部光照亮度,确定对所述目标对象中所要渲染的明暗区域进行渲染所使用的所述光照渲染策略。
  3. 根据权利要求2所述的方法,其中,所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过所述预定函数依次获取各个所述矢量控制顶点在所述场景中的全局光照亮度包括:
    所述应用客户端对所述目标对象中的每一个所述矢量控制顶点执行以下步骤:
    获取当前矢量控制顶点的法线矢量;
    保留所述当前矢量控制顶点的法线矢量在一个方向的法线分量;
    将所述当前矢量控制顶点的所述法线分量传入所述预定函数中的球谐函数,得到所述当前矢量控制顶点的所述全局光照亮度。
  4. 根据权利要求2所述的方法,其中,所述应用客户端确定对所述目标对象中所要渲染的明暗区域进行渲染所使用的所述光照渲染策略包括:
    所述应用客户端获取各个所述矢量控制顶点的颜色结果,其中,每个所述矢量控制顶点的所述颜色结果包括:所述矢量控制顶点的所述全局光照亮度与所述矢量控制顶点的所述局部光照亮度的乘积;
    所述应用客户端根据各个所述矢量控制顶点的所述颜色结果识别所述目标对象中所要渲染的明暗区域,并确定所述光照渲染策略,其中,所述光照渲染策略用于指示识别出的所述目标对象中所要渲染的明暗区域。
  5. 根据权利要求1所述的方法,其中,所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过预定函数确定所述目标对象在所述应用客户端的每个场景中的渲染策略包括:
    所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过所述预定函数依次获取各个所述矢量控制顶点对应的边界顶点矢量;
    所述应用客户端根据所述边界顶点矢量确定所述目标对象的所述描边渲染策略。
  6. 根据权利要求5所述的方法,其中,所述应用客户端利用多个所述矢量控制顶点的法线矢量,通过所述预定函数依次获取各个所述矢量控制顶点对应的边界顶点矢量包括:
    所述应用客户端确定所述目标对象在所述应用客户端的渲染模式;
    所述应用客户端根据所述渲染模式确定用于获取所述边界顶点 矢量的计算空间,其中,在所述渲染模式为第一模式的情况下,确定所述计算空间为第一空间;在所述渲染模式为第二模式的情况下,确定所述计算空间为第二空间;
    所述应用客户端在所述计算空间通过所述预定函数获取所述矢量控制顶点对应的所述边界顶点矢量。
  7. 根据权利要求6所述的方法,其中,所述应用客户端在所述计算空间通过所述预定函数获取所述矢量控制顶点对应的所述边界顶点矢量包括:
    所述应用客户端获取所述矢量控制顶点的法线矢量与法线控制系数的乘积;
    所述应用客户端根据所述矢量控制顶点的位置矢量及所述乘积二者之和确定所述边界顶点矢量。
  8. 根据权利要求7所述的方法,其中,在所述应用客户端通过所述预定函数依次获取各个所述矢量控制顶点对应的边界顶点矢量之后,还包括:
    所述应用客户端获取所述目标对象与所述应用客户端的所述场景中的参考位置之间的距离;
    所述应用客户端根据所述距离确定所述法线控制系数,其中,在所述距离大于第一阈值的情况下,调整增大所述法线控制系数;在所述距离小于所述第一阈值的情况下,调整减小所述法线控制系数。
  9. 根据权利要求7所述的方法,其中,所述应用客户端根据所述边界顶点矢量确定所述目标对象的所述描边渲染策略包括:
    所述应用客户端根据用于获取所述边界顶点矢量的所述法线控制系数确定所述描边渲染策略,其中,所述描边渲染策略用于指示所述目标对象的描边的显示宽度,所述目标对象的所述描边根据所述矢量控制顶点的位置矢量与所述边界顶点矢量确定得到。
  10. 一种对象渲染装置,所述装置中运行有应用客户端,包括:
    第一获取单元,设置为通过移动终端上运行的应用客户端获取待渲染的目标对象,其中,所述目标对象中包括多个用于控制渲染的矢量控制顶点;
    确定单元,设置为利用多个所述矢量控制顶点的法线矢量,通过预定函数确定所述目标对象在所述应用客户端的每个场景中的渲染策略,其中,所述渲染策略包括所述目标对象在所述场景中的光照渲染策略和描边渲染策略;
    渲染单元,设置为按照所述渲染策略在所述应用客户端的所述场景中渲染所述目标对象。
  11. 根据权利要求10所述的装置,其中,所述确定单元包括:
    第一获取模块,设置为利用多个所述矢量控制顶点的法线矢量,通过所述预定函数依次获取各个所述矢量控制顶点在所述场景中的全局光照亮度;
    第二获取模块,设置为获取各个所述矢量控制顶点的局部光照亮度;
    第一确定模块,设置为根据各个所述矢量控制顶点的所述全局光照亮度和所述局部光照亮度,确定对所述目标对象中所要渲染的明暗区域进行渲染所使用的所述光照渲染策略。
  12. 根据权利要求11所述的装置,其中,所述第一获取模块包括:
    处理子模块,设置为对所述目标对象中的每一个所述矢量控制顶点执行以下步骤:获取当前矢量控制顶点的法线矢量;保留所述当前矢量控制顶点的法线矢量在一个方向的法线分量;将所述当前矢量控制顶点的所述法线分量传入所述预定函数中的球谐函数,得到所述当前矢量控制顶点的所述全局光照亮度。
  13. 根据权利要求12所述的装置,其中,所述第一确定模块包括:
    第一获取子模块,设置为获取各个所述矢量控制顶点的颜色结果,其中,每个所述矢量控制顶点的所述颜色结果包括:所述矢量控制顶 点的所述全局光照亮度与所述矢量控制顶点的所述局部光照亮度的乘积;
    识别子模块,设置为根据各个所述矢量控制顶点的所述颜色结果识别所述目标对象中所要渲染的明暗区域;
    第一确定子模块,设置为确定所述光照渲染策略,其中,所述光照渲染策略用于指示识别出的所述目标对象中所要渲染的明暗区域。
  14. 一种存储介质,所述存储介质包括存储的计算机程序,其中,所述计算机程序运行时执行所述权利要求1至9任一项中所述的方法。
  15. 一种电子装置,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,其特征在于,所述处理器通过运行所述计算机程序执行所述权利要求1至9任一项中所述的方法。
PCT/CN2018/112196 2017-11-03 2018-10-26 对象渲染方法和装置、存储介质及电子装置 WO2019085838A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711081196.1A CN108090945B (zh) 2017-11-03 2017-11-03 对象渲染方法和装置、存储介质及电子装置
CN201711081196.1 2017-11-03

Publications (1)

Publication Number Publication Date
WO2019085838A1 true WO2019085838A1 (zh) 2019-05-09

Family

ID=62170332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112196 WO2019085838A1 (zh) 2017-11-03 2018-10-26 对象渲染方法和装置、存储介质及电子装置

Country Status (2)

Country Link
CN (1) CN108090945B (zh)
WO (1) WO2019085838A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090945B (zh) * 2017-11-03 2019-08-02 腾讯科技(深圳)有限公司 对象渲染方法和装置、存储介质及电子装置
CN108830923B (zh) * 2018-06-08 2022-06-17 网易(杭州)网络有限公司 图像渲染方法、装置及存储介质
CN109224448B (zh) * 2018-09-25 2021-01-01 北京天马时空网络技术有限公司 一种流光渲染的方法和装置
CN109785448B (zh) * 2018-12-06 2023-07-04 广州西山居网络科技有限公司 一种三维模型表面附加印花的方法
CN109794062B (zh) * 2019-01-15 2022-08-30 珠海金山网络游戏科技有限公司 一种实现mmo游戏地表贴花的方法及其装置
CN109978968B (zh) * 2019-04-10 2023-06-20 广州虎牙信息科技有限公司 运动对象的视频绘制方法、装置、设备及存储介质
CN110310224B (zh) * 2019-07-04 2023-05-30 北京字节跳动网络技术有限公司 光效渲染方法及装置
CN111127611B (zh) * 2019-12-24 2023-09-22 北京像素软件科技股份有限公司 三维场景渲染方法、装置及电子设备
CN112070873B (zh) * 2020-08-26 2021-08-20 完美世界(北京)软件科技发展有限公司 一种模型的渲染方法和装置
CN112509131B (zh) * 2020-11-20 2022-12-06 上海莉莉丝网络科技有限公司 游戏地图内地图区域边界的渲染方法、系统及计算机可读存储介质
CN112836469A (zh) * 2021-01-27 2021-05-25 北京百家科技集团有限公司 一种信息渲染方法及装置


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101681526B (zh) * 2007-01-24 2013-03-27 英特尔公司 通过使用可置换的剔除程序提高图形性能的方法和装置
CN103065357B (zh) * 2013-01-10 2015-08-05 电子科技大学 基于普通三维模型的皮影模型制作方法
CN104766361B (zh) * 2015-04-29 2018-04-27 腾讯科技(深圳)有限公司 一种残影效果的实现方法,及装置
CN105427366B (zh) * 2015-11-11 2018-07-27 广州华多网络科技有限公司 一种图像渲染方法和图像渲染系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130265310A1 (en) * 2010-12-16 2013-10-10 Thomson Licensing Method for estimation of information representative of a pixel of a virtual object
CN103400404A (zh) * 2013-07-31 2013-11-20 北京华易互动科技有限公司 一种高效渲染位图运动轨迹的方法
CN104574495A (zh) * 2014-12-22 2015-04-29 北京像素软件科技股份有限公司 一种图像渲染方法和装置
CN106780709A (zh) * 2016-12-02 2017-05-31 腾讯科技(深圳)有限公司 一种确定全局光照信息的方法及装置
CN106652007A (zh) * 2016-12-23 2017-05-10 网易(杭州)网络有限公司 虚拟海面渲染方法及系统
CN108090945A (zh) * 2017-11-03 2018-05-29 腾讯科技(深圳)有限公司 对象渲染方法和装置、存储介质及电子装置

Also Published As

Publication number Publication date
CN108090945A (zh) 2018-05-29
CN108090945B (zh) 2019-08-02

Similar Documents

Publication Publication Date Title
WO2019085838A1 (zh) 对象渲染方法和装置、存储介质及电子装置
US9875554B2 (en) Surface normal estimation for use in rendering an image
JP7362044B2 (ja) 修正されたシェイプフロムシェーディング(sfs)スキームを使用した三角形3次元メッシュの形状改善
US10229483B2 (en) Image processing apparatus and image processing method for setting an illumination environment
US9607429B2 (en) Relightable texture for use in rendering an image
CN111742347A (zh) 用于图像中的对象的动态照明
US10171785B2 (en) Color balancing based on reference points
US20140176548A1 (en) Facial image enhancement for video communication
CN110248242B (zh) 一种图像处理和直播方法、装置、设备和存储介质
CN113826144B (zh) 使用单幅彩色图像和深度信息的面部纹理贴图生成
WO2015188666A1 (zh) 三维视频滤波方法和装置
JP6135952B2 (ja) 画像アンチエイリアシング方法および装置
CN112991366B (zh) 对图像进行实时色度抠图的方法、装置及移动端
CN107564085B (zh) 图像扭曲处理方法、装置、计算设备及计算机存储介质
Knecht et al. Adaptive camera-based color mapping for mixed-reality applications
CN107452045B (zh) 基于虚拟现实应用反畸变网格的空间点映射方法
WO2023103813A1 (zh) 图像处理方法、装置、设备、存储介质及程序产品
CN116342720A (zh) 图像处理方法及图像渲染方法、装置、设备和介质
CN110136070B (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
WO2019052338A1 (zh) 图像处理方法和装置、存储介质及电子装置
WO2022036338A2 (en) System and methods for depth-aware video processing and depth perception enhancement
CA2709092A1 (en) Smooth shading and texture mapping using linear gradients
AU2015271981A1 (en) Method, system and apparatus for modifying a perceptual attribute for at least a part of an image
AU2015271935A1 (en) Measure of image region visual information
CN117315124A (zh) 图像处理方法、装置、电子设备、介质及程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18874169

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18874169

Country of ref document: EP

Kind code of ref document: A1