CN114529657A - Rendering image generation method and device, computer equipment and storage medium - Google Patents

Rendering image generation method and device, computer equipment and storage medium

Info

Publication number
CN114529657A
CN114529657A (application CN202210158366.6A)
Authority
CN
China
Prior art keywords
rendering
model
vertex
dimensional model
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210158366.6A
Other languages
Chinese (zh)
Inventor
冷晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Datianmian White Sugar Technology Co ltd
Original Assignee
Beijing Datianmian White Sugar Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Datianmian White Sugar Technology Co ltd filed Critical Beijing Datianmian White Sugar Technology Co ltd
Priority to CN202210158366.6A priority Critical patent/CN114529657A/en
Publication of CN114529657A publication Critical patent/CN114529657A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/02 - Non-photorealistic rendering
    • G06T 15/04 - Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a rendering image generation method and apparatus, a computer device, and a storage medium, wherein the method comprises: obtaining a first rendering model generated by rendering a three-dimensional model of a target object with a color map; offsetting the vertices of the three-dimensional model along their corresponding vertex normal directions and rendering them in a preset color to obtain a second rendering model; and superimposing the first rendering model over the second rendering model for display to obtain a target rendering image.

Description

Rendering image generation method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating a rendered image, a computer device, and a storage medium.
Background
When rendering a two-dimensional-style object such as a cartoon character, style effects such as an outer delineation line may be added to it. In some possible cases, the outer delineation line is added to the already-rendered object in a post-processing manner, for example through a pipeline process. Pipeline processing is generally complex, and if a continuously changing picture is generated using the object, each frame image needs to be processed frame by frame, which is inefficient.
Disclosure of Invention
The embodiment of the disclosure at least provides a rendering image generation method, a rendering image generation device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for generating a rendered image, including: obtaining a first rendering model generated after a three-dimensional model of a target object is rendered by using a color map; performing position offset processing corresponding to the vertex normal direction and rendering processing of preset colors on the vertex in the three-dimensional model to obtain a second rendering model; and displaying and overlapping the first rendering model and the second rendering model to obtain a target rendering image.
In this way, the second rendering model, obtained by offsetting the vertices of the three-dimensional model of the target object along their normal directions and rendering them in a preset color, has a protruding "edge" compared with the first rendering model generated by directly rendering the three-dimensional model. When the first and second rendering models are displayed superimposed, the part of this "edge" that is not blocked by the rendered image of the first rendering model remains visible while that image is displayed normally, and in the resulting target rendering image the exposed "edge" presents the effect of an outer delineation line around the two-dimensional-style object. Because the processing is performed at the model level, the outer delineation line is re-rendered and displayed correspondingly as the state of the object changes, without frame-by-frame post-processing, so the efficiency is high.
In an optional embodiment, performing a position shift process corresponding to a vertex normal direction and a rendering process of a preset color on a vertex in the three-dimensional model includes: determining a target vertex from a plurality of vertexes included in the three-dimensional model based on the current display visual angle of the three-dimensional model, and performing position offset processing on the target vertex in a corresponding vertex normal direction to obtain a middle three-dimensional model; and rendering the intermediate three-dimensional model by using the preset color to generate a second rendering model.
In an optional embodiment, performing a position shift process corresponding to a vertex normal direction and a rendering process of a preset color on a vertex in the three-dimensional model includes: rendering the three-dimensional model by using a preset color to obtain an intermediate rendering model; and determining a target vertex from a plurality of vertexes included in the intermediate rendering model based on the current display visual angle of the intermediate rendering model, and performing position offset processing corresponding to the vertex normal direction on the target vertex to obtain the second rendering model.
In this way, two specific ways of determining the second rendering model used to generate the target rendering image are provided; with more alternatives available, the scheme is more flexible.
In an optional implementation manner, the processing of performing position offset on the vertex in the three-dimensional model in the direction corresponding to the vertex normal includes: acquiring the offset of the position offset processing; the offset is used for representing the distance between the position of the target vertex after the offset and the position of the target vertex before the offset; and performing position offset processing corresponding to the vertex normal direction on the vertex in the three-dimensional model by using the offset.
In this way, since the offset is settable, the distance between the outer delineation line and the contour line of the target object in the target rendering image can be determined more flexibly, so that, depending on the outer delineation line, the target object in the target rendering image can present either a delicate style or a more exaggerated cartoon style.
In an optional embodiment, the rendering processing of a preset color on a vertex in the three-dimensional model includes: acquiring a rendering range when the vertex is rendered by the preset color; and performing preset color rendering processing on the vertex in the three-dimensional model based on the rendering range.
In this way, on the one hand, the preset color is selectable, so the color of the outer delineation line can be changed flexibly. On the other hand, when the vertices are rendered, the rendering range represents the thickness of the outer delineation line, so the line can be adjusted by adjusting the rendering range, allowing the target object in the resulting target rendering image to present different styles.
In an alternative embodiment, the determining a target vertex from a plurality of vertices included in the three-dimensional model based on the current display view angle of the three-dimensional model includes: screening, based on the current display view angle and on the position information of the plurality of vertices of the three-dimensional model in the corresponding model coordinate system, the vertices that are invisible under the display view angle from the plurality of vertices of the three-dimensional model as the target vertices.
In an optional embodiment, the method further comprises: determining a shooting pose of a virtual camera shooting the three-dimensional model under a model coordinate system corresponding to the three-dimensional model; and determining the current display visual angle of the three-dimensional model based on the shooting pose of the virtual camera.
In this way, the three-dimensional model and the virtual camera are placed in the same coordinate system to determine the current display view angle; this is accurate and, in practical applications, simple to perform, so the efficiency can be effectively improved.
In an optional implementation manner, the displaying and superimposing the first rendering model and the second rendering model to obtain a target rendering image includes: rendering the first rendering model based on the current display visual angle of the three-dimensional model to obtain a first rendering layer, and rendering the second rendering model to obtain a second rendering layer; and superposing the first rendering layer to the front end of the second rendering layer to generate the target rendering image.
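The superposition order above, with the first rendering layer placed at the front end of the second, can be illustrated with a minimal sketch. The grid-of-pixels representation, the `None`-marks-uncovered convention, and all names below are assumptions made for illustration, not part of the patent:

```python
# Hypothetical sketch of the superposition step: the first rendering layer
# (the color-mapped model) is composited in front of the second rendering
# layer (the offset, solid-color model), so the second layer shows through
# only where the first layer leaves pixels uncovered; those uncovered
# pixels form the outer delineation line.

def compose_layers(first_layer, second_layer):
    """Overlay `first_layer` onto `second_layer` pixel by pixel.

    Each layer is a 2D list of pixels; `None` marks an uncovered pixel.
    """
    rows, cols = len(first_layer), len(first_layer[0])
    target = []
    for r in range(rows):
        row = []
        for c in range(cols):
            front = first_layer[r][c]
            row.append(front if front is not None else second_layer[r][c])
        target.append(row)
    return target

# 1x5 strip: the offset model ("K" = preset stroke color) is one pixel
# wider on each side than the color-mapped object ("O").
hull = [["K", "K", "K", "K", "K"]]
model = [[None, "O", "O", "O", None]]
print(compose_layers(model, hull))  # [['K', 'O', 'O', 'O', 'K']]
```

The uncovered "K" pixels at both ends are exactly the exposed "edge" the patent describes.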
In a second aspect, an embodiment of the present disclosure further provides a generating apparatus for rendering an image, including: the acquisition module is used for acquiring a first rendering model generated after the three-dimensional model of the target object is rendered by using the color map; the first processing module is used for performing position offset processing corresponding to the vertex normal direction and rendering processing of preset colors on the vertex in the three-dimensional model to obtain a second rendering model; and the second processing module is used for carrying out display superposition processing on the first rendering model and the second rendering model to obtain a target rendering image.
In an optional embodiment, the first processing module, when performing a position shift process in a corresponding vertex normal direction and a rendering process of a preset color on a vertex in the three-dimensional model, is configured to: determining a target vertex from a plurality of vertexes included in the three-dimensional model based on a current display visual angle of the three-dimensional model, and performing position offset processing on the target vertex in a corresponding vertex normal direction to obtain a middle three-dimensional model; and rendering the intermediate three-dimensional model by using the preset color to generate a second rendering model.
In an optional embodiment, the first processing module, when performing a position shift process in a corresponding vertex normal direction and a rendering process of a preset color on a vertex in the three-dimensional model, is configured to: rendering the three-dimensional model by using a preset color to obtain an intermediate rendering model; and determining a target vertex from a plurality of vertexes included in the intermediate rendering model based on the current display visual angle of the intermediate rendering model, and performing position offset processing corresponding to the vertex normal direction on the target vertex to obtain the second rendering model.
In an optional embodiment, the first processing module, when performing position offset processing on vertices in the three-dimensional model in a direction corresponding to a vertex normal, is configured to: acquiring the offset of the position offset processing; the offset is used for representing the distance between the position of the target vertex after the offset and the position of the target vertex before the offset; and performing position offset processing corresponding to the vertex normal direction on the vertex in the three-dimensional model by using the offset.
In an optional embodiment, the first processing module, when performing rendering processing of a preset color on a vertex in the three-dimensional model, is configured to: acquiring a rendering range when the preset color is used for rendering the vertex; and performing preset color rendering processing on the vertex in the three-dimensional model based on the rendering range.
In an optional embodiment, the first processing module, when determining a target vertex from a plurality of vertices included in the three-dimensional model based on a current display view angle of the three-dimensional model, is configured to: and based on the current display visual angle and based on the position information of the multiple vertexes in the three-dimensional model under the corresponding model coordinate system, screening the vertexes invisible under the display visual angle from the multiple vertexes of the three-dimensional model to be used as the target vertexes.
In an optional implementation, the first processing module is further configured to: determining a shooting pose of a virtual camera shooting the three-dimensional model under a model coordinate system corresponding to the three-dimensional model; and determining the current display visual angle of the three-dimensional model based on the shooting pose of the virtual camera.
In an optional implementation manner, when the second processing module performs display superposition processing on the first rendering model and the second rendering model to obtain a target rendering image, the second processing module is configured to: rendering the first rendering model based on the current display view angle of the three-dimensional model to obtain a first rendering layer, and rendering the second rendering model to obtain a second rendering layer; and superposing the first rendering layer to the front end of the second rendering layer to generate the target rendering image.
In a third aspect, an embodiment of the present disclosure provides a computer device comprising a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, they perform the steps of the first aspect, or of any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when run, performs the steps of the first aspect, or of any one of the possible implementations of the first aspect.
For the description of the effect of the above-mentioned rendering image generation apparatus, computer device, and computer-readable storage medium, reference is made to the description of the above-mentioned rendering image generation method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the disclosure. It should be understood that the following drawings show only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; for those skilled in the art, other related drawings can also be derived from them without inventive effort.
Fig. 1 illustrates a flowchart of a method for generating a rendered image according to an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a three-dimensional model of a target object provided by an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a color map provided by an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of adding an outer delineation line on a two-dimensional image provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating an operation interface provided by an embodiment of the disclosure;
FIG. 6 is a schematic diagram illustrating a target rendered image provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an apparatus for generating a rendered image according to an embodiment of the disclosure;
fig. 8 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It has been found through research that an outer delineation line is usually added to a two-dimensional-style object by post-processing: after a frame of image displaying the object has been obtained, the outer delineation line is added to that frame through a pipeline process that takes pixels as its processing unit. Such pipeline processing is usually complex, and for the continuously changing pictures generated for the object, this frame-by-frame processing is inefficient.
Based on the above research, the present disclosure provides a method for generating a rendered image. The second rendering model, obtained by offsetting the vertices of the three-dimensional model of the target object along their normal directions and rendering them in a preset color, has a protruding "edge" compared with the first rendering model generated by directly rendering the three-dimensional model. When the two models are displayed superimposed, the part of this "edge" that is not blocked by the rendered image of the first rendering model remains visible while that image is displayed normally; in the resulting target rendering image, the exposed "edge" presents the effect of an outer delineation line around the two-dimensional-style object. Because the processing is performed at the model level, the outer delineation line is re-rendered and displayed correspondingly as the state of the object changes, without frame-by-frame post-processing, so the efficiency is high.
The drawbacks described above were identified by the inventor through practice and careful study; therefore, both the discovery of these problems and the solutions that the present disclosure proposes for them should be regarded as the inventor's contribution made in the course of this disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the method for generating a rendered image disclosed in the embodiments of the present disclosure is first described in detail. The execution body of the method is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the method may be implemented by a processor calling computer-readable instructions stored in a memory.
The following describes a method for generating a rendered image according to an embodiment of the present disclosure. The method can be used to determine a rendered image of a two-dimensional-style object. The object described herein may specifically be a cartoon character, a game character, a virtual pet, and the like, so the generation method provided by the embodiments of the disclosure can be applied in different fields such as game picture production or generation and animation film production. After a rendered image is obtained with the method, the rendered object exhibits an outer delineation line, and this stylized rendering can give the object a distinctive style effect, such as a pronounced cartoon style or a minimalist style.
Referring to fig. 1, a flowchart of a method for generating a rendered image according to an embodiment of the present disclosure is shown, where the method includes steps S101 to S103, where:
s101: obtaining a first rendering model generated after a three-dimensional model of a target object is rendered by using a color map;
s102: performing position offset processing corresponding to the vertex normal direction and rendering processing of preset colors on the vertex in the three-dimensional model to obtain a second rendering model;
s103: and displaying and overlapping the first rendering model and the second rendering model to obtain a target rendering image.
When the target rendering image is determined, the color map is used for rendering the three-dimensional model of the target object to generate the first rendering model, the vertex in the three-dimensional model is subjected to position offset processing corresponding to the normal direction and rendering processing of the preset color to obtain the second rendering model, and the first rendering model and the second rendering model are subjected to display and superposition processing to obtain the target rendering image. Therefore, the processing is carried out on the level of the model, the corresponding outer stroked edge delineation can be correspondingly rendered and displayed along with the state change of the two-dimensional object, and the post-processing is not required to be carried out frame by frame, so that the efficiency is high.
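Steps S101 to S103 can be sketched as a single pipeline. The Python below is a minimal stand-in, not the patented implementation: all function names, the dictionary-based "render model" representation, and the toy one-vertex model are assumptions made for illustration.

```python
def render_with_color_map(model, color_map):
    # S101: stand-in for rendering the three-dimensional model with its color map
    return {"style": "color-map", "map": color_map, "model": model}

def offset_along_normals(model, offset):
    # S102 (first part): move every vertex along its vertex normal by `offset`
    vertices = [tuple(p + offset * n for p, n in zip(v, nrm))
                for v, nrm in zip(model["vertices"], model["normals"])]
    return {**model, "vertices": vertices}

def render_solid_color(model, color):
    # S102 (second part): stand-in for rendering the offset model in the preset color
    return {"style": "solid", "color": color, "model": model}

def overlay(first, second):
    # S103: display the first rendering model in front of the second
    return {"front": first, "back": second}

def generate_target_image(model, color_map, outline_color, offset):
    first = render_with_color_map(model, color_map)
    second = render_solid_color(offset_along_normals(model, offset), outline_color)
    return overlay(first, second)

# a one-vertex toy "model"; the vertex sits at x = 1 with its normal along +x
toy = {"vertices": [(1.0, 0.0, 0.0)], "normals": [(1.0, 0.0, 0.0)]}
image = generate_target_image(toy, "face.png", "black", 0.5)
```

After the pipeline runs, the back layer holds the offset, solid-color model whose vertex has moved outward to x = 1.5, while the front layer holds the color-mapped model at its original position.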
The following describes details of S101 to S103.
For the above S101, the target object may include, for example, the above-described quadratic element object, which is not described in detail again. Because the target object is a virtual object and does not actually exist, the posture characteristics of the target object, such as height, fat and weight, can be obtained by constructing a three-dimensional model corresponding to the target object, and the determined three-dimensional model is used for simulating the image which the target object is expected to present in the real world. Illustratively, referring to fig. 2, a schematic diagram of a three-dimensional model of a target object provided by an embodiment of the present disclosure, where the three-dimensional model is a artificially constructed virtual model; for the three-dimensional model, specific features such as the five sense organs and the dressing feature can be detailed and not shown in fig. 2. In addition, corresponding three-dimensional models can be determined correspondingly for different target objects.
The three-dimensional model typically includes a plurality of vertices located on the surface of the model, and patches (meshes) formed by the interconnection relationships between those vertices.
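The vertex-plus-patch structure just described can be represented by a small container type; this is an assumed sketch for illustration, not a data structure specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    """Surface vertices plus the patches (faces) connecting them."""
    vertices: list  # (x, y, z) points on the model surface
    faces: list     # (i, j, k) vertex-index triples forming triangular patches

# a single triangular patch connecting three surface vertices
patch = Mesh(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
             faces=[(0, 1, 2)])
```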
For the color map, fig. 3 shows a schematic diagram of a color map provided by an embodiment of the present disclosure. In practice, different areas of the color map carry different colors; during rendering, the color map adds these colors to the three-dimensional model of the target object, that is, it colors the model.
Therefore, after the three-dimensional model of the target object is rendered with the color map, the generated first rendering model reflects the facial features, clothing, and the like, together with their corresponding colors. In one possible case, if no additional outer delineation line needs to be rendered for the target object, the obtained first rendering model can also be used directly to render an image that fully presents the target object.
For the above S102, exemplarily, part of the vertices and vertex normal directions corresponding to the vertices are shown in the three-dimensional model shown in fig. 2. Because the three-dimensional model can be determined, a plurality of vertexes can be determined on the outer contour of the three-dimensional model, and the normal direction corresponding to each vertex is determined according to the positions of the vertexes on the three-dimensional model. The specific manner of determining the normal direction is not described herein again.
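The patent does not prescribe how the normal direction of each vertex is obtained; a common way, shown here as an assumed sketch, is to accumulate the normals of the faces adjacent to each vertex and normalize the sum:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def vertex_normals(vertices, faces):
    """Vertex normal = normalized sum of adjacent face normals.

    Assumes every vertex belongs to at least one face (otherwise the
    normalization would divide by zero).
    """
    acc = [(0.0, 0.0, 0.0)] * len(vertices)
    for i, j, k in faces:
        fn = cross(sub(vertices[j], vertices[i]), sub(vertices[k], vertices[i]))
        for idx in (i, j, k):
            acc[idx] = tuple(a + f for a, f in zip(acc[idx], fn))
    return [normalize(a) for a in acc]

# flat unit square in the z = 0 plane: every vertex normal points along +z
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
normals = vertex_normals(quad, [(0, 1, 2), (0, 2, 3)])
```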
And performing position offset processing and preset color rendering processing on the vertex in the three-dimensional model to obtain a second rendering model for rendering and displaying the outer tracing edge. Since the order of the position shift processing and the rendering processing of the preset color may be changed, in a specific implementation, for example, the following two different manners (a) or (b) may be adopted:
(a) the method comprises the following steps Firstly, a target vertex to be subjected to position offset processing is determined from a plurality of vertexes in the three-dimensional model, and then the determined target vertex is subjected to position offset processing and preset color rendering processing.
In specific implementation, the following method can be specifically adopted: determining a target vertex from a plurality of vertexes included in the three-dimensional model based on the current display visual angle of the three-dimensional model, and performing position offset processing on the target vertex in a corresponding vertex normal direction to obtain a middle three-dimensional model; and rendering the intermediate three-dimensional model by using the preset color to generate a second rendering model.
When determining the target vertex from a plurality of vertices included in the three-dimensional model according to the current display view angle of the three-dimensional model, the following method may be specifically adopted: and based on the current display visual angle and based on the position information of the multiple vertexes in the three-dimensional model under the corresponding model coordinate system, screening the vertexes invisible under the display visual angle from the multiple vertexes of the three-dimensional model to be used as the target vertexes.
Specifically, when determining the current display angle of view, for example, a shooting pose of a virtual camera shooting the three-dimensional model in a model coordinate system corresponding to the three-dimensional model may be determined, and then the current display angle of view for the three-dimensional model may be determined based on the shooting pose of the virtual camera. Here, the virtual camera may be understood as a virtual camera for photographing a target object; the shooting parameter information of the virtual camera is set or adjusted, and the shooting pose can be changed correspondingly.
Here, since the current display view angle is determined, specifically, the relative position relationship between the virtual camera and the target object is involved, in order to determine the current display view angle more accurately, a corresponding model coordinate system may be established with reference to the three-dimensional model, and the position information of the virtual camera in the model coordinate system may be determined in the model coordinate system. In addition, the shooting attitude of the virtual camera can be determined through the shooting parameter information of the virtual camera, such as the shooting elevation angle of 45 degrees. Therefore, the shooting pose of the virtual camera can be determined according to the position information and the shooting pose of the virtual camera.
Therefore, as the shooting pose changes, the current display view angle of the displayed target object changes as well, for example from showing the front of the target object to showing its side; the front and side here are, in effect, presented by at least one vertex corresponding to the front or side of the target object. The current display view angle determines the distribution area, in the model coordinate system, of the currently visible vertices, and the position information of the model's plurality of vertices in that coordinate system can be determined; therefore, using the position information corresponding to the vertices, the vertices that are invisible under the display view angle can be screened out from the plurality of vertices of the three-dimensional model and used as the target vertices.
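The screening just described can be sketched as a sign test between each vertex normal and the viewing direction derived from the virtual camera's pose in the model coordinate system. This is an assumed simplification (it treats a vertex as invisible exactly when its normal faces away from the camera, which holds for convex surfaces); the function names are also assumptions:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def view_direction(camera_position, look_at):
    """Direction the virtual camera looks along, with both points given
    in the model coordinate system (derived from the shooting pose)."""
    return normalize(tuple(t - c for c, t in zip(camera_position, look_at)))

def select_target_vertices(normals, view_dir):
    """A vertex whose normal points away from the camera (positive dot
    product with the viewing direction) is treated as invisible under
    the current display view angle and becomes a target vertex."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [i for i, n in enumerate(normals) if dot(n, view_dir) > 0.0]

# camera in front of the model, looking along +z toward the origin
view = view_direction((0.0, 0.0, -5.0), (0.0, 0.0, 0.0))
# one camera-facing vertex, one back-facing vertex
targets = select_target_vertices([(0.0, 0.0, -1.0), (0.0, 0.0, 1.0)], view)
```

Only the back-facing vertex (index 1) is selected as a target vertex.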
Once the target vertices are determined, position offset processing may first be performed on the target vertices in the directions of their corresponding vertex normals, so as to obtain an intermediate three-dimensional model. The position offset processing may specifically be performed as follows: acquiring the offset amount of the position offset processing, where the offset amount represents the distance between the position of a target vertex after the offset and its position before the offset; and performing, by using the offset amount, position offset processing on the vertices in the three-dimensional model in the directions of their corresponding vertex normals.
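The position offset processing amounts to moving each target vertex along its own normal by the acquired offset amount. A minimal sketch, assuming vertices and unit-length normals are represented as coordinate tuples (a representation chosen here for illustration):

```python
def offset_along_normals(vertices, normals, target_ids, offset):
    """Move each target vertex by `offset` in the direction of its vertex
    normal (normals assumed unit-length); non-target vertices are kept
    unchanged. Returns the vertex list of the intermediate model."""
    moved = list(vertices)
    for i in target_ids:
        v, n = vertices[i], normals[i]
        moved[i] = tuple(vc + offset * nc for vc, nc in zip(v, n))
    return moved
```

A vertex at (1, 0, 0) with normal (1, 0, 0) and offset 0.1 moves to (1.1, 0, 0), while non-target vertices stay where they are.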
The offset amount specifically affects, after the target rendered image is obtained by using the second rendering model, the distance between the outer stroke line and the outermost edge of the rendered target object (that is, the contour line of the target object). In general, when an outer stroke line is rendered for a target object, it is drawn at a position close to the contour line of the target object. For a simplified two-dimensional illustration, refer to fig. 4, which provides a schematic diagram of adding an outer stroke line on a two-dimensional image according to an embodiment of the present disclosure. Fig. 4 (a) shows a rectangular object whose contour line 41 is indicated by a dotted line. The contour line 41 of the rectangular object can be considered to consist of a plurality of vertices, and each vertex can be positionally offset on the two-dimensional plane in the direction of its normal; when all the vertices are offset by the same distance, the offset outer stroke line 42 is obtained from the original contour line 41, and the outer stroke line 42 added to the rectangular object is represented by a solid line. Similarly, fig. 4 (b) shows the same rectangular object, whose contour is labelled 43 for ease of distinction, and the outer stroke line 44 added to it is also shown as a solid line. The distance between a vertex and the outer stroke line in the corresponding normal direction, for example the distance d1 shown in fig. 4 (a) and the distance d2 shown in fig. 4 (b), is the offset amount explained above; the same determination carries over from the two-dimensional image to the three-dimensional model.
As can be seen from the description of fig. 4, different magnitudes of the offset amount produce different effects for the added outer stroke line. For the rectangular object shown in fig. 4 (a), the offset amount is small, so the added outer stroke line lies close to the contour of the rectangular object, and the displayed style is closer to the refined style obtained when rendering a character with a relatively large amount of drawn detail. In contrast, for the rectangular object shown in fig. 4 (b), the offset amount is large, so the added outer stroke line lies far from the contour of the rectangular object, and the displayed style is closer to the more exaggerated cartoon style obtained when rendering a simply sketched cartoon character.
Therefore, when acquiring the offset amount of the position offset processing, the corresponding offset amount may be determined according to the rendering style actually desired. The unit of the offset amount may, for example, be the pixel of the rendered two-dimensional image; for instance, the offset amount may be set to 2 pixels or 5 pixels to obtain different styles.
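When the offset amount is specified in pixels of the final two-dimensional image, it must be converted to a model-space distance before the vertices can be moved. A minimal sketch of such a conversion, assuming a pinhole camera with a vertical field of view and ignoring perspective variation across the model (both assumptions introduced here, not stated in the disclosure):

```python
import math

def pixel_offset_to_model_units(offset_px, camera_distance, fov_deg, viewport_height_px):
    """Model-space distance corresponding to `offset_px` screen pixels at
    the model's depth. The visible world height at distance d with a
    vertical field of view fov is 2*d*tan(fov/2); one pixel then covers
    that height divided by the viewport height in pixels."""
    visible_height = 2.0 * camera_distance * math.tan(math.radians(fov_deg) / 2.0)
    return offset_px * visible_height / viewport_height_px
```

For a 90-degree field of view at distance 1 with a 1000-pixel-tall viewport, a 2-pixel stroke corresponds to a model-space offset of about 0.004 units.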
The principle of performing position offset processing on the vertices of the three-dimensional model in the directions of their corresponding vertex normals by using the offset amount is the same as the principle, described above with reference to fig. 4, of performing position offset processing on the contour line of a two-dimensional image, and is not repeated here.
When the intermediate three-dimensional model is obtained by the above position offset processing, it may be rendered with a preset color, so as to obtain the second rendering model.
In a specific implementation, for example, the following may be used: acquiring a rendering range when the vertex is rendered by the preset color; and performing preset color rendering processing on the vertex in the three-dimensional model based on the rendering range.
The preset color is specifically the color of the outer stroke line, and may, for example, be determined in response to a color selection operation. Referring to fig. 5, a schematic view of an operation interface is provided according to an embodiment of the present disclosure. The operation interface includes a color card and a color-picking button at the position corresponding to "select color"; in response to a click on the color-picking button, the rendering color used when rendering the vertices can be selected, and the currently selected rendering color is displayed on the color card.
The rendering range specifically reflects the thickness of the outer stroke line: in fig. 4 (a) the rendering range is small, so the displayed outer stroke line is thin, whereas in fig. 4 (b) the rendering range is large, so the displayed outer stroke line is thick. For determining the rendering range when rendering the vertices, an adjusting knob is provided at the position corresponding to "select range" in the operation interface shown in fig. 5; in response to a control operation on the adjusting knob, the size of the selected rendering range can be determined, and the current result of the control operation is displayed as a numerical value on the right side. The selected rendering range may, for example, be expressed in pixels, specifically the number of pixels the outer stroke line occupies in the corresponding vertex normal direction when rendered and displayed.
With the preset color and the rendering range determined, the vertices can be rendered with the preset color according to the rendering range, thereby obtaining the second rendering model. The resulting second rendering model has the same outer shape as the three-dimensional model but is larger than it; if the second rendering model and the three-dimensional model are nested, the second rendering model wraps the three-dimensional model. Additionally, the vertices of the second rendering model are not visible at the current display view angle.
(b) The three-dimensional model is first rendered with a preset color, and then a target vertex to be subjected to position offset processing is determined from the plurality of vertices of the three-dimensional model, so that position offset processing can then be performed in the direction of the corresponding vertex normal.
In a specific implementation, for example, the following may be used: rendering the three-dimensional model by using a preset color to obtain an intermediate rendering model; and determining a target vertex from a plurality of vertexes included in the intermediate rendering model based on the current display visual angle of the intermediate rendering model, and performing position offset processing corresponding to the vertex normal direction on the target vertex to obtain the second rendering model.
The determination method of the preset color may refer to the description in (a) above and is not repeated here. After the three-dimensional model is rendered with the preset color, the obtained intermediate rendering model is, for example, the three-dimensional model expressed as a whole in the preset color.
After the intermediate rendering model is obtained, target vertices can be determined from its multiple vertices through the current display view angle, and position offset processing is performed on the target vertices in the directions of their corresponding vertex normals. For the method of determining the current display view angle, and of determining the target vertices with it, reference may be made to the description in (a), which is not repeated here. When the target vertices already rendered in the preset color are positionally offset in their corresponding normal directions, the principle is similar to that of determining the second rendering model in (a), and the obtained second rendering model is the same as the second rendering model determined in (a).
In addition, the execution order of the above S101 and S102 is not limited.
For the above S103, after the first rendering model and the second rendering model are determined according to S101 and S102 respectively, the target rendered image may be obtained by performing display superposition processing on the first rendering model and the second rendering model.
In a specific implementation, the target rendered image may be determined, for example, in the following manner: rendering the first rendering model based on the current display view angle of the three-dimensional model to obtain a first rendering layer, and rendering the second rendering model to obtain a second rendering layer; and superposing the first rendering layer to the front end of the second rendering layer to generate the target rendering image.
Specifically, since the second rendering model is vertex-rendered with the preset color and is larger than the first rendering model, the second rendering layer obtained by rendering the second rendering model is slightly larger than the first rendering layer obtained by rendering the first rendering model. If the first rendering layer is superimposed at the front end of the second rendering layer, the first rendering layer cannot completely occlude the second rendering layer; instead, the unoccluded edge portion of the second rendering layer is displayed, that is, the portion rendered and displayed by the second rendering layer with the preset color and rendering range. In addition, because the vertices of the visible part of the second rendering model are removed at the current display view angle, the display of the first rendering model is not affected, and the first rendering model can show the artistic details of the target object through the rendering of the color map. Therefore, after the first rendering layer is superimposed at the front end of the second rendering layer, the obtained target rendered image can both display the target object completely and display the outer stroke contour rendered by the second rendering model, thereby adding more style effects to the target object.
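The superposition of the two rendering layers can be sketched per pixel: the first layer sits at the front end, and the second layer shows through only where the first layer is transparent. Representing a layer as a two-dimensional grid in which `None` marks a transparent pixel is an assumption introduced here for illustration.

```python
def superimpose(first_layer, second_layer):
    """Place `first_layer` at the front end of `second_layer`: for every
    pixel, take the first layer's color if it is opaque, otherwise fall
    back to the second layer's color. Because the second layer is
    slightly larger, its unoccluded rim remains visible as the outer
    stroke line around the target object."""
    return [
        [f if f is not None else s for f, s in zip(row_f, row_s)]
        for row_f, row_s in zip(first_layer, second_layer)
    ]
```

In a 2x2 example, pixels covered by the first layer keep its color, and the slightly larger second layer shows through around it.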
Illustratively, referring to fig. 6, a schematic diagram of a target rendered image is provided according to an embodiment of the present disclosure. Fig. 6 (a) shows, for example, the rendered image corresponding to the first rendering layer, and fig. 6 (b) shows the target rendered image obtained after superimposing with the second rendering layer. Compared with the rendered image shown in fig. 6 (a), the target object in the target rendered image shown in fig. 6 (b) has an outer stroke contour, which further embodies the cartoon style effect.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a device for generating a rendered image corresponding to the method for generating a rendered image, and since the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the method for generating a rendered image in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 7, a schematic diagram of an apparatus for generating a rendered image according to an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 71, a first processing module 72, and a second processing module 73; wherein:
the acquisition module 71 is configured to obtain a first rendering model generated after rendering a three-dimensional model of a target object by using a color map; and
the first processing module 72 is configured to perform position offset processing in a direction corresponding to a vertex normal line and rendering processing of a preset color on a vertex in the three-dimensional model to obtain a second rendering model;
and the second processing module 73 is configured to perform display and superposition processing on the first rendering model and the second rendering model to obtain a target rendering image.
In an optional embodiment, when performing the position shift processing in the corresponding vertex normal direction and the rendering processing of the preset color on the vertex in the three-dimensional model, the first processing module 72 is configured to: determining a target vertex from a plurality of vertexes included in the three-dimensional model based on the current display visual angle of the three-dimensional model, and performing position offset processing on the target vertex in a corresponding vertex normal direction to obtain a middle three-dimensional model; and rendering the intermediate three-dimensional model by using the preset color to generate a second rendering model.
In an optional embodiment, when performing the position shift processing in the corresponding vertex normal direction and the rendering processing of the preset color on the vertex in the three-dimensional model, the first processing module 72 is configured to: rendering the three-dimensional model by using a preset color to obtain an intermediate rendering model; and determining a target vertex from a plurality of vertexes included in the intermediate rendering model based on the current display visual angle of the intermediate rendering model, and performing position offset processing corresponding to the vertex normal direction on the target vertex to obtain the second rendering model.
In an alternative embodiment, the first processing module 72, when performing the position offset processing on the vertices in the three-dimensional model in the corresponding vertex normal direction, is configured to: acquiring the offset of the position offset processing; the offset is used for representing the distance between the position of the target vertex after the offset and the position of the target vertex before the offset; and performing position offset processing corresponding to the vertex normal direction on the vertex in the three-dimensional model by using the offset.
In an optional embodiment, the first processing module 72, when performing a preset color rendering process on vertices in the three-dimensional model, is configured to: acquiring a rendering range when the vertex is rendered by the preset color; and performing preset color rendering processing on the vertexes in the three-dimensional model based on the rendering range.
In an alternative embodiment, the first processing module 72, when determining the target vertex from a plurality of vertices included in the three-dimensional model based on the current display view angle of the three-dimensional model, is configured to: screen out, based on the current display view angle and the position information of the multiple vertices of the three-dimensional model in the corresponding model coordinate system, the vertices invisible at the display view angle from the multiple vertices of the three-dimensional model as the target vertices.
In an optional embodiment, the first processing module 72 is further configured to: determining a shooting pose of a virtual camera for shooting the three-dimensional model under a model coordinate system corresponding to the three-dimensional model; and determining the current display visual angle of the three-dimensional model based on the shooting pose of the virtual camera.
In an optional implementation manner, when the second processing module 73 performs display superposition processing on the first rendering model and the second rendering model to obtain a target rendering image, the second processing module is configured to: rendering the first rendering model based on the current display view angle of the three-dimensional model to obtain a first rendering layer, and rendering the second rendering model to obtain a second rendering layer; and superposing the first rendering layer to the front end of the second rendering layer to generate the target rendering image.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 8, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and the computer device includes:
a processor 10 and a memory 20; the memory 20 stores machine-readable instructions executable by the processor 10, the processor 10 being configured to execute the machine-readable instructions stored in the memory 20, the processor 10 performing the following steps when the machine-readable instructions are executed by the processor 10:
obtaining a first rendering model generated after a three-dimensional model of a target object is rendered by using a color map; performing position offset processing corresponding to the vertex normal direction and rendering processing of preset colors on the vertex in the three-dimensional model to obtain a second rendering model; and displaying and overlapping the first rendering model and the second rendering model to obtain a target rendering image.
The storage 20 includes a memory 210 and an external storage 220; the memory 210 is also referred to as an internal memory, and temporarily stores operation data in the processor 10 and data exchanged with the external memory 220 such as a hard disk, and the processor 10 exchanges data with the external memory 220 through the memory 210.
For the specific execution process of the instruction, reference may be made to the steps of the method for generating a rendered image described in the embodiment of the present disclosure, and details are not described here again.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method for generating a rendered image described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the method for generating a rendered image in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method of generating a rendered image, comprising:
obtaining a first rendering model generated after rendering a three-dimensional model of a target object by using a color map; and
performing position offset processing corresponding to the vertex normal direction and rendering processing of preset colors on the vertex in the three-dimensional model to obtain a second rendering model;
and displaying and overlapping the first rendering model and the second rendering model to obtain a target rendering image.
2. The generation method according to claim 1, wherein performing a position shift process corresponding to a vertex normal direction and a rendering process of a preset color on a vertex in the three-dimensional model includes:
determining a target vertex from a plurality of vertexes included in the three-dimensional model based on the current display visual angle of the three-dimensional model, and performing position offset processing on the target vertex in a corresponding vertex normal direction to obtain a middle three-dimensional model;
and rendering the intermediate three-dimensional model by using the preset color to generate a second rendering model.
3. The generation method according to claim 1, wherein performing a position shift process corresponding to a vertex normal direction and a rendering process of a preset color on a vertex in the three-dimensional model includes:
rendering the three-dimensional model by using a preset color to obtain an intermediate rendering model;
and determining a target vertex from a plurality of vertexes included in the intermediate rendering model based on the current display visual angle of the intermediate rendering model, and performing position offset processing corresponding to the vertex normal direction on the target vertex to obtain the second rendering model.
4. The generation method according to any one of claims 1 to 3, wherein the performing of the positional shift processing of the vertices in the three-dimensional model in the directions corresponding to the vertex normals includes:
acquiring the offset of the position offset processing; the offset is used for representing the distance between the position of the target vertex after the offset and the position of the target vertex before the offset;
and performing position offset processing corresponding to the vertex normal direction on the vertex in the three-dimensional model by using the offset.
5. The generation method according to any one of claims 1 to 4, wherein performing rendering processing of a preset color on the vertex in the three-dimensional model includes:
acquiring a rendering range when the vertex is rendered by the preset color;
and performing preset color rendering processing on the vertex in the three-dimensional model based on the rendering range.
6. The method of generating according to claim 2, wherein said determining a target vertex from a plurality of vertices included in the three-dimensional model based on a current display perspective of the three-dimensional model comprises:
screening out, based on the current display visual angle and the position information of the multiple vertexes in the three-dimensional model under the corresponding model coordinate system, the vertexes invisible under the display visual angle from the multiple vertexes of the three-dimensional model to be used as the target vertexes.
7. The method of generating as claimed in claim 6, further comprising:
determining a shooting pose of a virtual camera shooting the three-dimensional model under a model coordinate system corresponding to the three-dimensional model;
and determining the current display visual angle of the three-dimensional model based on the shooting pose of the virtual camera.
8. The generation method according to any one of claims 1 to 7, wherein the performing display superposition processing on the first rendering model and the second rendering model to obtain a target rendering image includes:
rendering the first rendering model based on the current display view angle of the three-dimensional model to obtain a first rendering layer, and rendering the second rendering model to obtain a second rendering layer;
and superposing the first rendering layer to the front end of the second rendering layer to generate the target rendering image.
9. A generation apparatus for rendering an image, comprising:
the acquisition module is used for acquiring a first rendering model generated after the three-dimensional model of the target object is rendered by using the color map; and
the first processing module is used for performing position offset processing corresponding to the vertex normal direction and rendering processing of preset colors on the vertex in the three-dimensional model to obtain a second rendering model;
and the second processing module is used for carrying out display superposition processing on the first rendering model and the second rendering model to obtain a target rendering image.
10. A computer device, comprising: a processor, a memory storing machine readable instructions executable by the processor, the processor for executing the machine readable instructions stored in the memory, the machine readable instructions when executed by the processor, the processor performing the steps of the method of generating a rendered image according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a computer device, performs the steps of the method of generating a rendered image according to any one of claims 1 to 8.
CN202210158366.6A 2022-02-21 2022-02-21 Rendering image generation method and device, computer equipment and storage medium Pending CN114529657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210158366.6A CN114529657A (en) 2022-02-21 2022-02-21 Rendering image generation method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114529657A true CN114529657A (en) 2022-05-24

Family

ID=81625643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210158366.6A Pending CN114529657A (en) 2022-02-21 2022-02-21 Rendering image generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114529657A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393494A (en) * 2022-08-24 2022-11-25 北京百度网讯科技有限公司 City model rendering method, device, equipment and medium based on artificial intelligence
CN115393494B (en) * 2022-08-24 2023-10-17 北京百度网讯科技有限公司 Urban model rendering method, device, equipment and medium based on artificial intelligence
CN116721044A (en) * 2023-08-09 2023-09-08 广州市乐淘动漫设计有限公司 Multimedia cartoon making and generating system
CN116721044B (en) * 2023-08-09 2024-04-02 广州市乐淘动漫设计有限公司 Multimedia cartoon making and generating system

Similar Documents

Publication Publication Date Title
CN111369655B (en) Rendering method, rendering device and terminal equipment
CN114529657A (en) Rendering image generation method and device, computer equipment and storage medium
CN108765520B (en) Text information rendering method and device, storage medium and electronic device
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN112052864B (en) Image drawing method and device, electronic equipment and readable storage medium
CN111653175B (en) Virtual sand table display method and device
CN111583398B (en) Image display method, device, electronic equipment and computer readable storage medium
CN114549719A (en) Rendering method, rendering device, computer equipment and storage medium
CN114119848B (en) Model rendering method and device, computer equipment and storage medium
CN109410309A Relighting method and device, electronic equipment and computer storage medium
KR20060108271A (en) Method of image-based virtual draping simulation for digital fashion design
CN108230430B (en) Cloud layer mask image processing method and device
WO2019042028A1 (en) All-around spherical light field rendering method
CN116385619A (en) Object model rendering method, device, computer equipment and storage medium
CN114529656A (en) Shadow map generation method and device, computer equipment and storage medium
CN114581592A (en) Highlight rendering method and device, computer equipment and storage medium
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium
CN110038302A (en) Grid generation method and device based on Unity3D
CN115758502A (en) Carving processing method and device of spherical model and computer equipment
CN114627225A (en) Method and device for rendering graphics and storage medium
CN109427084A Map display method, device, terminal and storage medium
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
CN114529655A (en) Edge light rendering method and device, computer equipment and storage medium
CN114529654A (en) Model generation method and device, computer equipment and storage medium
CN114519760A (en) Method and device for generating map, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination