CN115330925A - Image rendering method and device, electronic equipment and storage medium

Image rendering method and device, electronic equipment and storage medium

Info

Publication number
CN115330925A
CN115330925A (application CN202211001106.4A)
Authority
CN
China
Prior art keywords
pixel
image
rendered
determining
model
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211001106.4A
Other languages
Chinese (zh)
Inventor
曲春恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211001106.4A
Publication of CN115330925A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The embodiment of the disclosure provides an image rendering method and device, electronic equipment and a storage medium. The method includes: in response to an image rendering request input for a model to be rendered, determining an initial rendering effect graph of the model to be rendered; performing layered shading processing on the initial rendering effect graph based on a preset grayscale map to obtain a target rendering effect graph; and displaying the target rendering effect graph in a preset display area. According to this technical scheme, the image rendering method is optimized, the obtained target rendering effect graph is more vivid, and the image rendering effect is improved.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to computer graphics rendering technologies, and in particular, to an image rendering method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer graphics rendering technology, more and more attempts are made to bring painting styles such as ink-and-wash into computer applications, that is, to use a computer to simulate the rendering of images in various painting styles. For example, rendering an image in the ink-and-wash style.
However, effect graphs with a painting style rendered by related image rendering technologies are often relatively stiff and lack vividness, which affects the user experience. How to improve the stylized rendering effect of an image has therefore become an urgent problem to be solved.
Disclosure of Invention
The disclosure provides an image rendering method, an image rendering device, an electronic device and a storage medium, so as to achieve more realistic and vivid image rendering.
In a first aspect, an embodiment of the present disclosure provides an image rendering method, where the method includes:
in response to an image rendering request input for a model to be rendered, determining an initial rendering effect graph of the model to be rendered;
performing layered shading processing on the initial rendering effect graph based on a preset grayscale map to obtain a target rendering effect graph;
and displaying the target rendering effect graph in a preset display area.
In a second aspect, an embodiment of the present disclosure further provides an image rendering apparatus, where the apparatus includes:
the rendering request module is used for responding to an image rendering request input aiming at a model to be rendered, and determining an initial rendering effect graph of the model to be rendered;
the rendering effect generating module is used for carrying out layered shading processing on the initial rendering effect graph based on a preset gray-scale graph to obtain a target rendering effect graph;
and the rendering effect display module is used for displaying the target rendering effect graph in a preset display area.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image rendering method according to any embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are configured to perform the image rendering method according to any of the disclosed embodiments.
According to the technical scheme of the embodiment of the disclosure, the initial rendering effect graph of the model to be rendered is determined in response to an image rendering request input for the model to be rendered, simulating the effect of brush strokes. Layered shading processing is then performed on the initial rendering effect graph based on a preset grayscale map to obtain a target rendering effect graph, simulating the distinction between color concentrations and the transitions between colors of different concentrations, so that the obtained rendering effect graph is more realistic and vivid. Finally, the target rendering effect graph is displayed in a preset display area, so that the final rendering effect graph is presented in response to the image rendering request. This solves the problem that images rendered by related image rendering technologies are stiff and insufficiently vivid, and improves the realism and vividness of image rendering.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image rendering method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a preset grayscale map used for implementing an image rendering method according to an embodiment of the disclosure;
FIG. 3 is a schematic flow chart diagram of another image rendering method provided by the embodiments of the present disclosure;
fig. 4 is a schematic diagram of a preset noise map used in implementing an image rendering method according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a further image rendering method according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a brush map used when implementing an image rendering method according to an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of an alternative example of an image rendering method provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the disclosure;
fig. 9 is a schematic structural diagram of an image rendering electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, usage scenarios, and the like of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require acquiring and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control by which the user chooses "agree" or "disagree" to decide whether to provide personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and is not intended to limit the implementation of the present disclosure, and other ways of satisfying the relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data referred to in this disclosure, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the applicable laws and regulations and related regulations.
Fig. 1 is a schematic diagram of an image rendering process provided by an embodiment of the present disclosure, where the embodiment of the present disclosure is applicable to a situation where an image with a painting style is rendered through a three-dimensional model, and the method may be executed by an image rendering apparatus, where the apparatus may be implemented in a form of software and/or hardware, and optionally, the apparatus may be implemented by an electronic device, where the electronic device may be a mobile terminal, a PC terminal, a server, or the like.
As shown in fig. 1, the method includes:
s110, responding to an image rendering request input by aiming at the model to be rendered, and determining an initial rendering effect graph of the model to be rendered.
The model to be rendered can be understood as a three-dimensional model to be rendered, generally a model with three-dimensional data constructed in a virtual three-dimensional space by three-dimensional modeling software. It can be understood that the form of the model to be rendered is related to the actual application scenario and may be determined according to actual requirements; the model to be rendered may differ across application scenarios.
The image rendering request can be understood as a request, initiated for the model to be rendered, to start rendering. The image rendering request can be generated in various ways: for example, when a trigger operation on a preset rendering request control for initiating the image rendering request is received, when a preset trigger event for starting image rendering is detected, or when preset voice information or gesture information is detected. It is to be understood that the generation manner of the image rendering request may be set according to the application scenario, and is not specifically limited herein.
The initial rendering effect graph can be understood as a preliminarily rendered two-dimensional image that has the color characteristics and brush characteristics of the target painting style. As can be seen from the foregoing, the model to be rendered is a three-dimensional model; therefore, in the embodiment of the present disclosure, the image rendering request input for the model to be rendered is related to a line-of-sight direction, that is, the initial rendering effect graph is obtained by rendering the model to be rendered based on the line-of-sight direction.
Optionally, before the image rendering request is input for the model to be rendered, the method may further include: receiving a model pose adjustment operation input for the model to be rendered, so as to adjust the model to be rendered to a target rendering pose. Further, an image rendering request input for the model to be rendered in the target rendering pose is received. It is understood that the target rendering pose corresponds to the line-of-sight direction.
And S120, performing layered shading processing on the initial rendering effect graph based on a preset grayscale map to obtain a target rendering effect graph.
The gray scale map may be understood as an image obtained by dividing the white color and the black color into several levels according to a logarithmic relationship and representing the levels by gray scale, and in the embodiment of the present disclosure, as shown in fig. 2, the preset gray scale map may be a gradient map including two or more strip-shaped regions. The gray scale values of all the pixel points in the same strip-shaped area in the gradient map are the same, and different strip-shaped areas correspond to different gray scale values respectively. It is understood that the number of gray levels included in the gray level map and the specific gray level value can be set according to the application scenario, and are not limited herein.
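As a minimal sketch of such a banded grayscale map (the band count, gray values, and dimensions are illustrative assumptions, not values from the patent):

```python
import numpy as np

# A sketch of a preset grayscale map as described above: a strip image
# split into bands, every pixel in a band sharing one gray value. The
# band count and gray levels are illustrative assumptions.
def make_banded_grayscale_map(width=256, height=32, levels=(0.2, 0.5, 0.8)):
    gray = np.empty((height, width), dtype=np.float32)
    band = width // len(levels)
    for i, level in enumerate(levels):
        start = i * band
        end = width if i == len(levels) - 1 else start + band
        gray[:, start:end] = level  # constant gray value within each band
    return gray
```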
In the embodiment of the present disclosure, the initial rendering effect graph may look relatively harsh. Therefore, rather than directly outputting the initial rendering effect graph as the final rendering effect graph, the method can further sample pixel points in the preset grayscale map and fuse the gray values of the sampled pixel points into the initial rendering effect graph, so that the initial rendering effect graph presents a layered shading effect and a rendering effect graph with a better rendering effect is obtained.
And S130, displaying the target rendering effect graph in a preset display area.
As can be seen from the foregoing, the target rendering effect map is a final displayed two-dimensional rendering effect map obtained by performing hierarchical shading processing on the initial rendering effect map of the model to be rendered through a preset grayscale map.
The preset display area may be understood as an area preset for displaying the target rendering effect graph. Typically, the preset display area may be an effect-graph preview area available for displaying the target rendering effect graph, or a special-effect display area in the special-effect application scene used for generating the target rendering effect graph. Optionally, the preset display area may also be used to display the initial rendering effect graph. It is understood that specific information such as the position and size of the preset display area may be preset according to requirements, and is not specifically limited herein. Optionally, the preset display area is chosen so that the target rendering effect graph is easy to observe, for example, a partial area in the rendering interactive interface where the image rendering request is input, or a window area popped up in the current interface.
According to the technical scheme, the initial rendering effect graph of the model to be rendered is determined in response to an image rendering request input for the model to be rendered, so that the initial rendering effect graph carries the painting style. Layered shading processing is then performed on the initial rendering effect graph based on a preset grayscale map to obtain a target rendering effect graph, simulating the distinction between color concentrations and the transitions between colors of different concentrations, so that the obtained rendering effect graph is more realistic and vivid. Finally, the target rendering effect graph is displayed in a preset display area, so that the final rendering effect graph is presented in response to the image rendering request. This solves the problem that images rendered by related image rendering technologies are stiff and insufficiently vivid, and improves the realism and vividness of image rendering.
Fig. 3 is a schematic flow chart of another image rendering method provided in the embodiment of the present disclosure, and the embodiment is a refinement of how to perform hierarchical shading processing on the initial rendering effect map based on a preset grayscale map in the above embodiment.
As shown in fig. 3, the method of the embodiment of the present disclosure may specifically include:
s210, in response to an image rendering request input aiming at a model to be rendered, determining an initial rendering effect graph of the model to be rendered.
S220, aiming at each pixel point to be processed of the initial rendering effect image, determining first illumination information of the pixel point to be processed, and determining a halation sampling point corresponding to the pixel point to be processed in a preset gray-scale image according to the first illumination information.
The pixel points to be processed can be understood as pixel points to be subjected to layered shading processing. The pixel points to be processed may be all pixel points of the initial rendering effect graph, or may be partial pixel points of the initial rendering effect graph. Specifically, which pixel points in the initial rendering effect graph are used as the pixel points to be processed can be set according to actual requirements, and no specific limitation is made here.
The first illumination information can be understood as the illumination information of the pixel point to be processed. It can be understood that the illumination information is associated with the position of the pixel point and the illumination direction. In the embodiment of the present disclosure, the illumination direction may be determined according to the actual situation, and is not specifically limited herein. Illustratively, the first illumination information may be obtained as the dot product of the normal direction of the pixel point to be processed and the line-of-sight direction.
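As a minimal sketch of this dot-product term (the function name and the remapping to [0, 1], used so the value can later serve as a sampling coordinate, are assumptions, not from the patent text):

```python
import numpy as np

# A sketch, assuming the first illumination information is the dot product
# of the pixel's normal direction and the line-of-sight direction, remapped
# to [0, 1] so it can later index the preset grayscale map.
def first_illumination(normal, view_dir):
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    return 0.5 * (float(np.dot(n, v)) + 1.0)  # dot in [-1, 1] -> [0, 1]
```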
Optionally, the determining, according to the first illumination information, a halation sampling point corresponding to the pixel point to be processed in the preset grayscale map includes: determining a noise sampling point corresponding to the pixel point to be processed in a preset noise map according to the preset noise map and the first illumination information; and determining the halation sampling point corresponding to the pixel point to be processed in the preset grayscale map according to the noise pixel value of the noise sampling point and the first illumination information.
The preset noise map can be understood as a preset interference image for disturbing the coordinate information of the pixel points to be processed. Exemplarily, as shown in fig. 4. In the embodiment of the present disclosure, the gray value of each pixel in the preset noise map may be preset according to a requirement, and is not specifically limited herein. The noise sampling points can be understood as pixels corresponding to the pixels to be processed, which are determined in the preset noise image by taking the first illumination information as coordinate values.
Considering that the preset grayscale map may contain areas where the gray value changes abruptly, for each pixel point to be processed the calculated illumination information is used as a coordinate, the coordinate is perturbed with the noise map, and the preset grayscale map is sampled to obtain the corresponding gray level. The pixel gray values of the noise sampling points obtained from multiple samplings are then mixed to obtain a fused color. In this way, when a halation sampling point lies at the junction of two colors in the preset grayscale map, the mutual fusion of pixels can simulate the distinction between color concentrations and the transitions between colors of different concentrations.
Optionally, the determining, according to the noise pixel value of the noise sampling point and the illumination information, a halation sampling point corresponding to the pixel point to be processed in a preset gray-scale image includes: and adjusting the noise pixel value of the noise sampling point based on a preset noise adjustment parameter, adding the adjusted noise pixel value and the illumination information to obtain a sampling pixel coordinate, and determining a halation sampling point corresponding to the pixel point to be processed in a preset gray-scale image according to the sampling pixel coordinate.
The preset noise adjustment parameter may be understood as a preset parameter for adjusting the noise pixel value of the noise sampling point. In this embodiment of the present disclosure, the specific value of the preset noise adjustment parameter may be preset and adjusted according to a requirement, which is not limited herein.
Specifically, the preset noise adjustment parameter may be multiplied by the noise pixel value of the noise sampling point to obtain an adjusted noise pixel value, and then the adjusted noise pixel value is added to the illumination information to obtain a sampling pixel coordinate, and then a pixel point in the preset gray-scale map located at the sampling pixel coordinate is used as a halation sampling point corresponding to the pixel point to be processed.
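As a sketch of this perturbed sampling step (single-channel float maps in [0, 1], fixed sampling rows, and the parameter value are simplifying assumptions; the helper name is illustrative):

```python
import numpy as np

# A sketch of noise-perturbed grayscale sampling, assuming single-channel
# float maps in [0, 1]. Rows are fixed for brevity; the patent does not
# specify the exact indexing scheme.
def sample_halation(gray_map, noise_map, illum, noise_adjust=0.05):
    nh, nw = noise_map.shape
    # Noise sampling point: the illumination information indexes the noise map.
    noise_val = noise_map[nh // 2, int(illum * (nw - 1))]
    # Adjusted noise pixel value plus illumination gives the sampling coordinate.
    coord = np.clip(illum + noise_val * noise_adjust, 0.0, 1.0)
    gh, gw = gray_map.shape
    # Halation sampling point: the grayscale map pixel at that coordinate.
    return gray_map[gh // 2, int(coord * (gw - 1))]
```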
S230, determining a neighborhood pixel point of the pixel point to be processed, determining second illumination information of the neighborhood pixel point, and determining a halation sampling point corresponding to the pixel point to be processed in a preset gray-scale image according to the second illumination information.
The neighborhood pixels can be understood as pixels located in a preset neighborhood of the pixels to be processed. Optionally, the neighborhood pixel point may be a pixel point in a preset direction and having a distance from the to-be-processed pixel point within a preset distance range. The number of the neighborhood pixels can be one, two or more. In the embodiment of the present disclosure, the preset direction, the preset distance range, the selection mode, the number, the numerical value, and the like of the adopted neighborhood pixels may be preset according to the requirement, and are not specifically limited herein.
And the second illumination information is the illumination information of the neighborhood pixel point of the pixel point to be processed. Similarly, the second illumination information may be obtained as the dot product of the normal direction of the neighborhood pixel point and the line-of-sight direction.
Specifically, the neighborhood pixel point of the pixel point to be processed may be obtained according to a preset manner of acquiring neighborhood pixel points. For example, based on the coordinate information of the pixel point to be processed, a neighborhood pixel point is obtained according to a preset direction and a preset interval distance, or a pixel point adjacent to the pixel point to be processed in a preset direction is taken as the neighborhood pixel point, and so on. Further, the second illumination information of the neighborhood pixel point can be calculated, and the halation sampling point corresponding to the pixel point to be processed in the preset grayscale map is determined by taking the second illumination information as the coordinate value.
S240, determining a target pixel value of the pixel point to be processed according to the halation pixel value of the halation sampling point and the original pixel value of the halation sampling point in the initial rendering effect graph, to obtain a target rendering effect graph.
As described above, the number of halation sampling points is related to how the neighborhood pixel points are selected, so there may be one halation sampling point, or two or more.
Specifically, when there is one halation sampling point, the halation pixel value of the halation sampling point is fused with the original pixel value in the initial rendering effect graph to obtain the target pixel value of the pixel point to be processed, and thereby the target rendering effect graph.
Optionally, when there are two or more halation sampling points, the halation pixel values of the at least two halation sampling points are summed or weighted-summed and then averaged to obtain a sampled pixel value; the sampled pixel value is then fused with the original pixel value of the halation sampling point in the initial rendering effect graph to obtain the target pixel value of the pixel point to be processed.
The sampled pixel value and the original pixel value of the halation sampling point in the initial rendering effect graph may be fused by addition, multiplication, weighted addition, weighted multiplication, or the like.
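A minimal sketch of this fusion step, assuming float pixel values in [0, 1] and multiplicative fusion (one of the options listed above); the names are illustrative:

```python
import numpy as np

# A sketch of S240 under stated assumptions: average the halation pixel
# values sampled for the pixel and its neighborhood pixel point(s), then
# fuse the average multiplicatively with the original pixel value taken
# from the initial rendering effect graph.
def target_pixel_value(original_value, halation_samples, weights=None):
    samples = np.asarray(halation_samples, dtype=np.float32)
    if weights is None:
        sampled = float(samples.mean())                      # sum, then average
    else:
        w = np.asarray(weights, dtype=np.float32)
        sampled = float((samples * w).sum() / len(samples))  # weighted sum, then average
    return original_value * sampled                          # multiplicative fusion
```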
And S250, displaying the target rendering effect graph in a preset display area.
According to the technical scheme, for each pixel point to be processed of the initial rendering effect graph, the halation sampling point corresponding to the pixel point to be processed in the preset grayscale map is determined according to the first illumination information of the pixel point to be processed. Further, a halation sampling point corresponding to the pixel point to be processed in the preset grayscale map is determined according to the second illumination information of the neighborhood pixel point of the pixel point to be processed. The target pixel value of the pixel point to be processed is then determined according to the halation pixel values of the halation sampling points and the original pixel value of the halation sampling point in the initial rendering effect graph. By combining the gray values associated with the neighborhood pixel points of the pixel point to be processed, the distinction between color concentrations and the transitions between colors of different concentrations are achieved, and the halation transition can be smoother.
Fig. 5 is a schematic flow chart of another image rendering method provided in the embodiment of the present disclosure, and the embodiment details how to determine the initial rendering effect graph of the model to be rendered in the above embodiment.
As shown in fig. 5, the method of the embodiment of the present disclosure may specifically include:
and S310, responding to an image rendering request input for the model to be rendered.
S320, obtaining a color map and a brush map of the model to be rendered, and determining internal texture information of internal pixel points of the model to be rendered according to the color map and the brush map.
The color map can be understood as a map containing the color information of the pixel points of the model to be rendered; for example, a two-dimensional image obtained by unwrapping the model to be rendered. The brush map can be understood as a map indicating the drawing characteristics of a brush of a painting style, for example, a map encoding the material of the brush, the drawn shape, the drawing consistency, the filling degree of the pigment, and the like. As shown in fig. 6, the brush map may be a map with black edge areas and a white or gray middle area. In the embodiment of the present disclosure, one or more strokes may be drawn in a black image with a white brush. Note that the brush maps used for rendering images of different painting styles may be the same or different.
The internal pixel points can be understood as the pixel points of the model to be rendered other than those at the edge of the model to be rendered. The internal texture information can be understood as the texture information corresponding to the internal pixel points.
Optionally, the determining, according to the color map and the brush map, internal texture information of an internal pixel point of the model to be rendered includes: and performing color level reduction processing on the color map to obtain a basic color map, and performing image fusion on the brush map and the basic color map to obtain internal texture information of internal pixel points of the model to be rendered.
When the model to be rendered is rich in color and the target rendering effect graph uses simple colors, the colors of the model to be rendered can be adjusted toward the colors of the target rendering effect graph by reducing color levels.
In the embodiment, there may be a plurality of color level reduction manners. For example, the gradient map may be sampled according to the illumination calculation result for mapping, or the color map may be processed by a predetermined color-level-reduction algorithm to obtain a basic color map. The color level reduction algorithm may be set according to actual requirements, and is not specifically limited herein.
In a case that the color of the model to be rendered is relatively single, optionally, the performing the color level reduction processing on the color map to obtain a basic color map includes: and obtaining a basic color map by adjusting the image saturation and/or the image contrast of the color map.
The basic color map can be understood as the color map obtained by performing color level reduction processing on the original color map. After the basic color map is obtained, image fusion is further performed on the brush map and the basic color map to obtain the internal texture information of the internal pixel points of the model to be rendered.
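As a minimal sketch of this step (the saturation/contrast factors and the multiplicative brush fusion are illustrative assumptions, not the patent's prescribed algorithm):

```python
import numpy as np

# A sketch of color level reduction plus brush fusion. color_map: H x W x 3
# floats in [0, 1]; brush_map: H x W floats in [0, 1]. Parameters assumed.
def internal_texture(color_map, brush_map, saturation=0.6, contrast=0.8):
    gray = color_map.mean(axis=2, keepdims=True)
    base = gray + saturation * (color_map - gray)   # reduce saturation
    base = 0.5 + contrast * (base - 0.5)            # reduce contrast
    base = np.clip(base, 0.0, 1.0)                  # the basic color map
    return base * brush_map[..., None]              # fuse in the brush map
```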
S330, determining edge pixel points of the model to be rendered, and performing edge tracing processing on the edge pixel points to obtain edge texture information of the edge pixel points of the model to be rendered.
The edge pixel points can be understood as pixel points located at the edge of the model to be rendered. The edge texture information may be understood as texture information corresponding to the edge pixel points.
Optionally, the determining edge pixel points of the model to be rendered includes: determining the edge pixel points of the model to be rendered based on the inner product of the line-of-sight direction and the normal direction of the model to be rendered, or by expanding the model outward along the normal.
The line-of-sight direction can be understood as the ray direction vector from the viewpoint toward the model to be rendered. The normal direction can be understood as the direction vector of the model surface normal.
In the embodiment of the present disclosure, the edge pixel points of the model to be rendered may be determined based on the inner product of the line-of-sight direction and the normal direction of the model to be rendered. Specifically, whether a pixel point is an internal pixel point or an edge pixel point may be determined according to whether the dot product of the line-of-sight direction and the normal direction exceeds a preset inner-product threshold: for any pixel point of the model to be rendered, when the value exceeds the preset inner-product threshold, the pixel point is determined to be an internal pixel point; when it does not, the pixel point is determined to be an edge pixel point. For example, the dot product of the line-of-sight direction and the normal direction may be normalized to lie between 0 and 1. It can be understood that the closer a pixel point is to the edge, the closer this inner product is to 0, so the edge pixel points can be determined according to the preset inner-product threshold. The preset inner-product threshold may be preset according to requirements and is not specifically limited herein.
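A minimal sketch of this edge test (the threshold value and the clamping are assumptions; the patent only states that the threshold is preset):

```python
import numpy as np

# A sketch of the inner-product edge test: the dot product of the view
# direction and the normal, clamped to [0, 1], approaches 0 near the
# silhouette. The threshold of 0.3 is an illustrative assumption.
def is_edge_pixel(normal, view_dir, threshold=0.3):
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    ndotv = np.clip(float(np.dot(n, v)), 0.0, 1.0)
    return ndotv <= threshold  # near 0 -> edge pixel point
```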
Optionally, the performing the edge tracing processing on the edge pixel point to obtain the edge texture information of the edge pixel point of the model to be rendered includes: and performing edge tracing processing on the edge pixel points based on a preset edge map corresponding to the model to be rendered. Similarly, the edge map may be understood as a map used to highlight painting-style edge-drawing features. For example, the drawing characteristics information may include material used for edge drawing, drawing shape, drawing consistency, and filling degree of pigment. It should be noted that, in the embodiment of the present disclosure, the edge map and the brush map may be the same or different.
Specifically, performing the edge tracing processing on the edge pixel point to obtain the edge texture information of the edge pixel point of the model to be rendered may include: calculating an inner product of a sight line direction and a normal direction of the model to be rendered aiming at each edge pixel point, and taking the inner product as a sampling abscissa of the edge pixel point of the model to be rendered; summing the normal abscissa and the normal ordinate in the normal direction, multiplying the sum by the normal abscissa to obtain an initial ordinate value, and adjusting the initial ordinate value based on a preset coordinate adjustment parameter to obtain a sampling ordinate in an image coordinate system; and sampling pixel values from a preset edge map corresponding to the model to be rendered based on the sampling abscissa and the sampling ordinate, and determining edge texture information of edge pixel points of the model to be rendered according to the sampled pixel values.
As can be understood, the process of tracing is to assign values to the edge pixels of the model to be rendered. The edge map can be understood as an image sampled when the edge pixel points of the model to be rendered are assigned.
Specifically, after the edge pixel points of the model to be rendered are determined, for each edge pixel point, the inner product of the line-of-sight direction and the normal direction is calculated and used as the sampling abscissa of the edge pixel point of the model to be rendered. Further, the normal abscissa and the normal ordinate of the normal direction are summed, the sum is multiplied by the normal abscissa to obtain an initial ordinate value, and the initial ordinate value is adjusted based on a preset coordinate adjustment parameter to obtain the sampling ordinate in the image coordinate system. The value range of the preset coordinate adjustment parameter may be 0 to 1; in this embodiment of the present disclosure, its specific value may be set according to actual requirements and is not specifically limited herein. The pixel point located, in the edge map corresponding to the model to be rendered, at the coordinate position formed by the sampling abscissa and the sampling ordinate is then taken as a sampling pixel point, and the pixel value of the edge pixel point of the model to be rendered is determined according to the pixel value of this sampling pixel point.
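A sketch of this edge-map sampling under the assumption that both sampling coordinates lie in [0, 1] (the clamping and the adjustment parameter value are illustrative):

```python
import numpy as np

# A sketch of edge tracing: u is the view/normal inner product; v is
# (nx + ny) * nx scaled by a preset coordinate adjustment parameter,
# following the steps described above. edge_map: H x W floats in [0, 1].
def trace_edge(edge_map, normal, view_dir, coord_adjust=0.5):
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    u = np.clip(float(np.dot(n, v)), 0.0, 1.0)    # sampling abscissa
    raw = (n[0] + n[1]) * n[0]                    # initial ordinate value
    vv = np.clip(raw * coord_adjust, 0.0, 1.0)    # adjusted sampling ordinate
    h, w = edge_map.shape
    return edge_map[int(vv * (h - 1)), int(u * (w - 1))]
```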
Optionally, determining edge texture information of an edge pixel point of the model to be rendered according to the sampled pixel value includes: taking the sampled pixel value as the pixel value of an edge pixel point of the model to be rendered so as to obtain edge texture information of the edge pixel point; or adding the sampled pixel value with the pixel value of the edge pixel point of the model to be rendered to obtain the edge texture information of the edge pixel point.
S340, determining an initial rendering effect graph according to the internal texture information and the edge texture information.
Specifically, the obtained internal texture information and edge texture information are fused to determine the initial rendering effect graph. It is to be understood that the internal texture information and the edge texture information may be added to determine the initial rendering effect graph.
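As a minimal sketch of this additive combination (clipping to the valid pixel range is an added assumption):

```python
import numpy as np

# A sketch of S340: the initial rendering effect graph as the sum of the
# internal texture and edge texture images, assumed to be floats in [0, 1].
def initial_effect_graph(internal_tex, edge_tex):
    return np.clip(internal_tex + edge_tex, 0.0, 1.0)
```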
It should be noted that, in the embodiment of the present disclosure, the internal pixel points and the edge pixel points of the model to be rendered may be determined first; then, for each internal pixel point, the internal texture information of the internal pixel point is determined, and the internal texture information of the internal pixel points is added to the edge texture information of the edge pixel points to obtain the initial rendering effect graph. Alternatively, all pixel points of the model to be rendered may first be processed based on the brush map, and after the edge texture information of the edge pixel points is obtained, the internal texture information of the edge pixel points is replaced by the edge texture information.
And S350, performing layered shading processing on the initial rendering effect graph based on the preset grayscale map to obtain the target rendering effect graph.
And S360, displaying the target rendering effect graph in a preset display area.
According to the technical scheme of the embodiment of the disclosure, the color map and the brush map of the model to be rendered are obtained, and the internal texture information of the internal pixel points of the model to be rendered is determined according to the color map and the brush map. Further, the edge pixel points of the model to be rendered are determined, and edge tracing processing is performed on the edge pixel points to obtain the edge texture information of the edge pixel points of the model to be rendered. The initial rendering effect graph is then determined according to the internal texture information and the edge texture information. Edge pixel points and internal pixel points are thus processed separately, so the image rendering is finer-grained, and the initial rendering effect graph can show the drawn traces of the brush, making the simulated painting effect more lifelike.
Fig. 7 is a schematic flowchart of an alternative example of an image rendering method according to an embodiment of the present disclosure.
As shown in fig. 7, the image rendering method of this optional example includes the following specific steps:
1. After an image rendering request input for the model to be rendered is received, the model to be rendered can be determined, the color map of the model to be rendered is determined, and the basic color map is determined from the color map. Specifically, the color map may be converted into the basic color map by color level reduction, for example by adjusting the saturation and contrast of the color map, by sampling a preset gradient map for mapping according to the illumination calculation result, or by using a color-level-reduction algorithm. When the color of the color map is relatively uniform, only the contrast and saturation adjustment may be performed.
2. The internal texture information is determined according to the basic color map and the brush map. A mixing operation is performed on the basic color map and the brush map; the mixing operation may include averaging, multiplying, and/or taking decimal values, and the like.
3. The edge texture information of the edge pixel points is determined, that is, edge tracing is performed. Specifically, the edge pixel points can be determined by outward normal expansion, by the dot product of the line-of-sight vector and the normal vector, or by extending vertices along the normal. Furthermore, the value obtained from the dot product of the line-of-sight direction and the normal direction can be mapped to between 0 and 1 to serve as the sampling abscissa; the abscissa and ordinate values of the normal direction are then added, multiplied by the abscissa value, and multiplied by a preset adjustment parameter to obtain the sampling ordinate; the edge map is then sampled to trace the edge pixel points, giving the edge texture information.
4. And determining an initial rendering effect graph according to the internal texture information and the edge texture information.
5. A target rendering effect graph with a layered shading effect is obtained from the initial rendering effect graph and the preset grayscale map. For each pixel point to be shaded in the initial rendering effect graph, the calculated illumination information of that pixel point and of its neighborhood pixel points is used as coordinates, the coordinates are perturbed with a preset noise map, and the preset grayscale map is sampled to obtain the corresponding gray levels; the colors of the several gray levels sampled for each pixel point are then mixed. When the illumination coordinates correspond to the junction of two colors, the mutual fusion between the gray levels of the pixel points can simulate the distinction between color densities and the transitions between colors of different densities.
Fig. 8 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the disclosure, and as shown in fig. 8, the apparatus includes: a rendering request module 410, a rendering effect generation module 420, and a rendering effect presentation module 430.
The rendering request module 410 is configured to determine an initial rendering effect map of a model to be rendered in response to an image rendering request input for the model to be rendered; the rendering effect generating module 420 is configured to perform layered shading processing on the initial rendering effect map based on a preset grayscale map to obtain a target rendering effect map; the rendering effect display module 430 is configured to display the target rendering effect map in a preset display area.
According to the technical scheme of the embodiment of the disclosure, the initial rendering effect graph of the model to be rendered is determined in response to an image rendering request input for the model to be rendered, simulating the effect of brush strokes. Layered shading processing is then performed on the initial rendering effect graph based on a preset grayscale map to obtain a target rendering effect graph, simulating the distinction between color concentrations and the transitions between colors of different concentrations, so that the obtained rendering effect graph is more realistic and vivid. Finally, the target rendering effect graph is displayed in a preset display area, so that the final rendering effect graph is presented in response to the image rendering request. This solves the problem that images rendered by related image rendering technologies are stiff and insufficiently vivid, and improves the realism and vividness of image rendering.
Optionally, the rendering effect generating module 420 includes: a halation sampling point determining submodule and a target pixel value determining submodule.
The halation sampling point determining submodule is used for determining, for each pixel point to be processed of the initial rendering effect graph, first illumination information of the pixel point to be processed, and determining, according to the first illumination information, a halation sampling point corresponding to the pixel point to be processed in a preset grayscale map;
and the target pixel value determining submodule is used for determining the target pixel value of the pixel point to be processed according to the halation pixel value of the halation sampling point and the original pixel value of the halation sampling point in the initial rendering effect graph.
Optionally, the halation sampling point determining submodule includes: a noise sampling point determining unit and a halation sampling point determining unit.
The noise sampling point determining unit is used for determining, according to a preset noise map and the first illumination information, a noise sampling point corresponding to the pixel point to be processed in the preset noise map;
and the halation sampling point determining unit is used for determining, according to the noise pixel value of the noise sampling point and the first illumination information, the halation sampling point corresponding to the pixel point to be processed in the preset grayscale map.
Optionally, the halation sampling point determining unit is configured to:
adjust the noise pixel value of the noise sampling point based on a preset noise adjustment parameter, add the adjusted noise pixel value to the illumination information to obtain a sampling pixel coordinate, and determine, according to the sampling pixel coordinate, the halation sampling point corresponding to the pixel point to be processed in the preset grayscale map.
Optionally, the apparatus further includes a second illumination information determining submodule that acts before the target pixel value determining submodule.
The second illumination information determining submodule is used for determining a neighborhood pixel point of the pixel point to be processed, determining second illumination information of the neighborhood pixel point, and determining, according to the second illumination information, a halation sampling point corresponding to the pixel point to be processed in the preset grayscale map.
Optionally, the target pixel value determining submodule is configured to:
when there are two or more halation sampling points, sum or weighted-sum the halation pixel values of the at least two halation sampling points and then average to obtain a sampled pixel value;
and fuse the sampled pixel value with the original pixel value of the halation sampling point in the initial rendering effect graph to obtain the target pixel value of the pixel point to be processed.
Optionally, the rendering request module 410 includes: an internal texture information determining submodule, an edge texture information determining submodule and an initial rendering effect picture determining submodule.
The internal texture information determining submodule is used for acquiring a color map and a brush map of a model to be rendered, and determining internal texture information of internal pixel points of the model to be rendered according to the color map and the brush map;
the edge texture information determining submodule is used for determining edge pixel points of the model to be rendered and performing edge tracing processing on the edge pixel points to obtain edge texture information of the edge pixel points of the model to be rendered;
and the initial rendering effect graph determining sub-module is used for determining an initial rendering effect graph according to the internal texture information and the edge texture information.
Optionally, the edge texture information determining sub-module is configured to:
calculating an inner product of a sight direction and a normal direction of the model to be rendered aiming at each edge pixel point, and taking the inner product as a sampling abscissa of the edge pixel point of the model to be rendered;
summing the normal abscissa and the normal ordinate of the normal direction, multiplying the sum by the normal abscissa to obtain an initial ordinate value, and adjusting the initial ordinate value based on a preset coordinate adjustment parameter to obtain a sampling ordinate in an image coordinate system;
and sampling pixel values from a preset edge map corresponding to the model to be rendered based on the sampling abscissa and the sampling ordinate, and determining edge texture information of edge pixel points of the model to be rendered according to the sampled pixel values.
Optionally, the edge texture information determining sub-module is configured to:
and determining edge pixel points of the model to be rendered based on the inner product of the sight direction and the normal direction of the model to be rendered or the mode of normal outward expansion.
Optionally, the internal texture information determining sub-module includes: an internal texture information determination unit.
The internal texture information determining unit is configured to perform color level reduction processing on the color map to obtain a basic color map, and perform image fusion on the brush map and the basic color map to obtain internal texture information of internal pixels of the model to be rendered.
Optionally, the internal texture information determining unit is configured to:
and adjusting the image saturation and/or the image contrast of the color map to obtain a basic color map.
The image rendering device provided by the embodiment of the disclosure can execute the image rendering method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are also only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the embodiments of the present disclosure.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 9, a schematic diagram of an electronic device (e.g., the terminal device or the server in fig. 9) 500 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 9 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The electronic device provided by the embodiment of the disclosure and the image rendering method provided by the above embodiments belong to the same inventive concept. For technical details not described in detail in this embodiment, reference may be made to the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the image rendering method provided by the above embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to an image rendering request input for a model to be rendered, determine an initial rendering effect map of the model to be rendered; perform layered shading processing on the initial rendering effect map based on a preset gray-scale map to obtain a target rendering effect map; and display the target rendering effect map in a preset display area.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself; for example, the first obtaining unit may also be described as a "unit obtaining at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided an image rendering method comprising:
in response to an image rendering request input for a model to be rendered, determining an initial rendering effect map of the model to be rendered;
performing layered shading processing on the initial rendering effect map based on a preset gray-scale map to obtain a target rendering effect map;
and displaying the target rendering effect map in a preset display area.
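For illustration only, the three steps of example one can be pictured with a minimal sketch. The choices below (per-pixel brightness as a stand-in for illumination, a 1-D ramp as the preset gray-scale map, a fixed blend weight) are assumptions for this sketch, not the prescribed implementation:

```python
import numpy as np

def layered_shading(initial, gray_map, blend=0.5):
    # Use each pixel's brightness as a stand-in for its illumination,
    # index the preset gray-scale ramp with it, and blend the sampled
    # halation value back into the original colour.
    luma = initial.mean(axis=2)
    idx = np.clip((luma * (gray_map.size - 1)).astype(int), 0, gray_map.size - 1)
    halation = gray_map[idx]                              # per-pixel halation values
    return initial * (1 - blend) + halation[..., None] * blend

# Toy usage: a 4x4 "initial rendering effect map" and a 16-level ramp.
initial = np.random.rand(4, 4, 3).astype(np.float32)
ramp = np.linspace(0.2, 1.0, 16).astype(np.float32)       # preset gray-scale map
target = layered_shading(initial, ramp)                   # target rendering effect map
```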
In accordance with one or more embodiments of the present disclosure, [ example two ] there is provided the method of example one, further comprising:
for each pixel point to be processed of the initial rendering effect map, determining first illumination information of the pixel point to be processed, and determining a halation sampling point corresponding to the pixel point to be processed in a preset gray-scale map according to the first illumination information;
and determining a target pixel value of the pixel point to be processed according to the halation pixel value of the halation sampling point and the original pixel value of the pixel point to be processed in the initial rendering effect map.
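A per-pixel sketch of example two, assuming half-Lambert lighting as the first illumination information and a simple linear blend (both are assumptions; the text only requires that the illumination select the halation sampling point):

```python
import numpy as np

def shade_pixel(normal, light_dir, orig_rgb, gray_map, blend=0.5):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    illum = 0.5 * float(np.dot(n, l)) + 0.5        # first illumination information in [0, 1]
    u = int(np.clip(illum, 0.0, 1.0) * (gray_map.size - 1))
    halation = gray_map[u]                         # halation pixel value of the sampling point
    # Fuse with the pixel's original value from the initial rendering effect map.
    return orig_rgb * (1 - blend) + halation * blend
```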
In accordance with one or more embodiments of the present disclosure, [ example three ] there is provided the method of example two, further comprising:
determining a noise sampling point corresponding to the pixel point to be processed in a preset noise image according to the preset noise image and the first illumination information;
and determining the halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to a noise pixel value of the noise sampling point and the first illumination information.
In accordance with one or more embodiments of the present disclosure, [ example four ] there is provided the method of example three, further comprising:
adjusting the noise pixel value of the noise sampling point based on a preset noise adjustment parameter, adding the adjusted noise pixel value to the first illumination information to obtain a sampling pixel coordinate, and determining the halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to the sampling pixel coordinate.
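Examples three and four together describe a noise-jittered lookup: sample the noise image from the illumination, scale the noise by the adjustment parameter, and offset the gray-scale coordinate. A sketch, where indexing a single row of the noise image and the scale factor are both assumptions:

```python
import numpy as np

def halation_coord(illum, noise_map, noise_scale=0.1):
    # Noise sampling point chosen from the first illumination information
    # (indexing one row of the noise image is an illustrative choice).
    nx = int(np.clip(illum, 0.0, 1.0) * (noise_map.shape[1] - 1))
    noise = float(noise_map[0, nx])                     # noise pixel value
    adjusted = (noise - 0.5) * noise_scale              # scaled by the noise adjustment parameter
    return float(np.clip(illum + adjusted, 0.0, 1.0))  # sampling pixel coordinate

# The jitter breaks the hard banding between shading layers into an
# irregular, ink-like boundary.
u = halation_coord(0.63, np.random.rand(64, 64))
```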
In accordance with one or more embodiments of the present disclosure, [ example five ] there is provided the method of example two, further comprising:
determining neighborhood pixel points of the pixel point to be processed, determining second illumination information of the neighborhood pixel points, and determining halation sampling points corresponding to the pixel point to be processed in the preset gray-scale map according to the second illumination information.
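One way to read example five: each neighborhood pixel contributes its own (second) illumination value, and each value yields one more halation sampling coordinate. The square 3x3 neighborhood below is an assumption:

```python
import numpy as np

def neighborhood_coords(illum_map, x, y, radius=1):
    h, w = illum_map.shape
    coords = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny = int(np.clip(y + dy, 0, h - 1))   # clamp at the image border
            nx = int(np.clip(x + dx, 0, w - 1))
            # Second illumination information -> one gray-scale sampling coordinate.
            coords.append(float(np.clip(illum_map[ny, nx], 0.0, 1.0)))
    return coords
```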
In accordance with one or more embodiments of the present disclosure, [ example six ] there is provided the method of example five, further comprising:
in a case where two or more halation sampling points exist, summing or weighted-summing the halation pixel values of the at least two halation sampling points and then averaging the result to obtain a sampled pixel value;
and fusing the sampled pixel value with the original pixel value of the pixel point to be processed in the initial rendering effect map to obtain the target pixel value of the pixel point to be processed.
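A sketch of example six's fusion step. Dividing the (optionally weighted) sum by the sample count follows the text; the blend weight and any particular weights are assumptions:

```python
import numpy as np

def fuse_halation(halation_values, orig_rgb, weights=None, blend=0.5):
    vals = np.asarray(halation_values, dtype=np.float32)
    if weights is None:
        total = vals.sum()                                # plain summation
    else:
        total = (vals * np.asarray(weights, dtype=np.float32)).sum()  # weighted summation
    sampled = float(total / len(vals))                    # then the average operation
    # Fuse with the pixel's original value to get the target pixel value.
    return orig_rgb * (1 - blend) + sampled * blend
```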
In accordance with one or more embodiments of the present disclosure, [ example seven ] there is provided the method of example one, further comprising:
acquiring a color map and a brush map of the model to be rendered, and determining internal texture information of internal pixel points of the model to be rendered according to the color map and the brush map;
determining edge pixel points of the model to be rendered, and performing edge tracing processing on the edge pixel points to obtain edge texture information of the edge pixel points of the model to be rendered;
and determining the initial rendering effect map according to the internal texture information and the edge texture information.
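The assembly in example seven can be sketched as a masked composite: internal texture for interior pixels, traced edge texture for silhouette pixels. The multiply fusion and the hard mask are assumptions:

```python
import numpy as np

def initial_effect_map(color_map, brush_map, edge_mask, edge_texture):
    internal = color_map * brush_map                  # internal texture information
    mask = edge_mask.astype(np.float32)[..., None]    # 1 on edge pixels, 0 inside
    # Edge pixels take the traced edge texture; interior pixels keep the
    # fused internal texture.
    return internal * (1 - mask) + edge_texture * mask
```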
In accordance with one or more embodiments of the present disclosure, [ example eight ] there is provided the method of example seven, further comprising:
for each edge pixel point, calculating an inner product of a sight line direction and a normal direction of the model to be rendered, and taking the inner product as a sampling abscissa of the edge pixel point of the model to be rendered;
summing the normal abscissa and the normal ordinate of the normal direction, multiplying the sum by the normal abscissa to obtain an initial ordinate value, and adjusting the initial ordinate value based on a preset coordinate adjustment parameter to obtain a sampling ordinate in an image coordinate system;
and sampling pixel values from a preset edge map corresponding to the model to be rendered based on the sampling abscissa and the sampling ordinate, and determining edge texture information of edge pixel points of the model to be rendered according to the sampled pixel values.
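The coordinate arithmetic of example eight, written out. The clamping and the value of the coordinate adjustment parameter are assumptions:

```python
import numpy as np

def edge_uv(view_dir, normal, coord_scale=0.5):
    v = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    u = float(np.clip(np.dot(v, n), 0.0, 1.0))            # inner product -> sampling abscissa
    init_v = (n[0] + n[1]) * n[0]                         # (nx + ny) * nx -> initial ordinate value
    vv = float(np.clip(init_v * coord_scale, 0.0, 1.0))   # adjusted -> sampling ordinate
    return u, vv   # UV used to sample the preset edge map
```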
In accordance with one or more embodiments of the present disclosure, [ example nine ] there is provided the method of example seven, further comprising:
determining the edge pixel points of the model to be rendered based on an inner product of the sight line direction and the normal direction of the model to be rendered, or based on expanding the model outward along the normal direction.
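The inner-product variant of example nine is a standard silhouette test: where the normal is nearly perpendicular to the sight line, the pixel lies on the outline. The threshold below is an assumption; the alternative variant instead renders a copy of the mesh expanded outward along its normals and treats the visible shell as the edge.

```python
import numpy as np

def is_edge_pixel(view_dir, normal, threshold=0.2):
    v = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    # |V . N| is near zero on the silhouette, near one on facing surfaces.
    return abs(float(np.dot(v, n))) < threshold
```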
In accordance with one or more embodiments of the present disclosure, [ example ten ] there is provided the method of example seven, further comprising:
performing color level reduction processing on the color map to obtain a basic color map, and performing image fusion on the brush map and the basic color map to obtain the internal texture information of the internal pixel points of the model to be rendered.
In accordance with one or more embodiments of the present disclosure, [ example eleven ] there is provided the method of example ten, further comprising:
obtaining the basic color map by adjusting the image saturation and/or the image contrast of the color map.
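Examples ten and eleven reduce the color levels by pulling down saturation and/or contrast before the brush map is fused in. A sketch; the linear formulas and both factors are assumptions:

```python
import numpy as np

def basic_color_map(color_map, saturation=0.6, contrast=0.8):
    gray = color_map.mean(axis=2, keepdims=True)
    desat = gray + (color_map - gray) * saturation   # lower saturation
    mean = desat.mean()
    flat = mean + (desat - mean) * contrast          # lower contrast
    return np.clip(flat, 0.0, 1.0)                   # basic color map

# Fusing with the brush map (example ten) could then be a simple multiply:
# internal_texture = basic_color_map(color_map) * brush_map
```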
According to one or more embodiments of the present disclosure, [ example twelve ] there is provided an image rendering apparatus comprising:
the rendering request module is used for responding to an image rendering request input aiming at a model to be rendered, and determining an initial rendering effect graph of the model to be rendered;
the rendering effect generating module is used for carrying out layered shading processing on the initial rendering effect graph based on a preset gray-scale graph to obtain a target rendering effect graph;
and the rendering effect display module is used for displaying the target rendering effect graph in a preset display area.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (14)

1. An image rendering method, comprising:
in response to an image rendering request input for a model to be rendered, determining an initial rendering effect map of the model to be rendered;
performing layered shading processing on the initial rendering effect map based on a preset gray-scale map to obtain a target rendering effect map;
and displaying the target rendering effect map in a preset display area.
2. The image rendering method according to claim 1, wherein the performing layered shading processing on the initial rendering effect map based on a preset gray-scale map comprises:
for each pixel point to be processed of the initial rendering effect map, determining first illumination information of the pixel point to be processed, and determining a halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to the first illumination information;
and determining a target pixel value of the pixel point to be processed according to the halation pixel value of the halation sampling point and the original pixel value of the pixel point to be processed in the initial rendering effect map.
3. The image rendering method according to claim 2, wherein the determining a halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to the first illumination information comprises:
determining a noise sampling point corresponding to the pixel point to be processed in a preset noise image according to the preset noise image and the first illumination information;
and determining the halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to a noise pixel value of the noise sampling point and the first illumination information.
4. The image rendering method according to claim 3, wherein the determining the halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to the noise pixel value of the noise sampling point and the first illumination information comprises:
adjusting the noise pixel value of the noise sampling point based on a preset noise adjustment parameter, adding the adjusted noise pixel value to the first illumination information to obtain a sampling pixel coordinate, and determining the halation sampling point corresponding to the pixel point to be processed in the preset gray-scale map according to the sampling pixel coordinate.
5. The image rendering method according to claim 2, wherein before the determining a target pixel value of the pixel point to be processed according to the halation pixel value of the halation sampling point and the original pixel value of the pixel point to be processed in the initial rendering effect map, the method further comprises:
determining neighborhood pixel points of the pixel point to be processed, determining second illumination information of the neighborhood pixel points, and determining halation sampling points corresponding to the pixel point to be processed in the preset gray-scale map according to the second illumination information.
6. The image rendering method according to claim 5, wherein the determining a target pixel value of the pixel point to be processed according to the halation pixel value of the halation sampling point and the original pixel value of the pixel point to be processed in the initial rendering effect map comprises:
in a case where two or more halation sampling points exist, summing or weighted-summing the halation pixel values of the at least two halation sampling points and then averaging the result to obtain a sampled pixel value;
and fusing the sampled pixel value with the original pixel value of the pixel point to be processed in the initial rendering effect map to obtain the target pixel value of the pixel point to be processed.
7. The image rendering method of claim 1, wherein the determining the initial rendering effect map of the model to be rendered comprises:
acquiring a color map and a brush map of the model to be rendered, and determining internal texture information of internal pixel points of the model to be rendered according to the color map and the brush map;
determining edge pixel points of the model to be rendered, and performing edge tracing processing on the edge pixel points to obtain edge texture information of the edge pixel points of the model to be rendered;
and determining the initial rendering effect map according to the internal texture information and the edge texture information.
8. The image rendering method according to claim 7, wherein the performing edge tracing processing on the edge pixel points to obtain edge texture information of the edge pixel points of the model to be rendered comprises:
for each edge pixel point, calculating an inner product of a sight line direction and a normal direction of the model to be rendered, and taking the inner product as a sampling abscissa of the edge pixel point of the model to be rendered;
summing the normal abscissa and the normal ordinate of the normal direction, multiplying the sum by the normal abscissa to obtain an initial ordinate value, and adjusting the initial ordinate value based on a preset coordinate adjustment parameter to obtain a sampling ordinate in an image coordinate system;
and sampling pixel values from a preset edge map corresponding to the model to be rendered based on the sampling abscissa and the sampling ordinate, and determining edge texture information of edge pixel points of the model to be rendered according to the sampled pixel values.
9. The image rendering method of claim 7, wherein the determining edge pixel points of the model to be rendered comprises:
determining the edge pixel points of the model to be rendered based on an inner product of the sight line direction and the normal direction of the model to be rendered, or based on expanding the model outward along the normal direction.
10. The image rendering method of claim 7, wherein the determining internal texture information of internal pixel points of the model to be rendered according to the color map and the brush map comprises:
performing color level reduction processing on the color map to obtain a basic color map, and performing image fusion on the brush map and the basic color map to obtain the internal texture information of the internal pixel points of the model to be rendered.
11. The image rendering method of claim 10, wherein the performing color level reduction processing on the color map to obtain a basic color map comprises:
adjusting the image saturation and/or the image contrast of the color map to obtain the basic color map.
12. An image rendering apparatus, comprising:
the rendering request module is configured to determine an initial rendering effect map of a model to be rendered in response to an image rendering request input for the model to be rendered;
the rendering effect generating module is configured to perform layered shading processing on the initial rendering effect map based on a preset gray-scale map to obtain a target rendering effect map;
and the rendering effect display module is configured to display the target rendering effect map in a preset display area.
13. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device to store one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image rendering method of any of claims 1-11.
14. A storage medium containing computer-executable instructions for performing the image rendering method of any of claims 1-11 when executed by a computer processor.
CN202211001106.4A 2022-08-19 2022-08-19 Image rendering method and device, electronic equipment and storage medium Pending CN115330925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211001106.4A CN115330925A (en) 2022-08-19 2022-08-19 Image rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211001106.4A CN115330925A (en) 2022-08-19 2022-08-19 Image rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115330925A true CN115330925A (en) 2022-11-11

Family

ID=83925263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211001106.4A Pending CN115330925A (en) 2022-08-19 2022-08-19 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115330925A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206046A (en) * 2022-12-13 2023-06-02 北京百度网讯科技有限公司 Rendering processing method and device, electronic equipment and storage medium
CN116206046B (en) * 2022-12-13 2024-01-23 北京百度网讯科技有限公司 Rendering processing method and device, electronic equipment and storage medium
CN117952860A (en) * 2024-03-27 2024-04-30 山东正禾大教育科技有限公司 Mobile digital publishing method and system

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN115330925A (en) Image rendering method and device, electronic equipment and storage medium
CN110070495B (en) Image processing method and device and electronic equipment
WO2024016930A1 (en) Special effect processing method and apparatus, electronic device, and storage medium
WO2023221926A1 (en) Image rendering processing method and apparatus, device, and medium
CN111127603B (en) Animation generation method and device, electronic equipment and computer readable storage medium
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
CN115810101A (en) Three-dimensional model stylizing method and device, electronic equipment and storage medium
CN115358919A (en) Image processing method, device, equipment and storage medium
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN114693876A (en) Digital human generation method, device, storage medium and electronic equipment
WO2024041623A1 (en) Special effect map generation method and apparatus, device, and storage medium
CN114742934A (en) Image rendering method and device, readable medium and electronic equipment
WO2024051541A1 (en) Special-effect image generation method and apparatus, and electronic device and storage medium
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
CN116681765A (en) Method for determining identification position in image, method for training model, device and equipment
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN115861503A (en) Rendering method, device and equipment of virtual object and storage medium
CN115578299A (en) Image generation method, device, equipment and storage medium
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN114866706A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115272060A (en) Transition special effect diagram generation method, device, equipment and storage medium
CN114780197A (en) Split-screen rendering method, device, equipment and storage medium
CN113744379A (en) Image generation method and device and electronic equipment
CN111223105B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination