WO2022048414A1 - Image generation method, apparatus, device and storage medium - Google Patents


Info

Publication number
WO2022048414A1
Authority
WO
WIPO (PCT)
Prior art keywords
brush
layer
image
target image
queue
Prior art date
Application number
PCT/CN2021/112039
Other languages
English (en)
French (fr)
Inventor
周财进
胡兴鸿
庄幽文
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to US18/023,893 (published as US20230334729A1)
Publication of WO2022048414A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T11/00: 2D [Two Dimensional] image generation
            • G06T11/001: Texturing; Colouring; Generation of texture or colour
            • G06T11/60: Editing figures and text; Combining figures or text
        • G06T5/00: Image enhancement or restoration
            • G06T5/70: Denoising; Smoothing
        • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
                • G06T7/11: Region-based segmentation
            • G06T7/90: Determination of colour characteristics
        • G06T2207/00: Indexing scheme for image analysis or image enhancement
            • G06T2207/10: Image acquisition modality
                • G06T2207/10024: Color image
            • G06T2207/30: Subject of image; Context of image processing
                • G06T2207/30196: Human being; Person

Definitions

  • the present disclosure relates to the field of data processing, and in particular, to an image generation method, apparatus, device, and storage medium.
  • In the related art, the attributes of all brush objects involved in drawing a specific-style image are determined by establishing a mathematical model. Specifically, the attribute of each brush object is used as a variable of the mathematical model, and the problem of determining the attributes of the brush objects is transformed into the mathematical problem of finding the optimal solution of the above mathematical model.
  • the present disclosure provides an image generation method, apparatus, device and storage medium, which can improve the efficiency of image generation and reduce time consumption.
  • the present disclosure provides an image generation method, the method includes:
  • each brush layer in the at least two brush layers is set with a preset brush density
  • a target style image corresponding to the target image is generated based on the attributes of the brush objects in the brush queue of each brush layer.
  • determining the properties of the brush objects on each brush layer based on the preset brush density of each brush layer includes:
  • each brush layer is sampled based on its preset brush density, and the brush position information of the brush objects on each brush layer is determined.
  • determining the properties of the brush objects on each brush layer based on the preset brush density of each brush layer includes:
  • the preset brush size corresponding to the brush layer is determined based on the preset brush density of the brush layer, and the brush size of the brush object is determined based on the preset brush size.
  • the attribute further includes a drawing direction, and the drawing direction is randomly generated.
  • the attribute further includes a brush color
  • the brush color is determined based on the color value of the pixel on the target image that has a corresponding relationship with the brush position information of the brush object.
  • generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer includes:
  • the brush layers corresponding to at least two brush queues are superimposed to generate a target style image corresponding to the target image.
  • the superimposing of the brush layers corresponding to the at least two brush queues to generate the target style image corresponding to the target image includes:
  • the pixel value of the pixel with the smallest luminance value at the corresponding position in the brush layers corresponding to the at least two brush queues is determined as the pixel value of the pixel at the corresponding position in the target style image corresponding to the target image.
  • the drawing of the brush layer corresponding to the brush queue based on the attributes of the brush object in the brush queue of each brush layer includes:
  • a graphics processing unit (GPU) is used to draw the brush objects in each brush queue in parallel;
  • the brush layer corresponding to each brush queue is drawn.
  • before generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer, the method further includes:
  • the target image includes a person image
  • setting a skin brush layer for the target image based on the segmentation of the skin area on the target image
  • generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer includes:
  • the target style image corresponding to the target image is generated based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layers.
  • wherein, in the generating of the target style image based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layer, the brush size in the attributes of the brush objects on the skin brush layer is determined based on the skin area with the largest area detected on the target image.
  • the generating of the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layer includes: superimposing the at least two brush layers to obtain a first initial overlay layer; and superimposing the first initial overlay layer and the skin brush layer to generate the target style image corresponding to the target image.
  • before generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer, the method further includes:
  • generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer includes:
  • a target style image corresponding to the target image is generated.
  • generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer and the bump texture map includes:
  • the bump texture map is superimposed with the second initial overlay layer to generate a target style image corresponding to the target image.
  • before generating the target style image corresponding to the target image by superimposing the bump texture map with the second initial overlay layer, the method further includes:
  • generating a bump texture map for the target image based on the flat area in the target image includes:
  • the determining of the flat area in the target image includes:
  • a flat area in the target image is determined based on the texture degree corresponding to each pixel in the target image.
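The claims above leave the texture measure open. A minimal sketch of determining the flat area from a per-pixel texture degree, assuming the texture degree is approximated by the local gradient magnitude (the function name and threshold are illustrative):

```python
import numpy as np

def flat_area_mask(gray, threshold=0.05):
    """Return a boolean mask of the flat area in a grayscale image.

    The texture degree of each pixel is approximated here by the local
    gradient magnitude; the patent does not fix a specific measure.
    """
    gy, gx = np.gradient(gray.astype(np.float64))
    texture = np.sqrt(gx ** 2 + gy ** 2)
    # Pixels whose texture degree falls below the threshold form the flat area.
    return texture < threshold
```

A uniform image is entirely flat under this measure, while pixels near a sharp edge are excluded from the flat area.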
  • the present disclosure provides an image generation apparatus, the apparatus comprising:
  • a first setting module configured to set at least two brush layers for the target image, wherein each brush layer in the at least two brush layers is set with a preset brush density
  • a first determination module configured to determine the attributes of the brush objects on each brush layer based on the preset brush density of each brush layer, wherein the attributes include brush position information and brush size;
  • a first storage module configured to establish a brush queue for each brush layer, and store the brush objects into the brush queue of the brush layer corresponding to each brush object;
  • the first generating module is configured to generate a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer.
  • the present disclosure provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a terminal device, the terminal device is made to implement the above method.
  • the present disclosure provides a device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the above method when executing the computer program.
  • Embodiments of the present disclosure provide an image generation method. Specifically, the method sets at least two brush layers for the target image, and each brush layer is set with a preset brush density. Then, based on the preset brush density of each brush layer, the attribute of the brush object on each brush layer is determined; wherein, the attribute includes brush position information and brush size. Then, the method establishes a brush queue for each brush layer, and stores the brush object in the brush queue of the brush layer corresponding to the brush object. Finally, the method generates a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer.
  • The present disclosure determines the brush position information of the brush objects by sampling the brush layers based on the preset brush density, determines the preset brush size corresponding to each brush layer based on its preset brush density, and then determines the brush size of the brush objects based on the preset brush size. It can be seen that the present disclosure determines the brush position information and brush size of the brush objects in a simple and efficient manner, which improves the efficiency of determining the attributes of the brush objects involved in drawing the target style image, and further improves the efficiency of generating the target style image based on those attributes.
  • the present disclosure generates an image by drawing at least two brush layers and then superimposing each brush layer, which can improve the quality of the generated target style image. Therefore, the image generation method provided by the present disclosure can improve the quality of the target style image on the basis of improving the efficiency of generating the target style image.
  • FIG. 1 is a flowchart of an image generation method according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of another image generation method provided by an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of still another image generation method provided by an embodiment of the present disclosure.
  • FIG. 5 is a structural block diagram of an image generating apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a structural block diagram of an image generating device according to an embodiment of the present disclosure.
  • the present disclosure provides an image generation method. Specifically, the method sets at least two brush layers for the target image, and each brush layer is set with a preset brush density. Then, based on the preset brush density of each brush layer, an attribute of the brush object on each brush layer is determined, wherein the attribute includes brush position information and brush size. Then, the method establishes a brush queue for each brush layer, and stores the brush object in the brush queue of the brush layer corresponding to the brush object. Finally, the method generates a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer.
  • The present disclosure determines the brush position information of the brush objects by sampling each brush layer based on its preset brush density, determines the preset brush size corresponding to each brush layer based on its preset brush density, and then determines the brush size of the brush objects based on the preset brush size. It can be seen that the present disclosure determines the brush position information and brush size of the brush objects in a simple and efficient manner, which improves the efficiency of determining the attributes of the brush objects involved in drawing the target style image corresponding to the target image, and further improves the efficiency of generating the target style image based on the attributes of the brush objects.
  • the present disclosure generates a target style image by drawing at least two brush layers and then superimposing each brush layer, which can improve the quality of the generated target style image.
  • the image generation method provided by the present disclosure can improve the quality of the target style image on the basis of improving the efficiency of generating the target style image.
  • an embodiment of the present disclosure provides an image generation method.
  • As shown in FIG. 1, which is a flowchart of an image generation method provided by an embodiment of the present disclosure, the method includes:
  • S101 Set at least two brush layers for the target image, wherein each brush layer in the at least two brush layers has a corresponding preset brush density.
  • the target image may be various types of images, for example, the target image may be a person image, a landscape image, and the like.
  • the target image may be determined by the user. Specifically, the user can select any image from the terminal album as the target image, or can trigger the photographing function to use the photographed image as the target image.
  • the specific manner of determining the target image is not limited in the embodiment of the present disclosure.
  • At least two brush layers may be set for the target image, wherein each brush layer is set with a corresponding preset brush size, and the preset brush sizes corresponding to different brush layers are different.
  • For example, three brush layers, large, medium, and small, can be set for the target image.
  • The preset brush size corresponding to the large brush layer is the largest, that corresponding to the medium brush layer is in the middle, and that corresponding to the small brush layer is the smallest.
  • the attributes of the brush object may include brush position information, brush size, drawing direction, brush color, and the like.
  • different preset brush densities can be set for different brush layers, or the same preset brush density can be set.
  • the preset brush density determines the interval between each brush object in the corresponding brush layer.
  • Generally, the preset brush density corresponding to the brush layer with the larger preset brush size is smaller, and the preset brush density corresponding to the brush layer with the smaller preset brush size is larger.
  • S102 Based on the preset brush density corresponding to each brush layer, sample the brush layer, and determine the brush position information of the brush object on the brush layer.
  • the brush position information is used to indicate the position where the corresponding brush object needs to be drawn, wherein the brush position information may be a two-dimensional coordinate value.
  • the properties of the brush objects on each brush layer may be determined based on the preset brush density of each brush layer. For example, based on the preset brush density of each brush layer, the brush position information of the brush object on each brush layer is determined.
  • To determine the brush position information of each brush object, it is necessary to first determine the brush layer where the brush object is located, and then determine the preset brush density corresponding to that brush layer; the brush layer is then sampled based on the preset brush density, and the brush position information of the brush object is determined based on the positions of the sampling points obtained by sampling.
  • The embodiment of the present disclosure determines the brush position information of the brush objects on each brush layer by sampling the brush layers. Therefore, before sampling a brush layer, the preset brush density corresponding to that brush layer is first determined, so that the brush layer can be sampled based on the preset brush density to determine the brush position information of the brush objects on each brush layer.
  • the brush layer may be uniformly sampled based on a preset brush density to determine the brush position information of the brush object on the brush layer.
  • Suppose the preset brush density corresponding to one brush layer is d, the size of the brush layer is W*H, and the brush size corresponding to this layer is w*h. If the brush layer is uniformly sampled based on the preset brush density d, the spacing between the sampling points determined by uniform sampling is w/d in the row direction and h/d in the column direction. The position of each sampling point on the brush layer is thereby determined, and the position of each sampling point is determined as the position of the center point of a brush object on the brush layer, that is, the brush position information.
  • Accordingly, the embodiment of the present disclosure may also determine that the number of brush objects to be drawn on the brush layer is (W*H*d*d)/(w*h).
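The uniform sampling described above can be sketched as follows; function and parameter names are illustrative, with the layer size W*H, brush size w*h, and density d as in the text:

```python
import numpy as np

def sample_brush_positions(W, H, w, h, d):
    """Uniformly sample a W*H brush layer whose brushes are w*h in size,
    using preset brush density d; sampling spacing is w/d by h/d."""
    step_x, step_y = w / d, h / d
    xs = np.arange(step_x / 2, W, step_x)   # brush center x coordinates
    ys = np.arange(step_y / 2, H, step_y)   # brush center y coordinates
    # Each sampling point becomes the center point of one brush object.
    return [(x, y) for y in ys for x in xs]

positions = sample_brush_positions(W=400, H=400, w=40, h=40, d=2)
# len(positions) equals (W*H*d*d)/(w*h) = 400 brush objects
```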
  • the brush size of the brush object on each brush layer can also be determined, which will be introduced later.
  • S103 Establish a brush queue for each brush layer respectively, and store the brush objects on the corresponding brush layer into the brush queue; wherein the attributes of the brush object include the brush position information and the brush size, which are determined based on the brush layer where the brush object is located.
  • the brush object is stored in the brush queue of the brush layer corresponding to the brush object.
  • A corresponding preset brush size can be set for each brush layer, and the brush size of the brush objects on each brush layer is determined based on the preset brush size corresponding to that layer.
  • the preset brush size corresponding to each brush layer may be determined as the brush size of each brush object on the brush layer.
  • the brush objects belonging to the same brush layer have the same brush size, that is, the same brush layer is drawn based on the same size of the brush.
  • The brush size of the brush objects on each brush layer may also be determined based on the preset brush density corresponding to each brush layer. Specifically, a correspondence between brush density and brush size is preset; then, based on the preset brush density of each brush layer and the preset correspondence between brush density and brush size, the preset brush size corresponding to the preset brush density is determined; the preset brush size is then determined as the brush size of the brush objects on that brush layer.
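A minimal sketch of such a preset correspondence between brush density and brush size; the table values are purely illustrative assumptions:

```python
# Hypothetical preset correspondence between brush density and brush size;
# a higher density maps to a smaller brush (values are illustrative only).
DENSITY_TO_SIZE = {1: (64, 64), 2: (32, 32), 4: (16, 16)}

def brush_size_for_layer(preset_density):
    # Every brush object on the layer shares this preset brush size.
    return DENSITY_TO_SIZE[preset_density]
```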
  • the attributes of the brush object include not only the brush position information and the brush size, but also the drawing direction.
  • the drawing direction is used to indicate the drawing direction of the corresponding brush object.
  • In the embodiment of the present disclosure, the drawing direction of each brush object may be randomly generated. Compared with computing a drawing direction for each brush object, random generation can improve the efficiency of generating the target style image.
  • the attributes of the brush object in the embodiment of the present disclosure further include the brush color.
  • the brush color is used to represent the drawing color of the brush object.
  • The brush color of the brush object may be determined based on the color value of the pixel on the target image that has a corresponding relationship with the brush position information of the brush object. Specifically, after the brush position information of the brush object is determined, the pixel at the position corresponding to the brush position information on the target image can be determined based on the brush position information, the color value corresponding to the pixel is obtained, and this color value is then determined as the brush color of the brush object.
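A minimal sketch of determining the brush color from the target-image pixel at the brush position, assuming the target image is an H*W*3 RGB array; names are illustrative:

```python
import numpy as np

def brush_color(target_image, position):
    """Take the color of the target-image pixel at the brush center
    point as the brush color (target_image is an H*W*3 RGB array)."""
    x, y = position
    # Arrays are indexed row-first, so (x, y) maps to [y, x].
    return tuple(int(c) for c in target_image[int(y), int(x)])
```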
  • each brush object may be stored in the brush queue of the brush layer corresponding to the brush object. Since the brush queue has a first-in, first-out feature, the order in which each brush object is stored in the brush queue determines the drawing order of the brush objects in the brush queue.
  • The brush queue for each brush layer can be established at any time before the brush objects are stored in the brush queue; for example, it may be executed before, after, or in parallel with the above S101 or S102, and the specific execution order is not limited.
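The brush queue and brush attributes described above can be sketched with a FIFO queue; the `Brush` dataclass and its field names are illustrative assumptions:

```python
from collections import deque
from dataclasses import dataclass
import random

@dataclass
class Brush:            # attributes of one brush object
    position: tuple     # brush position information (center point)
    size: tuple         # brush size (w, h)
    direction: float    # drawing direction, randomly generated
    color: tuple        # brush color sampled from the target image

def build_brush_queue(positions, size, colors):
    queue = deque()     # FIFO: insertion order fixes the drawing order
    for pos, col in zip(positions, colors):
        queue.append(Brush(pos, size, random.uniform(0.0, 360.0), col))
    return queue
```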
  • S104 Generate a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue corresponding to each brush layer.
  • the target style image may be an image of a specific artistic style, such as an oil painting style image, a stick figure style image, a cartoon style, a sketch style, and the like.
  • Based on the attributes of the brush objects in each brush queue, the brush layer corresponding to each brush queue can be drawn respectively.
  • each brush layer is independent of each other. Therefore, each brush layer can be drawn in parallel to improve the drawing efficiency of the brush layer and further improve the efficiency of image generation.
  • For example, a graphics processing unit (GPU) may be used to draw each brush layer in parallel based on the brush objects in the brush queue corresponding to each brush layer.
  • each brush object is independent of each other. Therefore, the brush objects in the same brush queue can also be drawn in parallel, which further improves the drawing efficiency of each brush layer, thereby improving the efficiency of image generation.
  • the GPU can be used to draw the brush objects in each brush queue in parallel, so as to improve the efficiency of image generation.
  • For several brush objects in the same brush queue, the GPU can be used to draw these brush objects in parallel, and the drawing of each brush layer is finally completed based on the drawing of each brush object.
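A minimal CPU sketch of drawing independent brush layers in parallel; a thread pool stands in for the GPU of the text, and the square "stroke" stamp is an illustrative simplification of real brush rendering:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def draw_layer(brushes, W, H):
    """Draw one brush layer by stamping each brush as a filled square
    (a stand-in for real stroke rendering); white background."""
    layer = np.full((H, W, 3), 255, dtype=np.uint8)
    for b in brushes:
        x, y = (int(v) for v in b["position"])
        w, h = b["size"]
        layer[max(0, y - h // 2):y + h // 2,
              max(0, x - w // 2):x + w // 2] = b["color"]
    return layer

def draw_layers_parallel(layer_queues, W, H):
    # Brush layers are mutually independent, so they can be drawn in
    # parallel; a thread pool stands in here for the GPU of the patent.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda q: draw_layer(q, W, H), layer_queues))
```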
  • After the drawing of each brush layer is completed, the brush layers are superimposed to finally generate the target style image corresponding to the target image.
  • Specifically, for each pixel position, the pixel value of the pixel with the smallest luminance value at the corresponding position across the brush layers is determined as the pixel value of the pixel at that position in the target style image. Based on the above method of determining the pixel value, the pixel value of every pixel in the target style image is determined, thereby generating the target style image corresponding to the target image. For example, for the pixel with coordinates (m, n), the luminance values of the pixels with coordinates (m, n) in each brush layer are determined, the pixel with the smallest luminance value is selected, and its pixel value is determined as the pixel value of the pixel at coordinates (m, n) in the target style image.
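The minimum-luminance superposition can be sketched as follows; the Rec. 601 luminance weights are an assumption, since the text does not fix a luminance formula:

```python
import numpy as np

def merge_min_luminance(layers):
    """Superimpose brush layers: at every pixel position, keep the value
    from the layer whose pixel has the smallest luminance (Rec. 601
    weights, a common choice; the patent does not fix a formula)."""
    stack = np.stack(layers).astype(np.float64)       # (k, H, W, 3)
    lum = stack @ np.array([0.299, 0.587, 0.114])     # (k, H, W)
    idx = lum.argmin(axis=0)                          # darkest layer per pixel
    H, W = idx.shape
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    return stack[idx, rows, cols].astype(np.uint8)    # (H, W, 3)
```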
  • In the embodiment of the present disclosure, sampling the brush layers determines the brush position information of the brush objects, and the preset brush size determines the brush size of the brush objects; this can simply and efficiently determine the attributes of each brush object, thereby improving the efficiency of determining the attributes of the brush objects involved in drawing the target style image, and further improving the efficiency of generating the target style image based on the attributes of the brush objects.
  • the embodiment of the present disclosure generates the final target style image by drawing at least two brush layers and then superimposing each brush layer, which can improve the quality of the generated target style image.
  • the image generation method provided by the embodiments of the present disclosure further improves the quality of the generated target style image on the basis of improving the efficiency of image generation, realizes the efficient generation of the target style image with higher quality, and improves the user experience.
  • In practical applications, the target image may be a person image, and the requirements on the rendering effect of skin parts such as the face in a person image are generally high. Therefore, the embodiments of the present disclosure also provide an image generation method, which further strengthens the processing of skin parts on the basis of the above embodiments.
  • As shown in FIG. 2, which is a flowchart of another image generation method provided by an embodiment of the present disclosure, on the basis of the steps in FIG. 1 above, the method may further include:
  • S201 Detect whether the target image includes a person image.
  • After the target image is determined, it may be detected whether the target image includes a person image.
  • the face on the target image may be recognized by means of face detection, and if it is recognized that there is a human face on the target image, it is determined that the target image includes a person image.
  • the specific detection method is not limited in the embodiment of the present disclosure.
  • the generation of target style images for human images can be set as an optional image generation mode.
  • When the user selects this image generation mode, the image generation process for person images is triggered.
  • S202 If the target image includes a person image, skin area detection is performed on the target image to determine the skin areas on the target image; each determined skin area is then segmented from the target image based on a skin segmentation algorithm, and the skin brush layer is set for the target image based on the segmented skin areas.
  • The skin brush layer is drawn to render skin areas, such as the face on the person image, in more detail, so that the skin areas in the finally generated target style image look better and the user experience is improved. Therefore, the drawing of the skin brush layer may include only the drawing of the skin areas on the person image.
  • S203 Establish a brush queue for the skin brush layer, and store the brush objects on the skin brush layer into the brush queue; wherein the brush size in the attributes of the brush objects on the skin brush layer is determined based on the skin area with the largest area detected on the target image.
  • In order to make the brush size on the skin brush layer more suitable for drawing the skin areas on the target image, the brush size of the brush objects on the skin brush layer can be determined based on the area size of each skin area on the target image.
  • the brush size in the properties of the brush object on the skin brush layer may be determined based on the skin area with the largest area detected on the target image. Wherein, the larger the area of the skin area with the largest area is, the larger the brush size is determined, and the smaller the area of the skin area with the largest area is, the smaller the determined brush size is.
  • the processing effect of the face region in the skin region is a factor affecting the image quality. Therefore, the embodiments of the present disclosure can determine the brush size of the brush object on the skin brush layer based on the face area.
  • Specifically, the face areas on the target image can be detected; if the target image includes multiple face areas, the face area with the largest area among them is determined, and the brush size of the brush objects on the skin brush layer is then determined based on that largest face area. The larger the area of the largest face area, the larger the determined brush size; conversely, the smaller the area of the largest face area, the smaller the determined brush size.
  • the brush size of the brush object on the skin brush layer is determined based on the area of the face area. Specifically, the brush size of the brush object on the skin brush layer is proportional to the area of the face area.
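A minimal sketch of a brush size proportional to the largest detected face area; `base` and `ref_area` are illustrative assumptions:

```python
def skin_brush_size(face_areas, base=32, ref_area=128 * 128):
    """Brush size for the skin brush layer, proportional to the area of
    the largest detected face (base and ref_area are illustrative)."""
    largest = max(face_areas)           # face area with the largest area
    side = max(1, round(base * largest / ref_area))
    return (side, side)
```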
  • S204 Generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queues corresponding to each of the at least two brush layers and the skin brush layer.
  • After the brush objects are stored in the brush queues corresponding to the brush layers set for the target image, the target style image corresponding to the target image is generated based on the attributes of each brush object in each brush queue.
  • As shown in FIG. 3, an effect comparison diagram provided by an embodiment of the present disclosure, A is the target image, and B, C, and D are the effects of drawing with a single brush layer: B is the effect of drawing the brush layer with only the large brush, C is the effect with only the medium brush, and D is the effect with only the small brush. F is the effect diagram obtained by superimposing the brush layers corresponding to B, C, and D; E is the effect of drawing the skin brush layer; and G is the result of superimposing the skin brush layer on F. It can be seen that the image quality in G, after superimposing the skin brush layer, is significantly higher than that in F.
  • The attributes of the brush objects in the brush queue corresponding to the skin brush layer are the same as the attributes of the brush objects in the brush queues corresponding to the above at least two brush layers, and may include the brush size, brush position information, drawing direction, and brush color.
  • the size of the brush is determined based on the skin area with the largest area detected on the target image, which has been specifically introduced in the above S203, and will not be repeated here.
  • the skin brush layer can be uniformly sampled to determine the position of the center point of each brush object on the skin brush layer, and the position of each center point is determined as the pen of the corresponding brush object Swipe location information.
  • the drawing direction of each brush object on the skin brush layer can be determined in a random manner, thereby improving the efficiency of determining the drawing direction.
  • the brush color of the corresponding brush object can be determined based on the color value of the pixel that has a corresponding relationship with the brush position information of each brush object on the skin brush layer on the target image.
  • Each brush object is then drawn based on its attributes, thereby completing the drawing of the skin brush layer.
  • The GPU can be used to draw the brush objects on each brush layer, including the skin brush layer, in parallel, which improves the drawing efficiency of each brush layer and ultimately the efficiency of generating the target style image.
  • This embodiment of the present disclosure does not limit the drawing order of the brush layers, including the skin brush layer. Specifically, after the at least two brush layers set for the target image are drawn, they are superimposed to obtain a first initial superimposed layer; the drawn skin brush layer and the first initial superimposed layer are then superimposed again, finally obtaining the target style image corresponding to the target image.
  • When superimposing the skin brush layer and the first initial superimposed layer, for the skin area, the pixel value of the pixel with the smaller luminance value at each corresponding position in the two layers may also be taken as the pixel value of the pixel at the corresponding position in the target style image.
  • In the image generation method, for a person image, not only at least two brush layers but also a skin brush layer are set.
  • The skin brush layer is used to draw the skin area of the person image separately, which improves the appearance of the skin area in the generated target style image and further improves the quality of the target style image.
  • An embodiment of the present disclosure also provides an image generation method which, on the basis of the above embodiments, adds a concave-convex (bump) texture effect to the image.
  • FIG. 4 is a flowchart of another image generation method provided by an embodiment of the present disclosure which, on the basis of the steps in FIG. 1, further includes the following steps.
  • The embodiments of the present disclosure determine the flat area in the target image, generate a bump texture map for the target image based on that flat area, and then add a bump texture effect to the target image based on the bump texture map.
  • The flat area on the target image may be determined by determining the texture intensity of each pixel on the target image.
  • Specifically, the structure tensor matrix corresponding to each pixel in the target image is calculated, and eigenvalue decomposition is performed on it to obtain the two eigenvalues corresponding to each pixel.
  • The texture degree of each pixel is determined from the larger of its two eigenvalues; finally, the flat area in the target image is determined based on the texture degree corresponding to each pixel.
  • s1 can be used to represent the texture degree of a pixel: the larger the value of s1, the higher the texture degree and the more uneven the corresponding position; the smaller the value of s1, the lower the texture degree and the flatter the corresponding position.
  • The eigenvalue decomposition also yields the direction vectors v1 and v2 corresponding to the eigenvalues s1 and s2, respectively.
  • The direction vector v2 corresponding to the eigenvalue s2 is taken as the direction vector of the corresponding pixel, and the direction vectors v2 of all pixels on the target image constitute the direction field of the target image.
  • In this way it is determined whether each pixel on the target image belongs to a flat area, and the pixels belonging to the flat area together constitute the flat area of the target image.
  • S402: Generate a bump texture map for the target image based on the flat area in the target image.
  • After the flat area is determined, a bump texture map of the target image is generated based on it.
  • First, the brush layers corresponding to the brush queues that have already been drawn are superimposed to obtain a second initial superimposed layer.
  • Then, the direction field of the second initial superimposed layer is calculated, and line integral convolution is used to visualize the direction field, yielding an initial texture map corresponding to the second initial superimposed layer.
  • Finally, the streamline texture of the region of that initial texture map corresponding to the flat area is smoothed out, yielding the bump texture map corresponding to the target image; the bump texture map is used to add a bump texture effect to the target style image.
  • The direction vector v2 corresponding to each pixel on the second initial superimposed layer is calculated first, and these direction vectors together form the direction field of the second initial superimposed layer.
  • The above line integral (line integral convolution) is a vector-field visualization technique: an initial texture map reflecting the streamlines is obtained by convolving white noise along the streamline direction with a low-pass filter.
  • S403: Generate a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue corresponding to each brush layer and the bump texture map.
  • By adding the bump texture effect to the target style image, the quality of the generated target style image and the user experience are improved.
  • Taking an oil-painting-style image as an example, a bump mapping technique can be used to add a bump texture effect to the oil-painting-style image through the bump texture map.
  • The specific principle is to use the bump texture map as the height information of the oil-painting-style image and compute the normal information from the gradient of the bump texture, thereby producing a shading effect that gives the oil-painting-style image a bump texture.
  • As shown in FIG. 3, H is the effect map corresponding to G in FIG. 3 after the bump texture is added.
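The height-to-normal-to-shading step described above can be sketched as follows. This is a hedged illustration: the Lambertian shading model and the light direction below are our assumptions; the patent only states that normal information is computed from the gradient of the bump texture to produce a shading effect.

```python
import numpy as np

def bump_shade(height, light=(0.5, 0.5, 1.0)):
    """Shade a bump texture map: treat it as height information, compute
    normals from its gradient, and produce a Lambertian shading term."""
    gy, gx = np.gradient(height.astype(float))
    # The surface normal of a height field z = h(x, y) is (-dh/dx, -dh/dy, 1).
    normals = np.dstack([-gx, -gy, np.ones_like(height, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    L = np.asarray(light, dtype=float)
    L /= np.linalg.norm(L)
    return np.clip(normals @ L, 0.0, 1.0)  # per-pixel diffuse shading in [0, 1]

bump = np.zeros((8, 8))
bump[:, 4:] = 1.0                 # a step edge in the height map
shade = bump_shade(bump)
# The sloped edge is shaded differently from the flat region.
print(abs(shade[4, 4] - shade[4, 0]) > 0.01)  # True
```

The shading term would then modulate the oil-painting-style image to simulate raised paint catching the light.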
  • processing such as sharpening may also be performed on the generated target style image.
  • the result obtained by superimposing the bump texture map with the second initial overlay layer can also be superimposed with the drawn skin brush layer to maximize the quality of the target style image.
  • the generated target style image is closer to the image drawn in the real painting scene, the quality of the generated target style image is improved, and the user is improved. Satisfaction with the target style image.
  • FIG. 5 is a schematic structural diagram of an image generating apparatus provided in an embodiment of the present disclosure, which includes:
  • the first setting module 501, configured to set at least two brush layers for the target image, where each of the at least two brush layers is set with a preset brush density;
  • the first determination module 502, configured to determine, based on the preset brush density of each brush layer, the attributes of the brush objects on each brush layer, the attributes including brush position information and brush size;
  • the first storage module 503, configured to establish a brush queue for each brush layer and store each brush object into the brush queue of the brush layer corresponding to that brush object;
  • the first generation module 504, configured to generate a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer.
  • In an optional implementation, the first determination module 502 is specifically configured to determine, based on the preset brush density of each brush layer and a preset correspondence between brush density and brush size, the preset brush size corresponding to the preset brush density, and to determine the brush size of the brush object based on the preset brush size.
  • In an optional implementation, the attributes of the brush object further include a drawing direction, and the drawing direction is randomly generated.
  • In an optional implementation, the attributes of the brush object further include a brush color, and the brush color is determined based on the color value of the pixel on the target image that corresponds to the brush position information of the brush object.
  • In an optional implementation, the first generation module 504 includes:
  • a first drawing submodule, configured to draw the brush layer corresponding to each brush queue based on the attributes of the brush objects in the brush queue of each brush layer;
  • a first generation submodule, configured to superimpose the brush layers corresponding to at least two brush queues to generate the target style image corresponding to the target image.
  • In an optional implementation, the first generation submodule includes:
  • a first determination submodule, configured to determine the pixel value of the pixel with the smallest luminance value at each corresponding position in the brush layers corresponding to the at least two brush queues as the pixel value of the pixel at the corresponding position in the target style image corresponding to the target image.
  • In an optional implementation, the first drawing submodule includes:
  • a second drawing submodule, configured to draw the brush objects in each brush queue in parallel using a graphics processing unit, based on the attributes of the brush objects in the brush queue of each brush layer;
  • a third drawing submodule, configured to draw the brush layer corresponding to each brush queue based on the drawing of the brush objects in that queue.
  • In an optional implementation, the apparatus further includes:
  • a detection module, configured to detect whether the target image includes a person image;
  • a second setting module, configured to set a skin brush layer for the target image based on segmentation of the skin area on the target image when the target image includes a person image.
  • Correspondingly, the first generation module 504 is specifically configured to:
  • generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queues of each of the at least two brush layers and the skin brush layer.
  • In an optional implementation, the apparatus further includes:
  • a second storage module, configured to establish a brush queue for the skin brush layer and store the brush objects into the brush queue of the skin brush layer corresponding to the brush objects; the brush size in the attributes of the brush objects on the skin brush layer is determined based on the skin area with the largest area detected on the target image.
  • In an optional implementation, the first generation module 504 includes:
  • a fourth drawing submodule, configured to draw the brush layer corresponding to each brush queue based on the attributes of the brush objects in the brush queue of each of the at least two brush layers;
  • a first overlay submodule, configured to superimpose the brush layers corresponding to the at least two brush queues to obtain a first initial superimposed layer;
  • a fifth drawing submodule, configured to draw the skin brush layer based on the attributes of the brush objects in the brush queue of the skin brush layer;
  • a second overlay submodule, configured to superimpose the first initial superimposed layer and the skin brush layer to generate the target style image corresponding to the target image.
  • In an optional implementation, the apparatus further includes:
  • a second determination module, configured to determine a flat area in the target image;
  • a second generation module, configured to generate a bump texture map for the target image based on the flat area in the target image.
  • Correspondingly, the first generation module is specifically configured to generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queues of the brush layers and the bump texture map.
  • In an optional implementation, the first generation module 504 includes:
  • a sixth drawing submodule, configured to draw the brush layer corresponding to each brush queue based on the attributes of the brush objects in the brush queue of each of the at least two brush layers;
  • a third overlay submodule, configured to superimpose the brush layers corresponding to the at least two brush queues to obtain a second initial superimposed layer;
  • a fourth overlay submodule, configured to superimpose the bump texture map and the second initial superimposed layer to generate the target style image corresponding to the target image.
  • In an optional implementation, the apparatus further includes:
  • a first calculation module, configured to calculate the direction field of the second initial superimposed layer and visualize the direction field using line integral convolution to obtain an initial texture map corresponding to the second initial superimposed layer.
  • Correspondingly, the second generation module is specifically configured to smooth out, in the initial texture map corresponding to the second initial superimposed layer, the streamline texture of the region corresponding to the flat area, to obtain the bump texture map corresponding to the target image.
  • In an optional implementation, the second determination module includes:
  • a calculation submodule, configured to calculate the structure tensor matrix corresponding to each pixel in the target image and perform eigenvalue decomposition on it to obtain the two eigenvalues corresponding to each pixel;
  • a second determination submodule, configured to determine the texture degree corresponding to each pixel based on the larger of the two eigenvalues corresponding to that pixel;
  • a third determination submodule, configured to determine the flat area in the target image based on the texture degree corresponding to each pixel in the target image.
  • In the image generating apparatus provided by the embodiments of the present disclosure, the brush position information of a brush object is determined by sampling the brush layer based on the preset brush density, and the brush size of the brush object is determined from the preset brush size corresponding to each brush layer.
  • In this way, the attributes of each brush object are determined in a simple and efficient manner, which improves the efficiency of determining the attributes of the brush objects involved in drawing the target style image, and further improves the efficiency of generating the target style image based on those attributes.
  • In addition, the embodiments of the present disclosure generate the final target style image by drawing at least two brush layers and then superimposing them, which improves the quality of the generated target style image.
  • Therefore, the image generating apparatus provided by the embodiments of the present disclosure can improve the quality of the target style image, and thus the user's satisfaction and experience, while also improving the efficiency of generating the target style image.
  • Embodiments of the present disclosure also provide a computer-readable storage medium storing instructions that, when executed on a terminal device, cause the terminal device to implement the image generation methods described in the disclosed embodiments.
  • An embodiment of the present disclosure further provides an image generation device, as shown in FIG. 6, which may include: a processor 601, a memory 602, an input device 603, and an output device 604. The number of processors 601 in the image generation device may be one or more; one processor is taken as an example in FIG. 6.
  • The processor 601, the memory 602, the input device 603, and the output device 604 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
  • The memory 602 can be used to store software programs and modules, and the processor 601 executes various functional applications and data processing of the above image generation device by running the software programs and modules stored in the memory 602.
  • The memory 602 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required for at least one function, and the like. In addition, the memory 602 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • The input device 603 may be used to receive input numeric or character information and to generate signal input related to user settings and function control of the image generation device.
  • Specifically, the processor 601 loads the executable files corresponding to the processes of one or more applications into the memory 602 according to instructions, and runs the applications stored in the memory 602 to realize the various functions of the above image generation device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

An image generation method, apparatus, device, and storage medium. The method includes: determining the brush position information and brush size of brush objects based on a preset brush density. The brush position information and brush size of the brush objects are thus determined in a simple and efficient manner, which improves the efficiency of determining the attributes of the brush objects involved in drawing a target style image, and in turn improves the efficiency of generating the target style image based on those attributes. In addition, generating the target style image by drawing at least two brush layers and then superimposing the brush layers improves the quality of the generated target style image. The provided image generation method can therefore further improve the quality of the target style image while improving the efficiency of generating it.

Description

Image generation method, apparatus, device, and storage medium

This application claims priority to the Chinese patent application No. 202010909071.9, titled "Image generation method, apparatus, device, and storage medium", filed with the State Intellectual Property Office on September 2, 2020, the entire contents of which are incorporated herein by reference.
Technical Field

The present disclosure relates to the field of data processing, and in particular to an image generation method, apparatus, device, and storage medium.
Background

With the development of computer technology, various functional software can be implemented, and users' demand for such software keeps growing. In particular, computer technology makes it possible to intelligently generate a specific-style image corresponding to a target image, such as an oil-painting-style, sketch-style, or cartoon-style image, and to display it, which enriches the functions of image processing software and satisfies more of users' image processing needs.

Intelligently generating a specific-style image corresponding to a target image requires determining attributes, such as size and position, of the brush objects involved in drawing the specific-style image. At present, these attributes are obtained by building a mathematical model for all brush objects involved in drawing the specific-style image: the attributes of each brush object are taken as variables of the model, and the problem of determining the attributes is turned into the mathematical problem of solving the model for an optimal solution.

Since drawing a specific-style image usually involves a large number of brush objects, and solving a mathematical model for an optimal solution is itself computationally expensive, determining the attributes of the brush objects in this way is clearly inefficient, which in turn makes generating the specific-style image based on those attributes slow and time-consuming.
Summary

To solve the above technical problem, or at least partially solve it, the present disclosure provides an image generation method, apparatus, device, and storage medium that can improve the efficiency of image generation and reduce the time required.

The present disclosure provides an image generation method, the method including:

setting at least two brush layers for a target image, where each of the at least two brush layers is set with a preset brush density;

determining, based on the preset brush density of each brush layer, the attributes of the brush objects on that brush layer, the attributes including brush position information and brush size;

establishing a brush queue for each brush layer, and storing each brush object into the brush queue of the brush layer corresponding to that brush object;

generating a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer.

In an optional implementation, determining the attributes of the brush objects on each brush layer based on the preset brush density of the layer includes: sampling the brush layer based on its preset brush density, and determining the brush position information of the brush objects on the layer.

In an optional implementation, determining the attributes of the brush objects on each brush layer based on the preset brush density of the layer includes: determining, based on the preset brush density of each brush layer and a preset correspondence between brush density and brush size, the preset brush size corresponding to the preset brush density; and determining the brush size of the brush objects based on the preset brush size.

In an optional implementation, the attributes further include a drawing direction, and the drawing direction is randomly generated.

In an optional implementation, the attributes further include a brush color, and the brush color is determined based on the color value of the pixel on the target image that corresponds to the brush position information of the brush object.

In an optional implementation, generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer includes: drawing the brush layer corresponding to each brush queue based on the attributes of the brush objects in the queue; and superimposing the brush layers corresponding to the at least two brush queues to generate the target style image corresponding to the target image.

In an optional implementation, superimposing the brush layers corresponding to the at least two brush queues to generate the target style image includes: determining the pixel value of the pixel with the smallest luminance value at each corresponding position in the brush layers corresponding to the at least two brush queues as the pixel value of the pixel at the corresponding position in the target style image corresponding to the target image.

In an optional implementation, drawing the brush layer corresponding to each brush queue based on the attributes of the brush objects in the queue includes: drawing the brush objects in each brush queue in parallel using a graphics processing unit, based on the attributes of the brush objects in the brush queue of each brush layer; and drawing the brush layer corresponding to each brush queue based on the drawing of the brush objects in that queue.

In an optional implementation, before generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer, the method further includes: detecting whether the target image includes a person image; and when the target image includes a person image, setting a skin brush layer for the target image based on segmentation of the skin area on the target image. Correspondingly, generating the target style image based on the attributes of the brush objects in the brush queue of each brush layer includes: generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queues of the at least two brush layers and the skin brush layer.

In an optional implementation, before generating the target style image based on the attributes of the brush objects in the brush queues of the at least two brush layers and the skin brush layer, the method further includes: establishing a brush queue for the skin brush layer, and storing the brush objects into the brush queue of the skin brush layer corresponding to the brush objects; the brush size in the attributes of the brush objects on the skin brush layer is determined based on the skin area with the largest area detected on the target image.

In an optional implementation, generating the target style image based on the attributes of the brush objects in the brush queues of the at least two brush layers and the skin brush layer includes: drawing the brush layer corresponding to each brush queue based on the attributes of the brush objects in the brush queue of each of the at least two brush layers; superimposing the brush layers corresponding to the at least two brush queues to obtain a first initial superimposed layer; drawing the skin brush layer based on the attributes of the brush objects in its brush queue; and superimposing the first initial superimposed layer and the skin brush layer to generate the target style image corresponding to the target image.

In an optional implementation, before generating the target style image based on the attributes of the brush objects in the brush queue of each brush layer, the method further includes: determining a flat area in the target image; and generating a bump texture map for the target image based on the flat area. Correspondingly, generating the target style image based on the attributes of the brush objects in the brush queue of each brush layer includes: generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer and the bump texture map.

In an optional implementation, generating the target style image based on the attributes of the brush objects in the brush queue of each brush layer and the bump texture map includes: drawing the brush layer corresponding to each brush queue based on the attributes of the brush objects in the brush queue of each of the at least two brush layers; superimposing the brush layers corresponding to the at least two brush queues to obtain a second initial superimposed layer; and superimposing the bump texture map and the second initial superimposed layer to generate the target style image corresponding to the target image.

In an optional implementation, before superimposing the bump texture map and the second initial superimposed layer to generate the target style image, the method further includes: calculating the direction field of the second initial superimposed layer, and visualizing the direction field using line integral convolution to obtain an initial texture map corresponding to the second initial superimposed layer. Correspondingly, generating the bump texture map for the target image based on the flat area includes: smoothing out, in the initial texture map corresponding to the second initial superimposed layer, the streamline texture of the region corresponding to the flat area, to obtain the bump texture map corresponding to the target image.

In an optional implementation, determining the flat area in the target image includes: calculating the structure tensor matrix corresponding to each pixel in the target image and performing eigenvalue decomposition on it to obtain the two eigenvalues corresponding to each pixel; determining the texture degree corresponding to each pixel based on the larger of its two eigenvalues; and determining the flat area in the target image based on the texture degree corresponding to each pixel.

In a second aspect, the present disclosure provides an image generation apparatus, the apparatus including:

a first setting module, configured to set at least two brush layers for a target image, where each of the at least two brush layers is set with a preset brush density;

a first determination module, configured to determine, based on the preset brush density of each brush layer, the attributes of the brush objects on the layer, the attributes including brush position information and brush size;

a first storage module, configured to establish a brush queue for each brush layer and store each brush object into the brush queue on the brush layer corresponding to that brush object;

a first generation module, configured to generate a target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer.

In a third aspect, the present disclosure provides a computer-readable storage medium storing instructions that, when executed on a terminal device, cause the terminal device to implement the above method.

In a fourth aspect, the present disclosure provides a device including: a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor implements the above method when executing the computer program.

Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:

An embodiment of the present disclosure provides an image generation method. Specifically, the method sets at least two brush layers for a target image, each brush layer being set with a preset brush density. Based on the preset brush density of each brush layer, the attributes of the brush objects on the layer are determined, the attributes including brush position information and brush size. A brush queue is then established for each brush layer, and each brush object is stored into the brush queue of its corresponding brush layer. Finally, the target style image corresponding to the target image is generated based on the attributes of the brush objects in the brush queue of each brush layer. Further, the present disclosure determines the brush position information of the brush objects by sampling the brush layer based on the preset brush density, determines the preset brush size corresponding to the preset brush density of each layer, and then determines the brush size of the brush objects based on that preset brush size. The brush position information and brush size of the brush objects are thus determined in a simple and efficient manner, which improves the efficiency of determining the attributes of the brush objects involved in drawing the target style image, and in turn improves the efficiency of generating the target style image based on those attributes.

In addition, the present disclosure generates the image by drawing at least two brush layers and then superimposing the brush layers, which improves the quality of the generated target style image. Therefore, the image generation method provided by the present disclosure can improve the quality of the target style image while improving the efficiency of generating it.
Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.

To explain the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a flowchart of an image generation method provided by an embodiment of the present disclosure;

FIG. 2 is a flowchart of another image generation method provided by an embodiment of the present disclosure;

FIG. 3 is an effect comparison diagram provided by an embodiment of the present disclosure;

FIG. 4 is a flowchart of yet another image generation method provided by an embodiment of the present disclosure;

FIG. 5 is a structural block diagram of an image generation apparatus provided by an embodiment of the present disclosure;

FIG. 6 is a structural block diagram of an image generation device provided by an embodiment of the present disclosure.
Detailed Description

To make the above objects, features, and advantages of the present disclosure clearer, the solutions of the present disclosure are further described below. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.

Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the disclosure may also be implemented in ways other than those described here; obviously, the embodiments in the specification are only a part of the embodiments of the present disclosure, not all of them.

Drawing specific-style images and displaying them has become a common image processing function. On this basis, software capable of generating a specific target-style image from a target image is increasingly favored by users. How to efficiently generate a specific-style image corresponding to a target image, and further improve the quality of the generated specific-style image, is therefore a problem currently faced by technical personnel.

On this basis, the present disclosure provides an image generation method. Specifically, the method sets at least two brush layers for a target image, each brush layer being set with a preset brush density. Based on the preset brush density of each brush layer, the attributes of the brush objects on the layer are determined, the attributes including brush position information and brush size. A brush queue is then established for each brush layer, and each brush object is stored into the brush queue of its corresponding brush layer. Finally, the target style image corresponding to the target image is generated based on the attributes of the brush objects in the brush queue of each brush layer.

Further, the present disclosure determines the brush position information of the brush objects by sampling the brush layer based on the preset brush density, determines the preset brush size corresponding to the preset brush density of each layer, and then determines the brush size of the brush objects based on that preset brush size. The brush position information and brush size are thus determined in a simple and efficient manner, which improves the efficiency of determining the attributes of the brush objects involved in drawing the target style image corresponding to the target image, and in turn improves the efficiency of generating the target style image based on those attributes.

In addition, the present disclosure generates the target style image by drawing at least two brush layers and then superimposing the brush layers, which improves the quality of the generated target style image.

In summary, the image generation method provided by the present disclosure can improve the quality of the target style image while improving the efficiency of generating it.
On this basis, an embodiment of the present disclosure provides an image generation method. Referring to FIG. 1, a flowchart of an image generation method provided by an embodiment of the present disclosure, the method includes:

S101: Set at least two brush layers for a target image, where each of the at least two brush layers has a corresponding preset brush density.

In the embodiments of the present disclosure, the target image may be any type of image, for example a person image or a landscape image.

In an optional implementation, the target image may be determined by the user. Specifically, the user may select any image from the terminal's photo album as the target image, or trigger the camera function and use the captured image as the target image. The specific way of determining the target image is not limited in the embodiments of the present disclosure.

To improve the quality of the generated target style image, in the embodiments of the present disclosure, after the target image is determined, at least two brush layers may be set for it, where each brush layer is set with a corresponding preset brush size and the preset brush sizes of the brush layers differ from each other. Drawing different brush layers with brushes of different sizes makes the generated target style image fit the actual drawing effect as closely as possible, improving the quality of the target style image.

In an optional implementation, three brush layers, large, medium, and small, may be set for the target image. Specifically, the large brush layer corresponds to the largest preset brush size, the medium brush layer to an intermediate size, and the small brush layer to the smallest size. Drawing the large, medium, and small brush layers improves the quality of the finally generated target style image.

In addition, before each brush layer is drawn, the attributes of the brush objects on the layer need to be determined. The attributes of a brush object may include brush position information, brush size, drawing direction, brush color, and the like.

In practice, different brush layers may be set with different preset brush densities, or with the same preset brush density. The preset brush density determines the spacing between the brush objects in the corresponding brush layer.

In an optional implementation, the larger the preset brush size of a brush layer, the smaller its preset brush density; and the smaller the preset brush size, the larger the preset brush density.

S102: Sample each brush layer based on its corresponding preset brush density, and determine the brush position information of the brush objects on the layer.
In the embodiments of the present disclosure, the brush position information indicates the position at which the corresponding brush object is to be drawn; the brush position information may be a two-dimensional coordinate value.

In some implementations, the attributes of the brush objects on each brush layer may be determined based on the preset brush density of the layer; for example, the brush position information of the brush objects on each layer is determined based on the layer's preset brush density.

Specifically, when determining the brush position information of each brush object, the brush layer it belongs to is determined first, and then the preset brush density corresponding to that layer; next, the layer is sampled based on the preset brush density, and the brush position information of the brush objects is determined from the positions of the resulting sampling points.

Since the embodiments of the present disclosure determine the brush position information of the brush objects by sampling the brush layers, before a brush layer is sampled, its corresponding preset brush density is determined first, so that the layer can be sampled based on that density to determine the brush position information of the brush objects on each brush layer.

In an optional implementation, the brush layer may be uniformly sampled based on the preset brush density to determine the brush position information of the brush objects on the layer.

Assume the resolution of the target image is W*H, the preset brush density of one brush layer is d, and the brush size corresponding to that layer is w*h. If the layer is uniformly sampled based on the preset brush density d, the spacing between sampling points in rows and columns is W/d and H/d respectively, so the position of each sampling point on the layer can be determined, and the position of each sampling point is taken as the position of the center point of a brush object on the layer, i.e., the brush position information. In addition, the embodiments of the present disclosure can determine that the number of brush objects to be drawn on the layer is (W*H*d*d)/(w*h).
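The uniform sampling step can be sketched as follows. This is a hedged illustration, not the patent's actual implementation: it assumes the stated spacing of W/d horizontally and H/d vertically and places each sampling point at the center of its grid cell (the half-step offset is our assumption).

```python
# Illustrative sketch: uniform sampling of brush center points on one brush layer.
# Assumes spacing of W/d horizontally and H/d vertically, as described above.

def sample_brush_centers(W, H, d):
    """Return the (x, y) center points of brush objects for a layer of density d."""
    step_x = W / d   # spacing between sampling points along a row
    step_y = H / d   # spacing between sampling points along a column
    centers = []
    y = step_y / 2
    while y < H:
        x = step_x / 2
        while x < W:
            centers.append((x, y))
            x += step_x
        y += step_y
    return centers

centers = sample_brush_centers(640, 480, 8)
print(len(centers))  # an 8x8 grid of brush centers -> 64
```

Each returned center point would become the brush position information of one brush object in the layer's brush queue.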
Of course, the brush size of the brush objects on each brush layer can also be determined based on the preset brush density of the layer, as described later.
S103: Establish a brush queue for each brush layer, and store the brush objects on the corresponding brush layer into the queue; the attributes of a brush object include the brush position information and a brush size, and the brush size is determined based on the brush layer the brush object belongs to.

In some implementations, after a brush queue is established for each brush layer, each brush object is stored into the brush queue of the brush layer corresponding to that brush object.

In the embodiments of the present disclosure, a corresponding preset brush size may be set for each brush layer, and the brush size of the brush objects on a layer is determined based on the preset brush size corresponding to the layer.

In an optional implementation, the preset brush size corresponding to each brush layer may be taken as the brush size of every brush object on the layer. Brush objects belonging to the same brush layer have the same brush size, i.e., the same brush layer is drawn with brushes of a single size.

In another optional implementation, the brush size of the brush objects on each layer may also be determined based on the preset brush density of the layer. Specifically, a correspondence between brush density and brush size is set in advance; then, based on the preset brush density of each layer and this preset correspondence, the preset brush size corresponding to the preset brush density is determined and taken as the brush size of the brush objects on the layer.
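A minimal sketch of such a preset density-to-size correspondence follows. The density values and brush sizes below are invented for illustration; the patent does not specify concrete numbers.

```python
# Hypothetical correspondence between preset brush density and preset brush size.
# A denser layer gets a smaller brush, matching the relationship described above.
DENSITY_TO_SIZE = {
    4: (64, 32),    # low density    -> large brush (w, h)
    8: (32, 16),    # medium density -> medium brush
    16: (16, 8),    # high density   -> small brush
}

def brush_size_for_layer(preset_density):
    """Look up the preset brush size for a layer's preset brush density."""
    return DENSITY_TO_SIZE[preset_density]

print(brush_size_for_layer(8))  # (32, 16)
```

Every brush object on a layer would then receive this looked-up size as its brush-size attribute.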
In the embodiments of the present disclosure, the attributes of a brush object include not only brush position information and brush size but also a drawing direction, which indicates the direction in which the corresponding brush object is drawn.

In an optional implementation, to improve the efficiency of determining drawing directions, the drawing direction of each brush object may be randomly generated. With the drawing directions determined efficiently, the embodiments of the present disclosure can improve the efficiency of generating the target style image.

In addition, the attributes of a brush object in the embodiments of the present disclosure further include a brush color, which indicates the color in which the brush object is drawn.

In an optional implementation, the brush color of a brush object may be determined based on the color value of the pixel on the target image that corresponds to the brush position information of the brush object. Specifically, after the brush position information of a brush object is determined, the pixel at the corresponding position on the target image is determined based on that position information, the color value of that pixel is obtained, and that color value is taken as the brush color of the brush object.
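The color lookup can be sketched as follows. This is a hedged illustration that assumes the target image is an RGB array in H x W x 3 layout and that out-of-range positions are clamped; neither assumption is stated in the patent.

```python
import numpy as np

def brush_color_at(target_image, brush_pos):
    """Return the color of the target-image pixel under the brush center.

    target_image: H x W x 3 uint8 array (assumed RGB layout).
    brush_pos: (x, y) brush position information in pixel coordinates.
    """
    x, y = brush_pos
    # Clamp to the image bounds, then read the color value at (row=y, col=x).
    row = min(max(int(y), 0), target_image.shape[0] - 1)
    col = min(max(int(x), 0), target_image.shape[1] - 1)
    return tuple(int(c) for c in target_image[row, col])

img = np.zeros((480, 640, 3), dtype=np.uint8)
img[30, 40] = (200, 120, 10)
print(brush_color_at(img, (40.0, 30.0)))  # (200, 120, 10)
```

The returned color value becomes the brush-color attribute of the brush object at that position.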
In the embodiments of the present disclosure, after the attributes of the brush objects are determined, each brush object can be stored into the brush queue of the brush layer corresponding to that brush object. Since a queue is first-in, first-out, the order in which the brush objects are stored into a brush queue determines the order in which they are drawn.

In addition, in practice, the brush queue for each brush layer may be established at any time before brush objects are stored into it; for example, this may be done before, after, or in parallel with S101 or S102 above, and the specific order of execution is not limited.
S104: Generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue corresponding to each brush layer.

The target style image may be an image of a specific artistic style, such as an oil-painting style, simple-stroke style, cartoon style, or sketch style.

In the embodiments of the present disclosure, after the attributes of the brush objects on each brush layer are determined and the brush objects are stored into the corresponding brush queues, the brush layer corresponding to each brush queue can be drawn based on the attributes of the brush objects in the queue.

In an optional implementation, since the brush layers are independent of each other, they can be drawn in parallel to improve the drawing efficiency of the brush layers and, further, the efficiency of image generation. Specifically, a graphics processing unit (GPU) can be used to draw the brush layers in parallel based on the brush objects in their respective brush queues.

In another optional implementation, since the brush objects are independent of each other, brush objects in the same brush queue can also be drawn in parallel, further improving the drawing efficiency of each brush layer and thus the efficiency of image generation. Specifically, the GPU can be used to draw the brush objects in each brush queue in parallel.

In practice, several brush objects dequeued from the same brush queue can be drawn in parallel by the GPU, and the drawing of each brush layer is completed based on the drawing of its brush objects.

In the embodiments of the present disclosure, after the drawing of the brush layers is completed, the brush layers are superimposed to finally generate the target style image corresponding to the target image.

In an optional implementation, since positions with smaller luminance values display better in the image, the embodiments of the present disclosure may take the pixel value of the pixel with the smallest luminance value at each corresponding position across the brush layers as the pixel value of the pixel at the corresponding position in the target style image. The pixel values of all pixels of the target style image are determined in this way, thereby generating the target style image corresponding to the target image.

In practice, the pixels at the same position in the brush layers are compared, the pixel with the smallest luminance value is determined, and its pixel value is taken as the pixel value of the pixel at the same position in the target style image. For example, for the pixel at coordinates (m, n), the luminance values of the pixels at (m, n) in the brush layers are determined, the pixel with the smallest luminance value is found, and its pixel value is taken as the pixel value of the pixel at (m, n) in the target style image.
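Under the assumption that each drawn brush layer is an H x W x 3 array and that luminance is approximated by the Rec. 601 weights (an assumption on our part; the patent does not specify a luminance formula), the per-pixel minimum-luminance superposition can be sketched as:

```python
import numpy as np

def composite_min_luminance(layers):
    """Superimpose brush layers by keeping, at each position, the pixel
    whose luminance value is smallest across all layers.

    layers: list of H x W x 3 float arrays (drawn brush layers).
    """
    stack = np.stack(layers)                   # L x H x W x 3
    weights = np.array([0.299, 0.587, 0.114])  # assumed luminance weights
    luminance = stack @ weights                # L x H x W
    darkest = np.argmin(luminance, axis=0)     # H x W: index of darkest layer
    return np.take_along_axis(
        stack, darkest[None, ..., None], axis=0)[0]

a = np.full((2, 2, 3), 200.0)
b = np.full((2, 2, 3), 50.0)
b[0, 0] = 255.0
out = composite_min_luminance([a, b])
print(out[0, 0], out[1, 1])  # [200. 200. 200.] [50. 50. 50.]
```

At position (0, 0) layer a is darker, so its pixel wins; everywhere else layer b wins, matching the (m, n) comparison described above.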
In the image generation method provided by the embodiments of the present disclosure, the brush position information of the brush objects is determined by sampling the brush layers based on the preset brush density, and the brush size of the brush objects is determined based on the preset brush size corresponding to each brush layer. The attributes of the brush objects can thus be determined simply and efficiently, which improves the efficiency of determining the attributes of the brush objects involved in drawing the target style image, and in turn improves the efficiency of generating the target style image based on those attributes.

In addition, the embodiments of the present disclosure generate the final target style image by drawing at least two brush layers and then superimposing the brush layers, which improves the quality of the generated target style image.

It can be seen that the image generation method provided by the embodiments of the present disclosure further improves the quality of the generated target style image while improving the efficiency of image generation, achieving efficient generation of high-quality target style images and improving user experience.
In practice, the target image may be a person image, and for person images the requirements on the appearance of skin areas such as the face are usually high. Therefore, an embodiment of the present disclosure also provides an image generation method which, on the basis of the above embodiments, further enhances the processing of skin areas.

Referring to FIG. 2, a flowchart of another image generation method provided by an embodiment of the present disclosure. On the basis of the steps in FIG. 1, the method may further include:

S201: Detect whether the target image includes a person image.

In the embodiments of the present disclosure, after the target image is determined, whether it includes a person image can be detected.

In an optional implementation, face detection may be used to recognize faces on the target image; if a face is recognized on the target image, the target image is determined to include a person image.

In another optional implementation, other human features may also be detected, for example legs, to determine whether the target image includes a person image. The specific detection method is not limited in the embodiments of the present disclosure.

In practice, generating a target style image for a person image may be provided as an optional image generation mode; when a user uploads a person image for target-style-image generation, this mode can be selected to trigger the image generation process for person images.

S202: If the target image is determined to include a person image, set a skin brush layer for the target image based on segmentation of the skin area on the target image.

In the embodiments of the present disclosure, if the target image is determined to include a person image, skin-area detection is performed on the target image to determine its skin areas; the determined skin areas are then segmented from the target image based on a skin segmentation algorithm, and a skin brush layer is set for the target image based on the segmented skin areas.

Drawing the skin brush layer draws the skin areas, such as the face, on the person image in more detail, so that the skin areas in the finally generated target style image look better and user experience is improved. Therefore, the drawing of the skin brush layer may include only the drawing of the skin areas on the person image.

S203: Establish a brush queue for the skin brush layer, and store the brush objects on the skin brush layer into the queue; the brush size in the attributes of the brush objects on the skin brush layer is determined based on the skin area with the largest area detected on the target image.
In the embodiments of the present disclosure, after the skin brush layer is set for the target image, the attributes of the brush objects on the skin brush layer need to be determined. The way of determining them may refer to the way of determining the attributes of the brush objects on the brush layers in the above embodiments.

In an optional implementation, to make the brush sizes on the skin brush layer better suited to drawing the skin areas on the target image, the brush size of the brush objects on the skin brush layer may be determined based on the areas of the skin regions on the target image.

In an optional implementation, the brush size in the attributes of the brush objects on the skin brush layer may be determined based on the skin area with the largest area detected on the target image: the larger that skin area, the larger the determined brush size; the smaller that skin area, the smaller the determined brush size.

In another optional implementation, since the processing effect of the face region within the skin areas is a factor affecting image quality, the embodiments of the present disclosure may determine the brush size of the brush objects on the skin brush layer based on the face region.

Specifically, the face regions on the target image can be detected. If the target image includes multiple face regions, the face region with the largest area is determined, and the brush size of the brush objects on the skin brush layer is then determined based on that face region: the larger its area, the larger the determined brush size; conversely, the smaller its area, the smaller the determined brush size.

If the target image includes only one face region, the brush size of the brush objects on the skin brush layer is determined based on the area of that face region; specifically, the brush size is proportional to the area of the face region.
S204: Generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue corresponding to each of the at least two brush layers and the skin brush layer.

In the embodiments of the present disclosure, after the brush objects are stored into the brush queues corresponding to the brush layers set for the target image, the target style image corresponding to the target image is generated based on the attributes of the brush objects in each brush queue.

In an optional implementation, first, the brush layer corresponding to each brush queue is drawn based on the attributes of the brush objects in the brush queue of each of the at least two brush layers; then the brush layers corresponding to the at least two brush queues are superimposed to obtain a first initial superimposed layer. Also, the skin brush layer is drawn based on the attributes of the brush objects in its brush queue; then the first initial superimposed layer and the skin brush layer are superimposed to generate the target style image corresponding to the target image.

Taking an oil-painting-style image as an example, FIG. 3 is an effect comparison diagram provided by an embodiment of the present disclosure, in which A is the target image and B, C, and D are the effects of drawing with a single brush layer: B is the effect of drawing the brush layer with only the large brush, C with only the medium brush, and D with only the small brush. F is the effect diagram obtained by superimposing the brush layers corresponding to B, C, and D in FIG. 3, E is the effect of drawing the skin brush layer, and G is the result of superimposing the skin brush layer on F. It can be seen that the image quality in G, after the skin brush layer is superimposed, is significantly higher than that in F.

In the embodiments of the present disclosure, the way of drawing each of the at least two brush layers may refer to the above embodiments and is not repeated here.

Before the skin brush layer is drawn, the attributes of the brush objects in its brush queue are determined first. Specifically, the attributes of the brush objects in the brush queue of the skin brush layer are the same kinds as those of the brush objects in the brush queues of the at least two brush layers, and may include brush size, brush position information, drawing direction, and brush color. The brush size is determined based on the skin area with the largest area detected on the target image, as already introduced in S203 above, and is not repeated here.

For the brush position information, the skin brush layer can be uniformly sampled to determine the center-point position of each brush object on the layer, and each center-point position is taken as the brush position information of the corresponding brush object.

For the drawing direction, the drawing direction of each brush object on the skin brush layer can be determined randomly, improving the efficiency of determining drawing directions.

For the brush color, the color value of the pixel on the target image corresponding to the brush position information of each brush object on the skin brush layer is taken as the brush color of that brush object.

In the embodiments of the present disclosure, after the attributes of the brush objects in the brush queue of the skin brush layer are determined, each brush object is drawn based on its attributes, thereby completing the drawing of the skin brush layer.

In an optional implementation, the GPU can be used to draw the brush objects on each brush layer, including the skin brush layer, in parallel, improving the drawing efficiency of each brush layer and ultimately the efficiency of generating the target style image.

The embodiments of the present disclosure do not limit the drawing order of the brush layers, including the skin brush layer. Specifically, after the at least two brush layers set for the target image are drawn, they are superimposed to obtain a first initial superimposed layer; the drawn skin brush layer and the first initial superimposed layer are then superimposed again, finally obtaining the target style image corresponding to the target image.

In an optional implementation, when superimposing the skin brush layer and the first initial superimposed layer, for the skin area, the pixel value of the pixel with the smaller luminance value at each corresponding position in the two layers may also be taken as the pixel value of the pixel at the corresponding position in the target style image. This superposition improves the quality of the skin areas in the generated target style image and further improves user satisfaction with the generated image.

In the image generation method provided by the embodiments of the present disclosure, for a person image, not only at least two brush layers but also a skin brush layer are set. The skin brush layer draws the skin areas of the person image separately, improving the appearance of the skin areas in the generated target style image and further improving the quality of the target style image.
In an actual scene, the process of drawing a target style image leaves texture marks on the canvas. To further improve the quality of the target style image so that the generated image reflects, as far as possible, the effect of actually drawing it, an embodiment of the present disclosure also provides an image generation method which, on the basis of the above embodiments, adds a concave-convex (bump) texture effect to the image.

Referring to FIG. 4, a flowchart of another image generation method provided by an embodiment of the present disclosure. On the basis of the steps in FIG. 1, the method further includes:

S401: Determine the flat area in the target image.

In practice, adding a bump texture effect to the target style image needs to be based on the features of the target image itself. Therefore, the embodiments of the present disclosure determine the flat area in the target image, generate a bump texture map for the target image based on that flat area, and then add a bump texture effect to the target image based on the bump texture map.

In an optional implementation, the flat area on the target image may be determined by determining the texture intensity of each pixel on the target image.

In an optional implementation, the structure tensor matrix corresponding to each pixel in the target image is calculated, and eigenvalue decomposition is performed on it to obtain the two eigenvalues corresponding to each pixel; then the texture degree of each pixel is determined based on the larger of its two eigenvalues, and finally the flat area in the target image is determined based on the texture degree corresponding to each pixel.

In practice, for each pixel on the target image, the gradients g_x and g_y in the X-axis and Y-axis directions are calculated first, and the structure tensor matrix T_σ of the pixel is then determined from the gradient information using formula (1):

T_σ = G_σ * [ g_x·g_x   g_x·g_y
              g_x·g_y   g_y·g_y ]        (1)

where "*" denotes the convolution operation and G_σ denotes a Gaussian kernel with standard deviation σ. Formula (1) takes a single pixel as an example.

Next, eigenvalue decomposition is performed on the above structure tensor matrix to obtain the two eigenvalues s1 and s2 of the pixel, where s1 >= s2. The texture degree of the pixel is then determined based on the larger eigenvalue s1. s1 can be used to represent the texture degree of the pixel: the larger the value of s1, the higher the texture degree and the less flat the corresponding position; the smaller the value of s1, the lower the texture degree and the flatter the corresponding position.
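The structure tensor computation described above can be sketched as follows. This is a hedged illustration: it uses a separable Gaussian for the G_σ convolution and the closed-form larger eigenvalue of a symmetric 2x2 matrix, which matches formula (1) but is not claimed to be the patent's implementation.

```python
import numpy as np

def texture_degree(gray, sigma=1.0):
    """Per-pixel texture degree s1: the larger eigenvalue of the
    Gaussian-smoothed structure tensor built from image gradients."""
    gy, gx = np.gradient(gray.astype(float))  # g_y, g_x per pixel
    a, b, c = gx * gx, gx * gy, gy * gy       # entries of the 2x2 tensor

    # Separable Gaussian smoothing of each tensor entry (the G_sigma * ... step).
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t * t / (2 * sigma * sigma))
    k /= k.sum()
    def smooth(m):
        m = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, m)
        return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, m)
    a, b, c = smooth(a), smooth(b), smooth(c)

    # Closed-form larger eigenvalue s1 of the symmetric matrix [[a, b], [b, c]].
    return (a + c) / 2 + np.sqrt(((a - c) / 2) ** 2 + b * b)

# A flat patch next to a striped patch: s1 is much higher on the stripes.
img = np.zeros((16, 32))
img[:, 16:] = np.tile([0.0, 0.0, 1.0, 1.0], 4)  # stripes on the right half
s1 = texture_degree(img)
print(s1[8, 4] < 0.01, s1[8, 24] > 0.1)         # True True
```

Comparing s1 against a threshold t, as described below, then separates flat pixels (small s1) from textured ones (large s1).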
本公开实施例中,可以基于目标图像上各个像素对应的纹理程度,确定目标图像中的平坦区。以一个像素为例,针对目标图像上的一个像素,将能够表示该像素对应的纹理程度的特征值s1,与预设阈值t进行比较,如果s1<t,则可以确定该像素对应的位置属于目标图像中的平坦区;相反的,如果s1>=t,则可以确定该像素对应的位置不属于目标图像中的平坦区。
另外,对上述结构张量矩阵T σ进行特征值分解,在得到特征值s1的同时,还可以得到另一个特征值s2,另外,还可以得到与特征值s1和s2分别对应的方向向量v1和v2。其中,与特征值s2对应的方向向量v2作为对应像素的方向向量,目标图像上各个像素分别对应的方向向量v2组成该目标图像的方向场。
基于上述方式,确定目标图像上各个像素是否属于目标图像中的平坦区,然后由属于平坦区的各个像素构成目标图像上的平坦区。
S402:基于目标图像中的平坦区,为目标图像生成凹凸纹理图。
本公开实施例中,在确定目标图像中的平坦区之后,基于目标图像上的平坦区,生成目标图像的凹凸纹理图。
一种可选的实施方式中,以油画风格图像为例,为了模仿实际场景中在画布上绘制油画的凹凸纹理效果,在生成目标图像对应的凹凸纹理图之前,首先对已经绘制的笔刷队列对应的笔刷图层进行叠加,得到第二初始叠加图层。然后,计算第二初始叠加图层的方向场,并利用线积分将所述方向场可视化,得到第二初始叠加图层对应的初始纹理图。最终,将第二初始叠加图层对应的初始纹理图中,与所述平坦区对应的区域的流线纹理抹平,得到所述目标图像对应的凹凸纹理图,该凹凸纹理图用于对目标风格图像增加凹凸纹理效果。
本公开实施例计算第二初始叠加图层的方向场的方式可以参照上述目标图像的方向场的确定方式。具体的,首先计算第二初始叠加图层上各个像素点分别对应的方向向量v2,然后,由第二初始叠加图层上各个像素点分别对应的方向向量v2组成第二初始叠加图层的方向场。
The line integral convolution mentioned above is a vector-field visualization technique: white noise is convolved with a low-pass filter along the streamline direction, producing an initial texture map that reveals the streamlines.
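A minimal sketch of line integral convolution in this spirit; the fixed-step, nearest-neighbour streamline tracing and the streamline length are simplifications, and a real implementation would integrate the field more carefully:

```python
import numpy as np

def line_integral_convolution(vx, vy, noise, length=8):
    """Minimal LIC: average white noise along short streamlines of (vx, vy)."""
    h, w = noise.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    acc = noise.copy()
    count = np.ones_like(noise)
    for direction in (1.0, -1.0):               # trace forward, then backward
        px, py = xs.copy(), ys.copy()
        for _ in range(length):
            ix = np.clip(px.round().astype(int), 0, w - 1)
            iy = np.clip(py.round().astype(int), 0, h - 1)
            px = px + direction * vx[iy, ix]    # step along the field
            py = py + direction * vy[iy, ix]
            jx = np.clip(px.round().astype(int), 0, w - 1)
            jy = np.clip(py.round().astype(int), 0, h - 1)
            acc += noise[jy, jx]
            count += 1
    return acc / count

rng = np.random.default_rng(0)
noise = rng.random((64, 64))
# A purely horizontal field: streaks should form along rows
tex = line_integral_convolution(np.ones((64, 64)), np.zeros((64, 64)), noise)
```

Averaging the noise along each streamline acts as the low-pass filter, so the output is smoother along the field direction than the input noise.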
S403: Generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer and on the relief texture map.
In an optional implementation, the brush layer corresponding to each brush queue is first drawn based on the attributes of the brush objects in the brush queue of each of the at least two brush layers; the brush layers corresponding to the at least two brush queues are then superimposed to obtain a second initial superimposed layer; finally, the relief texture map is superimposed onto the second initial superimposed layer to generate the target style image corresponding to the target image.
The specific implementation of superimposing the brush layers corresponding to the at least two brush queues can be understood with reference to the above embodiments and is not repeated here.
In an embodiment of the present disclosure, superimposing the relief texture map onto the second initial superimposed layer adds a relief texture effect to the target style image, improving the quality of the generated target style image and the user experience.
In an optional implementation, taking an oil-painting-style image as an example, the bump mapping technique can be used to add a relief texture effect to the oil-painting-style image from the relief texture map. The principle is to treat the relief texture map as the height information of the oil-painting-style image and compute normal information from the gradient of the relief texture, thereby producing shading that gives the oil-painting-style image a sense of relief. As shown in FIG. 3, H is the effect image corresponding to G in FIG. 3 after the relief texture effect has been added.
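The height-to-normal-to-shading principle can be sketched as follows; the light direction and relief strength are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

def bump_shade(base, height, light=(0.5, 0.5, 1.0), strength=2.0):
    """Shade `base` (RGB) using normals derived from a height map."""
    gy, gx = np.gradient(height)
    # Surface normal from the height gradient: (-dh/dx, -dh/dy, 1)
    nx, ny, nz = -strength * gx, -strength * gy, np.ones_like(height)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    l = np.asarray(light, dtype=float)
    l = l / np.linalg.norm(l)
    # Lambertian shading term, clamped to [0, 1]
    shade = np.clip((nx * l[0] + ny * l[1] + nz * l[2]) / norm, 0.0, 1.0)
    return np.clip(base * shade[..., None], 0.0, 255.0)

base = np.full((16, 16, 3), 180.0)
height = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # a gentle ramp as height map
shaded = bump_shade(base, height)
```

With the relief texture map as `height`, slopes facing the light brighten and slopes facing away darken, which is what produces the painted-relief impression.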
In addition, to further improve the quality of the generated target style image, processing such as sharpening may also be applied to it.
Before generating the target style image, the result of superimposing the relief texture map onto the second initial superimposed layer may also be superimposed again with the drawn skin brush layer, to maximize the quality of the target style image.
In the image generation method provided by the embodiments of the present disclosure, adding the relief texture effect makes the generated target style image closer to an image painted in a real painting scenario, improving its quality and thus increasing user satisfaction with the target style image.
Corresponding to the above method embodiments, the present disclosure further provides an image generation apparatus. Referring to FIG. 5, which is a schematic structural diagram of an image generation apparatus provided by an embodiment of the present disclosure, the apparatus includes:
a first setting module 501, configured to set at least two brush layers for a target image, wherein each of the at least two brush layers is set with a preset brush density;
a first determination module 502, configured to determine, based on the preset brush density of each brush layer, attributes of brush objects on each brush layer, the attributes including brush position information and brush size;
a first storing module 503, configured to establish a brush queue for each brush layer, and store each brush object into the brush queue on the brush layer to which it corresponds;
a first generation module 504, configured to generate, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image.
In an optional implementation, the first determination module 502 is specifically configured to sample each brush layer based on its preset brush density, to determine the brush position information of the brush objects on that brush layer.
In an optional implementation, the first determination module 502 is specifically configured to determine, based on the preset brush density of each brush layer and a preset correspondence between brush density and brush size, the preset brush size corresponding to the preset brush density, and to determine the brush size of the brush objects based on the preset brush size.
In an optional implementation, the attributes of the brush objects further include a drawing direction, the drawing direction being randomly generated.
In an optional implementation, the attributes of the brush objects further include a brush color, the brush color being determined based on the color value of the pixel on the target image that corresponds to the brush position information of the brush object.
In an optional implementation, the first generation module 504 includes:
a first drawing sub-module, configured to draw, based on the attributes of the brush objects in the brush queue of each brush layer, the brush layer corresponding to the brush queue;
a first generation sub-module, configured to superimpose the brush layers corresponding to at least two brush queues to generate the target style image corresponding to the target image.
In an optional implementation, the first generation sub-module includes:
a first determination sub-module, configured to determine the pixel value of the pixel with the smallest luminance value at each corresponding position across the brush layers corresponding to the at least two brush queues as the pixel value of the pixel at the corresponding position in the target style image corresponding to the target image.
In an optional implementation, the first drawing sub-module includes:
a second drawing sub-module, configured to draw, based on the attributes of the brush objects in the brush queue of each brush layer, the brush objects in each brush queue in parallel using a graphics processing unit;
a third drawing sub-module, configured to draw the brush layer corresponding to each brush queue based on the drawing of the brush objects in that brush queue.
In an optional implementation, the apparatus further includes:
a detection module, configured to detect whether the target image includes a person image;
a second setting module, configured to set, when the target image includes a person image, a skin brush layer for the target image based on segmentation of the skin region of the target image;
correspondingly, the first generation module 504 is specifically configured to:
generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layer.
In an optional implementation, the apparatus further includes:
a second storing module, configured to establish a brush queue for the skin brush layer, and store the brush objects into the brush queue of the skin brush layer to which they correspond, wherein the brush size in the attributes of the brush objects on the skin brush layer is determined based on the largest skin region detected on the target image.
In an optional implementation, the first generation module 504 includes:
a fourth drawing sub-module, configured to draw, based on the attributes of the brush objects in the brush queue of each of the at least two brush layers, the brush layer corresponding to the brush queue;
a first superimposing sub-module, configured to superimpose the brush layers corresponding to at least two brush queues to obtain a first initial superimposed layer;
a fifth drawing sub-module, configured to draw the skin brush layer based on the attributes of the brush objects in the brush queue of the skin brush layer;
a second superimposing sub-module, configured to superimpose the first initial superimposed layer and the skin brush layer to generate the target style image corresponding to the target image.
In an optional implementation, the apparatus further includes:
a second determination module, configured to determine the flat regions in the target image;
a second generation module, configured to generate a relief texture map for the target image based on the flat regions in the target image;
correspondingly, the first generation module is specifically configured to:
generate the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer and on the relief texture map.
In an optional implementation, the first generation module 504 includes:
a sixth drawing sub-module, configured to draw, based on the attributes of the brush objects in the brush queue of each of the at least two brush layers, the brush layer corresponding to the brush queue;
a third superimposing sub-module, configured to superimpose the brush layers corresponding to at least two brush queues to obtain a second initial superimposed layer;
a fourth superimposing sub-module, configured to superimpose the relief texture map onto the second initial superimposed layer to generate the target style image corresponding to the target image.
In an optional implementation, the apparatus further includes:
a first computation module, configured to compute the direction field of the second initial superimposed layer, and visualize the direction field by line integral convolution to obtain the initial texture map corresponding to the second initial superimposed layer;
correspondingly, the second generation module is specifically configured to:
smooth away, in the initial texture map corresponding to the second initial superimposed layer, the streamline texture in the areas corresponding to the flat regions, to obtain the relief texture map corresponding to the target image.
In an optional implementation, the second determination module includes:
a computation sub-module, configured to compute the structure tensor matrix corresponding to each pixel in the target image, and perform an eigenvalue decomposition of the structure tensor matrix to obtain the two eigenvalues corresponding to each pixel;
a second determination sub-module, configured to determine the texture degree of each pixel based on the larger of the two eigenvalues corresponding to the pixel;
a third determination sub-module, configured to determine the flat regions in the target image based on the texture degree corresponding to each pixel in the target image.
In the image generation apparatus provided by the embodiments of the present disclosure, the brush position information of the brush objects is determined by sampling the brush layers based on the preset brush density, and the brush size of the brush objects is determined based on the preset brush size corresponding to each brush layer. Determining the attributes of the brush objects in this simple and efficient manner improves the efficiency of determining the brush object attributes involved in drawing the target style image, and thus the efficiency of generating the target style image from those attributes.
In addition, the embodiments of the present disclosure generate the final target style image by drawing at least two brush layers and then superimposing them, which improves the quality of the generated target style image.
It can be seen that the image generation apparatus provided by the embodiments of the present disclosure can improve the quality of the target style image while improving the efficiency of generating it, increasing user satisfaction and user experience.
In addition to the above method and apparatus, an embodiment of the present disclosure further provides a computer-readable storage medium storing instructions that, when run on a terminal device, cause the terminal device to implement the image generation method described in the embodiments of the present disclosure.
An embodiment of the present disclosure further provides an image generation device, which, as shown in FIG. 6, may include:
a processor 601, a memory 602, an input apparatus 603, and an output apparatus 604. The number of processors 601 in the image generation device may be one or more; one processor is taken as an example in FIG. 6. In some embodiments of the present disclosure, the processor 601, the memory 602, the input apparatus 603, and the output apparatus 604 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
The memory 602 may be used to store software programs and modules, and the processor 601 executes the various functional applications and data processing of the image generation device by running the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, at least one application required by a function, and so on. In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. The input apparatus 603 may be used to receive input numeric or character information and to produce signal inputs related to the user settings and function control of the image generation device.
Specifically, in this embodiment, the processor 601 loads the executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and runs the application programs stored in the memory 602, thereby implementing the various functions of the image generation device.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a(n)…" does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not to be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (18)

  1. An image generation method, characterized in that the method comprises:
    setting at least two brush layers for a target image, wherein each of the at least two brush layers is set with a preset brush density;
    determining, based on the preset brush density of each brush layer, attributes of brush objects on each brush layer, the attributes comprising brush position information and brush size;
    establishing a brush queue for each brush layer, and storing the brush objects into the brush queue of the brush layer to which each brush object corresponds;
    generating, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image.
  2. The method according to claim 1, characterized in that the determining, based on the preset brush density of each brush layer, attributes of brush objects on each brush layer comprises:
    sampling the brush layer based on the preset brush density of each brush layer, to determine the brush position information of the brush objects on each brush layer.
  3. The method according to claim 1, characterized in that the determining, based on the preset brush density of each brush layer, attributes of brush objects on each brush layer comprises:
    determining, based on the preset brush density of each brush layer and a preset correspondence between brush density and brush size, a preset brush size corresponding to the preset brush density;
    determining the brush size of the brush objects based on the preset brush size.
  4. The method according to claim 1, characterized in that the attributes further comprise a drawing direction, the drawing direction being randomly generated.
  5. The method according to claim 1, characterized in that the attributes further comprise a brush color, the brush color being determined based on a color value of a pixel on the target image that corresponds to the brush position information of the brush object.
  6. The method according to claim 1, characterized in that the generating, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image comprises:
    drawing, based on the attributes of the brush objects in the brush queue of each brush layer, the brush layer corresponding to the brush queue;
    superimposing the brush layers corresponding to at least two brush queues to generate the target style image corresponding to the target image.
  7. The method according to claim 6, characterized in that the superimposing the brush layers corresponding to at least two brush queues to generate the target style image corresponding to the target image comprises:
    determining a pixel value of a pixel with the smallest luminance value at a corresponding position across the brush layers corresponding to the at least two brush queues as a pixel value of a pixel at the corresponding position in the target style image corresponding to the target image.
  8. The method according to claim 6, characterized in that the drawing, based on the attributes of the brush objects in the brush queue of each brush layer, the brush layer corresponding to the brush queue comprises:
    drawing, based on the attributes of the brush objects in the brush queue of each brush layer, the brush objects in each brush queue in parallel using a graphics processing unit;
    drawing the brush layer corresponding to each brush queue based on the drawing of the brush objects in that brush queue.
  9. The method according to claim 1, characterized in that, before the generating, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image, the method further comprises:
    detecting whether the target image includes a person image;
    when the target image includes a person image, setting a skin brush layer for the target image based on segmentation of a skin region of the target image;
    wherein the generating, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image comprises:
    generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layer.
  10. The method according to claim 9, characterized in that, before the generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layer, the method further comprises:
    establishing a brush queue for the skin brush layer, and storing brush objects into the brush queue of the skin brush layer to which they correspond, wherein the brush size in the attributes of the brush objects on the skin brush layer is determined based on a largest skin region detected on the target image.
  11. The method according to claim 9, characterized in that the generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each of the at least two brush layers and the skin brush layer comprises:
    drawing, based on the attributes of the brush objects in the brush queue of each of the at least two brush layers, the brush layer corresponding to the brush queue;
    superimposing the brush layers corresponding to at least two brush queues to obtain a first initial superimposed layer;
    and drawing the skin brush layer based on the attributes of the brush objects in the brush queue of the skin brush layer;
    superimposing the first initial superimposed layer and the skin brush layer to generate the target style image corresponding to the target image.
  12. The method according to claim 1, characterized in that, before the generating, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image, the method further comprises:
    determining flat regions in the target image;
    generating a relief texture map for the target image based on the flat regions in the target image;
    wherein the generating, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image comprises:
    generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer and on the relief texture map.
  13. The method according to claim 12, characterized in that the generating the target style image corresponding to the target image based on the attributes of the brush objects in the brush queue of each brush layer and on the relief texture map comprises:
    drawing, based on the attributes of the brush objects in the brush queue of each of the at least two brush layers, the brush layer corresponding to the brush queue;
    superimposing the brush layers corresponding to at least two brush queues to obtain a second initial superimposed layer;
    superimposing the relief texture map and the second initial superimposed layer to generate the target style image corresponding to the target image.
  14. The method according to claim 13, characterized in that, before the superimposing the relief texture map and the second initial superimposed layer to generate the target style image corresponding to the target image, the method further comprises:
    computing a direction field of the second initial superimposed layer, and visualizing the direction field by line integral convolution to obtain an initial texture map corresponding to the second initial superimposed layer;
    wherein the generating a relief texture map for the target image based on the flat regions in the target image comprises:
    smoothing away, in the initial texture map corresponding to the second initial superimposed layer, streamline texture in areas corresponding to the flat regions, to obtain the relief texture map corresponding to the target image.
  15. The method according to claim 12, characterized in that the determining flat regions in the target image comprises:
    computing a structure tensor matrix corresponding to each pixel in the target image, and performing eigenvalue decomposition on the structure tensor matrix to obtain two eigenvalues corresponding to each pixel;
    determining a texture degree of each pixel based on the larger of the two eigenvalues corresponding to the pixel;
    determining the flat regions in the target image based on the texture degree corresponding to each pixel in the target image.
  16. An image generation apparatus, characterized in that the apparatus comprises:
    a first setting module, configured to set at least two brush layers for a target image, wherein each of the at least two brush layers is set with a preset brush density;
    a first determination module, configured to determine, based on the preset brush density of each brush layer, attributes of brush objects on each brush layer, the attributes comprising brush position information and brush size;
    a first storing module, configured to establish a brush queue for each brush layer, and store the brush objects into the brush queue on the brush layer to which each brush object corresponds;
    a first generation module, configured to generate, based on the attributes of the brush objects in the brush queue of each brush layer, a target style image corresponding to the target image.
  17. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions that, when run on a terminal device, cause the terminal device to implement the method according to any one of claims 1 to 15.
  18. A device, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 15.
PCT/CN2021/112039 2020-09-02 2021-08-11 Image generation method, apparatus, device, and storage medium WO2022048414A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/023,893 US20230334729A1 (en) 2020-09-02 2021-08-11 Image generation method, apparatus, and device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010909071.9A 2020-09-02 2020-09-02 Image generation method, apparatus, device, and storage medium
CN202010909071.9 2020-09-02

Publications (1)

Publication Number Publication Date
WO2022048414A1 true WO2022048414A1 (zh) 2022-03-10

Family

ID=73665253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112039 WO2022048414A1 (zh) 2020-09-02 2021-08-11 一种图像生成方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US20230334729A1 (zh)
CN (1) CN112070854B (zh)
WO (1) WO2022048414A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070854B (zh) * 2020-09-02 2023-08-08 北京字节跳动网络技术有限公司 Image generation method, apparatus, device, and storage medium
CN114119847B (zh) * 2021-12-05 2023-11-07 北京字跳网络技术有限公司 Graphics processing method and apparatus, computer device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241593A (zh) * 2007-02-06 2008-08-13 英特维数位科技股份有限公司 Image processing apparatus for layered images and method thereof
CN103164158A (zh) * 2013-01-10 2013-06-19 深圳市欧若马可科技有限公司 Method, system, and apparatus for painting creation and teaching on a touch screen
CN103903293A (zh) * 2012-12-27 2014-07-02 腾讯科技(深圳)有限公司 Method and apparatus for generating brushstroke-style artistic images
CN104820999A (zh) * 2015-04-28 2015-08-05 成都品果科技有限公司 Method for converting a natural image into an ink-wash-painting-style image
CN106683151A (zh) * 2015-11-05 2017-05-17 北大方正集团有限公司 Brush trajectory drawing method and drawing apparatus
US20190304200A1 (en) * 2018-03-30 2019-10-03 Clo Virtual Fashion Apparatus and method for transferring garment draping between avatars
CN112070854A (zh) * 2020-09-02 2020-12-11 北京字节跳动网络技术有限公司 Image generation method, apparatus, device, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846390B (zh) * 2017-02-27 2020-10-13 迈吉客科技(北京)有限公司 Image processing method and apparatus
CN108319953B (zh) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 Occlusion detection method and apparatus for a target object, electronic device, and storage medium
CN111127596B (zh) * 2019-11-29 2023-02-14 长安大学 Layered oil-painting brush rendering method based on incremental Voronoi sequences


Also Published As

Publication number Publication date
CN112070854A (zh) 2020-12-11
CN112070854B (zh) 2023-08-08
US20230334729A1 (en) 2023-10-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21863481; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21863481; Country of ref document: EP; Kind code of ref document: A1)