CN115358959A - Generation method, device and equipment of special effect graph and storage medium - Google Patents

Generation method, device and equipment of special effect graph and storage medium

Info

Publication number
CN115358959A
Authority
CN
China
Prior art keywords
map
illumination
target object
light source
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211035725.5A
Other languages
Chinese (zh)
Inventor
袁琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211035725.5A priority Critical patent/CN115358959A/en
Publication of CN115358959A publication Critical patent/CN115358959A/en
Priority to PCT/CN2023/114831 priority patent/WO2024041623A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The embodiments of the present disclosure provide a method, device, equipment and storage medium for generating a special effect graph. The method includes: acquiring current light source information, a normal map of an original image, and a target object mask map, where the light source information includes a light source color, a light source position and an illumination intensity; generating a target illumination map according to the normal map, the target object mask map and the light source information; and fusing the target illumination map with the original image to obtain a target illumination special effect image. Because the target illumination map is generated according to the normal map, the target object mask map and the light source information, the method can generate a special effect graph with a lighting effect and enrich the displayed content of the image.

Description

Generation method, device and equipment of special effect graph and storage medium
Technical Field
Embodiments of the present disclosure relate to the technical field of image processing, and in particular to a method, device, equipment and storage medium for generating a special effect graph.
Background
At present, photographing and video-making functions on mobile terminals have become some of the functions most commonly used by users. When photographing or making videos under natural lighting, users often find that the lighting effect of the captured photos or videos cannot meet their personalized requirements.
Disclosure of Invention
The embodiments of the present disclosure provide a method, device, equipment and storage medium for generating a special effect graph, which can generate a special effect graph with a lighting effect and enrich the displayed content of the image.
In a first aspect, an embodiment of the present disclosure provides a method for generating a special effect graph, including:
acquiring current light source information, a normal map of an original image and a target object mask map; the light source information comprises light source color, light source position and illumination intensity;
generating a target illumination map according to the normal map, the target object mask map and the light source information;
and fusing the target illumination map with the original image to obtain a target illumination special effect image.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for generating a special effect map, including:
the acquisition module is used for acquiring current light source information, a normal map of an original image and a target object mask map; the light source information comprises light source color, light source position and illumination intensity;
the target illumination map generation module is used for generating a target illumination map according to the normal map, the target object mask map and the light source information;
and the target illumination special effect image acquisition module is used for fusing the target illumination map with the original image to obtain a target illumination special effect image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for generating the special effects map according to the embodiment of the present disclosure.
In a fourth aspect, the present disclosure also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are used for executing the method for generating the special effect graph according to the present disclosure.
The embodiments of the present disclosure disclose a method, device, equipment and storage medium for generating a special effect graph. The method includes: acquiring current light source information, a normal map of an original image, and a target object mask map, where the light source information includes a light source color, a light source position and an illumination intensity; generating a target illumination map according to the normal map, the target object mask map and the light source information; and fusing the target illumination map with the original image to obtain a target illumination special effect image. Because the target illumination map is generated according to the normal map, the target object mask map and the light source information, the method can generate a special effect graph with a lighting effect and enrich the displayed content of the image.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a method for generating a special effect graph according to an embodiment of the present disclosure;
FIG. 2a is an exemplary diagram of a target object mask map provided by an embodiment of the present disclosure;
FIG. 2b is an exemplary diagram of a second illumination intensity map provided by an embodiment of the present disclosure;
FIG. 2c is an exemplary diagram of an illumination intensity mask map provided by an embodiment of the present disclosure;
FIG. 3a is an exemplary diagram of a reverse target object mask map provided by an embodiment of the present disclosure;
FIG. 3b is an exemplary diagram of a first backlight intensity map provided by an embodiment of the present disclosure;
FIG. 3c is an exemplary diagram of a first blurred mask map provided by an embodiment of the present disclosure;
FIG. 3d is an exemplary diagram of a second blurred mask map provided by an embodiment of the present disclosure;
FIG. 3e is an exemplary diagram of a fusion mask map provided by an embodiment of the present disclosure;
FIG. 3f is an exemplary diagram of a second backlight intensity map provided by an embodiment of the present disclosure;
FIG. 3g is an exemplary diagram of a target backlight intensity map provided by an embodiment of the present disclosure;
FIG. 4a is an exemplary diagram of a grayscale map provided by an embodiment of the present disclosure;
FIG. 4b is an exemplary diagram of a local object mask map provided by an embodiment of the present disclosure;
FIG. 4c is an exemplary diagram of a smooth grayscale map provided by an embodiment of the present disclosure;
FIG. 4d is an exemplary diagram of a smoothed local object map provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a device for generating a special effect graph according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed of the type, the use range, the use scene, etc. of the personal information related to the present disclosure in a proper manner according to the relevant laws and regulations and obtain the authorization of the user.
For example, in response to receiving an active request from a user, a prompt message is sent to the user to explicitly inform the user that the requested operation will require acquiring and using the user's personal information. Based on the prompt, the user can then autonomously decide whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation manner, in response to receiving an active request from the user, the manner of sending the prompt information to the user may be, for example, a pop-up window, and the prompt information may be presented in a text manner in the pop-up window. In addition, a selection control for providing personal information to the electronic device by the user's selection of "agreeing" or "disagreeing" can be carried in the pop-up window.
It is understood that the above notification and user authorization process is only illustrative and is not intended to limit the implementation of the present disclosure, and other ways of satisfying the relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
Fig. 1 is a flowchart of a method for generating a special effect graph according to an embodiment of the present disclosure. The embodiment is applicable to situations where an illumination special effect image is to be generated. The method may be executed by a special effect graph generation apparatus, which may be implemented in software and/or hardware and, optionally, configured in an electronic device such as a mobile terminal, a PC, or a server.
As shown in fig. 1, the method includes:
s110, obtaining the current light source information, a normal map of the original image and a target object mask map.
The light source information includes a light source color, a light source position and an illumination intensity. The light source may be a virtual light source, and the light source position may be changed based on a user's trigger operation. In this embodiment, the virtual light source may be generated as follows: first, a transformable (rotatable, translatable and scalable) empty virtual object is generated and placed at the world coordinate origin; then, a transformable light source object is generated and offset to a position a certain distance d away from the world coordinate origin; finally, the light source object is attached as a child object of the empty virtual object, and drag operations on the touch screen are mapped to transformations of the empty virtual object, so that the light source object is driven to move on a spherical surface. Here, d serves as the sphere radius, and the light source object is the virtual light source. The light source color and the illumination intensity may be preset, i.e. set by the user.
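As a non-limiting sketch of the spherical light-source rig described above (the function name, drag sensitivity and rotation order are assumptions for illustration, not part of the disclosure), the drag-to-sphere mapping might look like this:

```python
import numpy as np

def light_world_position(drag_dx, drag_dy, radius_d, sensitivity=0.01):
    """Map a touch-screen drag to a light source position on a sphere of radius d.

    The empty virtual object sits at the world origin; the light source object is
    its child, offset by radius_d. Rotating the parent by the drag therefore moves
    the light on the sphere's surface.
    """
    yaw = drag_dx * sensitivity    # horizontal drag rotates about the y axis
    pitch = drag_dy * sensitivity  # vertical drag rotates about the x axis

    ry = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(pitch), -np.sin(pitch)],
                   [0.0, np.sin(pitch), np.cos(pitch)]])

    local_offset = np.array([0.0, 0.0, radius_d])  # child offset by distance d
    return ry @ rx @ local_offset                  # parent rotation applied to the child
```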
The normal map may be an image formed by the normal information of each pixel point in the original image. The normal information is represented by a three-dimensional vector, namely the normal vector; its three components are mapped to three color channel values respectively, thereby obtaining the normal map. In this embodiment, any normal estimation algorithm may be adopted to determine the normal information of each pixel point of the original image, and the normal map is generated based on this normal information.
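A minimal sketch of the normal-to-color mapping, assuming the common convention of encoding components in [-1, 1] as channel values in [0, 1] (the disclosure does not fix the exact encoding):

```python
import numpy as np

def normals_to_normal_map(normals):
    """Encode per-pixel unit normals of shape (H, W, 3) as RGB values in [0, 1]."""
    return normals * 0.5 + 0.5

def normal_map_to_normals(normal_map):
    """Decode a normal map back to (approximately unit-length) normal vectors."""
    n = normal_map * 2.0 - 1.0
    return n / np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-6)
```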
The target object may be any object such as a person, an animal or a plant, which is not limited herein. The pixel value of a pixel point in the target object mask map represents the confidence that the pixel point belongs to the target object. In this embodiment, the target object mask map of the original image may be obtained as follows: target object recognition is performed on the original image, the confidence that each pixel point belongs to the target object is obtained, and the target object mask map is generated based on these confidences. The pixel values of the target object mask map lie between 0 and 1, where "0" indicates that the pixel point does not belong to the target object and appears black in the mask map, and "1" indicates that the pixel point belongs to the target object and appears white in the mask map. For example, fig. 2a is an exemplary diagram of a target object mask map in the present embodiment; as shown in fig. 2a, the target object is a portrait, the white area is the portrait, and the black area is the background.
S120, generating a target illumination map according to the normal map, the target object mask map and the light source information.
The color value of each pixel point in the target illumination map represents the illumination color of that pixel point. In this embodiment, the target illumination map is composed of a target object illumination map, a background area illumination map and a backlight illumination map.
In this embodiment, the manner of generating the target illumination map according to the normal map, the target object mask map and the light source information may be: generating a target object illumination map according to the normal map, the light source information and the target object mask map; generating a background area illumination map based on the light source information and the target object mask map; generating a backlight map according to the normal map and the target object mask map; and fusing the target object illumination map, the background area illumination map and the backlight illumination map to obtain the target illumination map.
The target object illumination map is generated from the illumination color of each pixel point of the target object; the background area illumination map is generated from the illumination color of each pixel point in the background area; and the backlight map is generated from the backlight color of each pixel point of the original image.
Specifically, the target object illumination map may be generated from the normal map, the light source information and the target object mask map as follows: first, the illumination intensity of each pixel point of the target object is determined according to the normal map, the light source information and the target object mask map; the illumination intensity is then fused with the light source color to obtain the illumination color of each pixel point of the target object, and the target object illumination map is generated based on these illumination colors. The background area illumination map may be generated based on the light source information and the target object mask map as follows: first, the illumination intensity of each pixel point in the background area is determined according to the light source information and the target object mask map; the illumination intensity is then fused with the light source color to obtain the illumination color of each pixel point in the background area, and the background area illumination map is generated based on these illumination colors. The backlight map may be generated from the normal map and the target object mask map as follows: first, the backlight intensity of each pixel point of the original image is determined according to the normal map and the target object mask map; the backlight intensity is then fused with the light source color to obtain the backlight color of each pixel point, and finally the backlight map is generated based on these backlight colors. In this embodiment, the target illumination map is generated from the target object illumination map, the background area illumination map and the backlight illumination map; because different illumination maps are generated for different image areas and illumination directions, an illumination map with alternating light and dark regions is obtained.
Optionally, the method for generating the target object illumination map according to the normal map, the light source information, and the target object mask map may be: determining a first illumination intensity map according to the normal map and the light source information; fusing the first illumination intensity graph and the light source color to obtain an initial illumination graph; and fusing the initial illumination map and the target object mask map to obtain the target object illumination map.
The pixel value of each pixel point in the first illumination intensity map represents the illumination intensity of that pixel point. Specifically, the first illumination intensity map may be determined from the normal map and the light source information as follows: intensity attenuation information of each pixel point is first determined according to the normal information in the normal map and the light source information, the light source intensity is adjusted based on the attenuation information to obtain the illumination intensity of each pixel point, and finally the first illumination intensity map is generated based on these intensities. The first illumination intensity map may be fused with the light source color by multiplying the illumination intensity of each pixel point in the first illumination intensity map by the light source color to obtain the illumination color of each pixel point. The initial illumination map may be fused with the target object mask map by multiplying the illumination color in the initial illumination map by the pixel value of the corresponding pixel point in the target object mask map to obtain the target object illumination map. In this embodiment, fusing the initial illumination map with the target object mask map allows the illumination color of each pixel point of the target object to be determined accurately.
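A minimal sketch of this fusion chain, assuming float images in [0, 1] and that "fusing" means the element-wise multiplications described above (array names are illustrative):

```python
import numpy as np

def target_object_illumination_map(first_intensity, light_color, object_mask):
    """first_intensity: (H, W) first illumination intensity map.
    light_color:      (3,)  RGB light source color.
    object_mask:      (H, W) target object mask map (confidence per pixel).
    """
    # Initial illumination map: per-pixel intensity times the light source color.
    initial = first_intensity[..., None] * light_color[None, None, :]
    # Restrict the illumination colors to the target object using the mask.
    return initial * object_mask[..., None]
```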
Optionally, the manner of determining the first illumination intensity map according to the normal map and the light source information may be: determining first included angle information between incident light and pixel points according to a normal map and light source information; determining attenuation information according to the distance between the light source and the pixel points in the original image; adjusting the illumination intensity according to the first included angle information and the attenuation information to obtain the target intensity of the pixel point; and generating a first illumination intensity map based on the target intensity of the pixel point.
Specifically, the first included angle information between the incident light and a pixel point may be determined from the normal map and the light source information as follows: an illumination direction vector is determined, the normal vector of each pixel point is extracted from the normal map, the illumination direction vector and each normal vector are normalized, and the dot product of the normalized illumination direction vector and the normalized normal vector is taken as the first included angle information. Here, the illumination direction vector may be the vector pointing from the light source position (expressed in world coordinates) to the center position of the target object (screen coordinates converted to world coordinates), or the vector pointing from the light source position to the center position of the target object's face.
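A sketch of the first included angle computation under the light-source-to-target-center direction convention mentioned above (the sign of the result depends on which of the two conventions is chosen):

```python
import numpy as np

def first_angle_info(normals, light_pos_world, target_center_world):
    """Per-pixel cosine between the illumination direction and the normal.

    normals: (H, W, 3) normalized normal vectors decoded from the normal map.
    """
    light_dir = target_center_world - light_pos_world
    light_dir = light_dir / np.linalg.norm(light_dir)   # normalized illumination direction
    return normals @ light_dir                          # dot product per pixel, shape (H, W)
```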
Specifically, the attenuation information may be determined from the distance between the light source and a pixel point in the original image as follows: the light source position is converted from world coordinates to screen coordinates, the distance between the light source position in the screen coordinate system and the pixel point in the original image is calculated, the ratio of this distance to the halo radius is subtracted from 1 to obtain an intermediate result value, and the intermediate result value is then raised to the power of a set attenuation value to obtain the attenuation information. The light source position may be converted from world coordinates to screen coordinates as follows: the world coordinates of the light source are first multiplied by an MVP (Model-View-Projection) transformation matrix to obtain the projection coordinates of the light source, and the x and y components of the projection coordinates are then linearly transformed to obtain the light source's screen coordinates.
Specifically, the manner of adjusting the illumination intensity according to the first included angle information and the attenuation information may be: and multiplying the illumination intensity by the first included angle information and the attenuation information in sequence to obtain the target intensity of each pixel point. The manner of generating the first illumination intensity map based on the target intensity of the pixel point may be: the target intensity is taken as the pixel value of the pixel, thereby obtaining a first illumination intensity map. In this embodiment, the illumination intensity is adjusted according to the first included angle information and the attenuation information, so that the accuracy of determining the illumination intensity of the pixel point can be improved.
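Putting the angle term, the distance attenuation and the light source intensity together, a sketch of the first illumination intensity map could look like the following; the halo radius, the attenuation exponent and the clamping of the intermediate result are assumptions consistent with, but not spelled out in, the description above:

```python
import numpy as np

def first_illumination_intensity_map(angle_info, light_pos_screen, pixel_coords,
                                     halo_radius, attenuation_exp, light_intensity):
    """angle_info:       (H, W) first included angle information.
    light_pos_screen: (2,)  light position in screen coordinates (world coordinates
                      multiplied by the MVP matrix, then x/y linearly transformed).
    pixel_coords:     (H, W, 2) screen coordinates of the original image's pixels.
    """
    dist = np.linalg.norm(pixel_coords - light_pos_screen, axis=-1)
    # Intermediate result: 1 - distance / halo radius (clamped here as a safeguard),
    # raised to the power of the set attenuation value.
    attenuation = np.clip(1.0 - dist / halo_radius, 0.0, 1.0) ** attenuation_exp
    # Target intensity: light intensity scaled by the angle and attenuation terms.
    return light_intensity * angle_info * attenuation
```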
In this embodiment, the manner of generating the background region illumination map based on the light source information and the target object mask map may be: acquiring the distance between a light source and a pixel point in an original image; generating a second illumination intensity map according to the distance and the illumination intensity; fusing the second illumination intensity image and the target object mask image to obtain an illumination intensity mask image; and fusing the set color and the illumination color based on the illumination intensity mask image to obtain a background area illumination image.
Specifically, the distance between the light source and a pixel point in the original image may be obtained by converting the light source position from world coordinates to screen coordinates and then calculating the distance between the light source position in the screen coordinate system and the pixel point in the original image. The second illumination intensity map may be generated from the distance and the illumination intensity as follows: an exponential operation is performed on the distance, the result is subtracted from 1 to obtain an intermediate result, the intermediate result is multiplied by the illumination intensity of the light source to obtain the illumination intensity of each pixel point, and the second illumination intensity map is generated from these intensities. Exemplarily, fig. 2b is an exemplary diagram of the second illumination intensity map in the present embodiment.
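The exact exponential form used for the background intensity is not specified; the sketch below assumes a normalized distance raised to a falloff exponent, as one way of reading the "1 minus an exponential of the distance" description:

```python
import numpy as np

def second_illumination_intensity_map(light_pos_screen, pixel_coords,
                                      light_intensity, max_dist, falloff=2.0):
    """Distance-based illumination intensity used for the background area."""
    dist = np.linalg.norm(pixel_coords - light_pos_screen, axis=-1)
    d = np.clip(dist / max_dist, 0.0, 1.0)   # normalized distance (assumption)
    intermediate = 1.0 - d ** falloff        # bright near the light, dark far away
    return light_intensity * intermediate
```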
Specifically, the second illumination intensity map may be fused with the target object mask map as follows: the pixel value of the corresponding pixel point in the target object mask map is subtracted from the pixel value of each pixel point in the second illumination intensity map; if the result is less than 0 it is clamped to 0, and if it is greater than 1 it is clamped to 1, yielding the illumination intensity mask map. Exemplarily, fig. 2c is an exemplary diagram of the illumination intensity mask map in the present embodiment; as shown in fig. 2c, the illumination intensity mask map can be understood as an illumination intensity map with the target object matted out of the second illumination intensity map.
The set color may be black, with the corresponding color value (0, 0, 0). Specifically, the set color and the illumination color may be fused based on the illumination intensity mask map as follows: the pixel value of the illumination intensity mask map is taken as the weighting coefficient of the illumination color, the result of subtracting that pixel value from 1 is taken as the weighting coefficient of the set color, the set color and the illumination color are weighted and summed with these coefficients, and the summed color value is finally multiplied by a set value to obtain the illumination color of each pixel point, thereby obtaining the background area illumination map. The set value may be 0.5. In this embodiment, the illumination colors determined for the background area and the target object area differ, so the image shows an alternating light and dark effect.
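A sketch of the masking and blending steps for the background area, with black as the set color and 0.5 as the set value, as stated above:

```python
import numpy as np

def background_area_illumination_map(second_intensity, object_mask, light_color,
                                     set_color=np.zeros(3), set_value=0.5):
    # Illumination intensity mask map: matte the target object out of the
    # second illumination intensity map and clamp the result to [0, 1].
    intensity_mask = np.clip(second_intensity - object_mask, 0.0, 1.0)

    # Weighted sum of the illumination (light source) color and the set color,
    # scaled by the set value.
    w = intensity_mask[..., None]
    blended = w * light_color[None, None, :] + (1.0 - w) * set_color[None, None, :]
    return blended * set_value
```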
Optionally, the manner of generating the backlight map according to the normal map and the target object mask map may be: determining a first backlight intensity map according to the normal map, the target object mask map and the visual angle information; carrying out reverse processing on the target object mask image to obtain a reverse target object mask image; generating a second backlight intensity map according to the target object mask map and the reverse target object mask map; fusing the first back illumination intensity graph and the second back illumination intensity graph to obtain a target back illumination intensity graph; and fusing the target backlight intensity graph and the light source color to obtain a backlight graph.
The viewing angle information may be the viewing angle of the virtual camera corresponding to the current image, represented by a viewing direction vector. The reverse processing of the target object mask map may be performed by subtracting the pixel value of each pixel point in the target object mask map from 1 to obtain the reverse target object mask map. Illustratively, fig. 3a is an exemplary diagram of the reverse target object mask map in the present embodiment; as shown in fig. 3a, compared with the target object mask map, the portrait area becomes black and the background area becomes white.
Specifically, the manner of determining the first backlight intensity map according to the normal map, the target object mask map and the viewing angle information may be: determining second included angle information between normal information and visual angle information of each pixel point in the normal map; determining the initial back illumination intensity of each pixel point according to the second included angle information; generating an initial back illumination intensity map based on the initial back illumination intensity; and fusing the initial back illumination intensity image and the target object mask image to obtain a first back illumination intensity image.
The second included angle information between the normal information and the viewing angle information of each pixel point in the normal map may be determined as follows: the normal vector and the viewing direction vector are normalized, their dot product is computed, and the result is clamped to the range 0 to 1 to obtain the second included angle information. The initial backlight intensity of each pixel point may then be determined by subtracting the second included angle information from 1 and raising the result to the power of a set control intensity, which may be a user-set value; the exponentiation result is the initial backlight intensity of the pixel point. The initial backlight intensity map may be fused with the target object mask map by multiplying the pixel value of each pixel point in the initial backlight intensity map by the pixel value of the corresponding pixel point in the target object mask map. Illustratively, fig. 3b is an exemplary diagram of the first backlight intensity map in the present embodiment. In this way, an intensity map with a contour (rim) light effect is generated.
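This is essentially a rim-light (Fresnel-like) term. A sketch, assuming a single viewing direction vector for the whole image and a user-set control intensity exponent:

```python
import numpy as np

def first_backlight_intensity_map(normals, view_dir, object_mask, control_intensity=2.0):
    """Contour-light intensity restricted to the target object.

    normals:  (H, W, 3) normalized normal vectors.
    view_dir: (3,)     normalized viewing direction vector.
    """
    ndotv = np.clip(normals @ view_dir, 0.0, 1.0)      # second included angle information
    initial = (1.0 - ndotv) ** control_intensity       # brightest near the silhouette
    return initial * object_mask                       # keep only the target object
```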
Optionally, the second backlight intensity map may be generated from the target object mask map and the reverse target object mask map as follows: blurring the target object mask map and the reverse target object mask map respectively to obtain a first blurred mask map and a second blurred mask map; fusing the second blurred mask map with the reverse target object mask map to obtain a fusion mask map; and fusing the fusion mask map with the first blurred mask map to obtain the second backlight intensity map.
The first blurred mask map is the blurred target object mask map, and the second blurred mask map is the blurred reverse target object mask map; the blur may be a Gaussian blur. Illustratively, fig. 3c is an exemplary diagram of the first blurred mask map and fig. 3d is an exemplary diagram of the second blurred mask map. Specifically, the second blurred mask map may be fused with the reverse target object mask map by taking, per pixel, the maximum of the pixel value in the second blurred mask map and the corresponding pixel value in the reverse target object mask map, and generating the fusion mask map from these maxima. Illustratively, fig. 3e is an exemplary diagram of the fusion mask map in the present embodiment. Specifically, the fusion mask map may be fused with the first blurred mask map by taking, per pixel, the minimum of the pixel value in the fusion mask map and the corresponding pixel value in the first blurred mask map, and generating the second backlight intensity map from these minima. Illustratively, fig. 3f is an exemplary diagram of the second backlight intensity map in this embodiment. The resulting second backlight intensity map has the effect of outlining the target object.
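A sketch of the blur/maximum/minimum combination, using a Gaussian blur as mentioned above; the blur radius is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_backlight_intensity_map(object_mask, sigma=5.0):
    inverse_mask = 1.0 - object_mask                        # reverse target object mask map
    first_blurred = gaussian_filter(object_mask, sigma)     # first blurred mask map
    second_blurred = gaussian_filter(inverse_mask, sigma)   # second blurred mask map

    fusion_mask = np.maximum(second_blurred, inverse_mask)  # per-pixel maximum
    return np.minimum(fusion_mask, first_blurred)           # thin band along the outline
```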
Specifically, the manner of fusing the first back-lighting intensity map and the second back-lighting intensity map may be: and adding the pixel values of the first back illumination intensity image and the pixel values of the second back illumination intensity image to obtain a target back illumination intensity image. Illustratively, fig. 3g is an exemplary diagram of a target backlight intensity map in the present embodiment. The way of fusing the target back illumination intensity map and the light source color may be: and multiplying the illumination intensity in the target back illumination intensity graph by the color of the light source to obtain a back illumination graph.
Optionally, the target object illumination map, the background area illumination map, and the back illumination map are fused, and the manner of obtaining the target illumination map may be: determining the relative position between the light source and the target object according to the first included angle information; and fusing the target object illumination map, the background area illumination map and the backlight illumination map based on the relative positions to obtain the target illumination map.
The relative position is one of: the light source is positioned in the forward direction of the target object, the light source is positioned in the backward direction of the target object, or the light source is positioned in the lateral direction of the target object. The first included angle information is the dot product of the normal vector and the illumination direction vector. The relative position between the light source and the target object may be determined from the first included angle information as follows: if the first included angle information glare is in the range (t, 1], i.e. t < glare ≤ 1, the light source is positioned in the forward direction of the target object; if glare is in the range [-1, -t), i.e. -1 ≤ glare < -t, the light source is positioned in the backward direction of the target object; and if glare is in the range [-t, t], i.e. -t ≤ glare ≤ t, the light source is positioned in the lateral direction of the target object. Here t may be a value between 0 and 1, for example 0.1 or 0.2.
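Expressed directly in terms of the glare value, the three-way classification can be sketched as follows (t = 0.1 is one of the example thresholds above):

```python
def relative_position(glare, t=0.1):
    """Classify where the light source sits relative to the target object."""
    if glare > t:        # t < glare <= 1
        return "forward"
    if glare < -t:       # -1 <= glare < -t
        return "backward"
    return "lateral"     # -t <= glare <= t
```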
Specifically, the target object illumination map, the background area illumination map and the backlight illumination map may be fused based on the relative position as follows: if the light source is positioned in the forward direction of the target object, the target object illumination map and the background area illumination map are fused to obtain the target illumination map; if the light source is positioned in the backward direction of the target object, the background area illumination map and the backlight illumination map are fused to obtain the target illumination map; if the light source is positioned in the lateral direction of the target object, the target object illumination map and the backlight illumination map are interpolated and fused to obtain an intermediate illumination map, and the intermediate illumination map is fused with the background area illumination map to obtain the target illumination map.
Specifically, the manner of fusing the target object illumination map and the background area illumination map may be: and adding the pixel value of the target object illumination image and the pixel value of the background area illumination image. The way of fusing the background region illumination map and the backlight illumination map may be: and adding the pixel value of the background area illumination pattern and the pixel value of the backlight pattern.
Specifically, the interpolation and fusion of the target object illumination map and the backlight illumination map may be as follows: determining a mapping relation between [ -t, t ] and [0,1] through interpolation operation, determining a target value corresponding to the first included angle information according to the mapping relation, determining a result of subtracting the target value from 1 as a weighting coefficient of the target object illumination map, determining the target value as a weighting coefficient of the back illumination map, and performing weighting calculation on the target object illumination map and the back illumination map according to the weighting coefficient to obtain an intermediate illumination map. The manner of fusing the intermediate illumination map and the background region illumination map may be: and adding the pixel value of the intermediate illumination map and the pixel value of the background area illumination map. In this embodiment, the target illumination map is determined based on the relative position between the light source and the target object, and the accuracy and the degree of reality of the target illumination map can be improved.
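A sketch of the lateral-light case: the glare value is remapped from [-t, t] to [0, 1] and used to blend the target object illumination map with the backlight map, after which the background illumination is added (array names are illustrative):

```python
import numpy as np

def fuse_lateral(object_illum, backlight_illum, background_illum, glare, t=0.1):
    w = (glare + t) / (2.0 * t)                  # map [-t, t] onto [0, 1]
    intermediate = (1.0 - w) * object_illum + w * backlight_illum
    return intermediate + background_illum       # target illumination map
```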
S130, fusing the target illumination map with the original image to obtain a target illumination special effect image.
Specifically, the target illumination map may be fused with the original image by adding the color values of the target illumination map to the color values of the original image.
Optionally, the method further comprises the following steps: acquiring a grayscale map and a local object mask map of the original image; fusing the grayscale map and the local object mask map to obtain a local object map; adjusting the light source color according to the normal map, the viewing angle information and the light source position to obtain a target illumination color; fusing the set color and the target illumination color based on the local object map to obtain a local object illumination map; and fusing the local object illumination map with the target object illumination map to obtain an updated target object illumination map.
The local object is an object composed of a local region of the target object. In this embodiment, if the target object is a portrait, the local object is hair.
The manner of obtaining the gray scale map of the original image may be: and carrying out gray level processing on the original image to obtain a gray level image. Exemplarily, fig. 4a is an exemplary diagram of a gray scale map in the present embodiment.
The method for acquiring the local object mask map of the original image may be as follows: and identifying the local object in the original image to obtain a mask image of the local object. Exemplarily, fig. 4b is an exemplary diagram of a local object mask diagram in the present embodiment.
Specifically, the manner of fusing the gray-scale image and the local object mask image to obtain the local object image may be: carrying out smoothing treatment on the gray level image to obtain a smooth gray level image; and fusing the smooth gray level image and the local object mask image to obtain a smooth local object image.
The grayscale map may be smoothed as follows: a first difference between the gray value and a first set value is determined, a second difference between a second set value and the first set value is determined, the ratio of the first difference to the second difference is calculated, the ratio is clamped to the range 0 to 1, and an N-power operation is finally applied to the ratio to obtain the processed gray value, from which the smooth grayscale map is generated. The first set value may be 0, the second set value may be 0.5, and N may be 3. The N-power operation applied to the ratio may be expressed as x(a - bx), where a and b are constants and x is the ratio. Illustratively, fig. 4c is an exemplary diagram of the smooth grayscale map in the present embodiment.
specifically, the method of fusing the smooth grayscale map and the local object mask map may be: and multiplying the pixel value of the smooth gray level image and the pixel value of the local object mask image. Exemplarily, fig. 4d is an exemplary diagram of a smoothed partial object diagram in the present embodiment. In this embodiment, the grayscale image is smoothed, so that more details of the local object can be highlighted, and the local object has a highlight effect. For the hair of the embodiment, the gray level image is smoothed, so that more detailed hair can be obtained, and a more accurate illumination effect can be obtained.
Specifically, the light source color is adjusted according to the normal map, the viewing angle information, and the light source position, and the manner of obtaining the target illumination color may be: determining the direction of reflected light illumination according to the normal map and the position of the light source; determining third included angle information of a visual angle and a reflected illumination direction; and adjusting the color of the light source according to the third included angle information to obtain the target illumination color.
The reflected illumination direction may be understood as the direction of the reflected light. The reflected illumination direction may be determined from the normal map and the light source position as follows: an illumination direction vector is determined from the light source position and the center position of the target object, and a linear calculation is performed on the illumination direction vector and the normal vector to obtain the reflected illumination direction vector. The linear calculation may be performed as follows: the dot product of the illumination direction vector and the normal vector is computed, the dot product, a set value and the normal vector are multiplied together, and the resulting vector is subtracted from the illumination direction vector. The set value may be 2.
The third included angle information between the viewing angle and the reflected illumination direction may be determined by taking the dot product of the viewing direction vector and the reflected illumination direction vector as the third included angle information.
The light source color may be adjusted according to the third included angle information to obtain the target illumination color as follows: the third included angle information is raised to the power of a set highlight value, and the exponentiation result is multiplied by the light source color to obtain the target illumination color. The set highlight value is a value configured by the user. In this embodiment, adjusting the light source color according to the third included angle information allows the local object to exhibit a highlight effect.
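Taken together, these three steps amount to a Phong-style specular term. A sketch, assuming a single viewing direction vector and treating the clamp and the highlight exponent value as illustrative choices:

```python
import numpy as np

def target_illumination_color(normals, light_pos, target_center, view_dir,
                              light_color, highlight=32.0):
    """Per-pixel 'target illumination color' for the local object highlight."""
    l = target_center - light_pos
    l = l / np.linalg.norm(l)                            # illumination direction vector
    ndotl = normals @ l                                  # (H, W) dot products
    reflected = l[None, None, :] - 2.0 * ndotl[..., None] * normals   # R = L - 2(N.L)N

    third_angle = np.clip(reflected @ view_dir, 0.0, 1.0)             # angle with the view
    return (third_angle ** highlight)[..., None] * light_color        # adjusted light color
```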
Correspondingly, the method for obtaining the local object illumination special effect graph by fusing the set color and the target illumination color based on the local object graph may be as follows: and fusing the set color and the target illumination color based on the smooth local object image to obtain a local object illumination special effect image.
The set color may be black, with the corresponding color value (0, 0, 0). The set color and the target illumination color may be fused based on the smoothed local object map by taking the pixel value of the smoothed local object map as the weighting coefficient of the target illumination color, taking the result of subtracting that pixel value from 1 as the weighting coefficient of the set color, and performing a weighted summation of the set color and the target illumination color with these coefficients.
Specifically, the manner of fusing the local object illumination map and the target object illumination map may be as follows: and adding the pixel value of the local object illumination map and the corresponding pixel value in the target object illumination map.
Correspondingly, the target illumination map, the background area illumination map and the backlight illumination map are fused, and the target illumination map can be obtained by the following steps: and fusing the updated target object illumination map, the background area illumination map and the backlight illumination map to obtain the target illumination map.
According to the technical solution of the embodiment of the disclosure, current light source information, a normal map of an original image and a target object mask map are obtained, where the light source information includes a light source color, a light source position and an illumination intensity; a target illumination map is generated according to the normal map, the target object mask map and the light source information; and the target illumination map is fused with the original image to obtain a target illumination special effect image. Because the target illumination map is generated according to the normal map, the target object mask map and the light source information, this method for generating a special effect graph can generate a special effect graph with a lighting effect and enrich the displayed content of the image.
Fig. 5 is a schematic structural diagram of a device for generating a special effect map according to an embodiment of the present disclosure, and as shown in fig. 5, the device includes:
an obtaining module 510, configured to obtain current light source information, a normal map of an original image, and a target object mask map; the light source information comprises light source color, light source position and illumination intensity;
a target illumination map generation module 520, configured to generate a target illumination map according to the normal map, the target object mask map, and the light source information;
a target illumination special effect map obtaining module 530, configured to fuse the target illumination map with the original image to obtain a target illumination special effect map.
Optionally, the target illumination map generating module 520 is further configured to:
generating a target object illumination map according to the normal map, the light source information and the target object mask map;
generating a background area illumination map based on the light source information and the target object mask map;
generating a backlight map according to the normal map and the target object mask map;
and fusing the target object illumination map, the background area illumination map and the back illumination map to obtain a target illumination map.
Optionally, the target illumination map generating module 520 is further configured to:
determining a first illumination intensity map according to the normal map and the light source information;
fusing the first illumination intensity map and the light source color to obtain an initial illumination map;
and fusing the initial illumination map and the target object mask map to obtain a target object illumination map.
Optionally, the target illumination map generating module 520 is further configured to:
determining first included angle information between incident light and pixel points according to the normal map and the light source information;
determining attenuation information according to the distance between the light source and the pixel points in the original image;
adjusting the illumination intensity according to the first included angle information and the attenuation information to obtain the target intensity of the pixel point;
and generating a first illumination intensity map based on the target intensity of the pixel points.
Optionally, the target illumination map generating module 520 is further configured to:
acquiring the distance between the light source and a pixel point in the original image;
generating a second illumination intensity map according to the distance and the illumination intensity;
fusing the second illumination intensity image with the target object mask image to obtain an illumination intensity mask image;
and fusing the set color and the illumination color based on the illumination intensity mask image to obtain a background area illumination image.
Optionally, the target illumination map generating module 520 is further configured to:
determining a first backlight intensity map according to the normal map, the target object mask map and the visual angle information;
carrying out reverse processing on the target object mask image to obtain a reverse target object mask image;
generating a second back illumination intensity map according to the target object mask map and the reverse target object mask map;
fusing the first back illumination intensity map and the second back illumination intensity map to obtain a target back illumination intensity map;
and fusing the target backlight intensity map and the light source color to obtain a backlight map.
Optionally, the target illumination map generating module 520 is further configured to:
determining second included angle information between the normal information and the visual angle information of each pixel point in the normal map;
determining the initial back illumination intensity of each pixel point according to the second included angle information;
generating an initial back-lighting intensity map based on the initial back-lighting intensity;
and fusing the initial back illumination intensity graph and the target object mask graph to obtain a first back illumination intensity graph.
Optionally, the target illumination map generating module 520 is further configured to:
respectively carrying out fuzzy processing on the target object mask image and the reverse target object mask image to obtain a first fuzzy mask image and a second fuzzy mask image;
fusing the second fuzzy mask image and the reverse target object mask image to obtain a fused mask image;
and fusing the fused mask image with the first fuzzy mask image to obtain a second backlight intensity image.
Optionally, the method further includes: a target object illumination map update module to:
acquiring a gray scale image and a local object mask image of the original image; wherein the local object is an object formed by a local area of the target object;
fusing the gray level image and the local object mask image to obtain a local object image;
adjusting the color of the light source according to the normal map, the visual angle information and the position of the light source to obtain the illumination color of the target;
fusing the set color and the target illumination color based on the local object image to obtain a local object illumination special effect image;
and fusing the local object illumination pattern and the target object illumination pattern to obtain an updated target object illumination pattern.
Optionally, the target object illumination map updating module is further configured to:
carrying out smoothing treatment on the gray level image to obtain a smooth gray level image;
fusing the smooth gray level image and the local object mask image to obtain a smooth local object image;
based on the local object image, fusing the set color and the target illumination color to obtain a local object illumination special effect image, which comprises the following steps:
and fusing the set color and the target illumination color based on the smooth local object image to obtain a local object illumination special effect image.
Optionally, the target object illumination map updating module is further configured to:
determining a reflected illumination direction according to the normal map and the light source position;
determining the visual angle information and third included angle information of the reflected illumination direction;
and adjusting the color of the light source according to the third included angle information to obtain the target illumination color.
Optionally, the target illumination map generating module 520 is further configured to:
determining the relative position between the light source and the target object according to the first included angle information; wherein the relative positions comprise the light source being positioned in the forward direction of the target object, the light source being positioned in the backward direction of the target object, and the light source being positioned in the lateral direction of the target object;
and fusing the target object illumination map, the background area illumination map and the back illumination map based on the relative position to obtain a target illumination map.
Optionally, the target illumination map generating module 520 is further configured to:
if the light source is located in front of the target object, fusing the target object illumination map and the background area illumination map to obtain the target illumination map;
if the light source is located behind the target object, fusing the background area illumination map and the back illumination map to obtain the target illumination map;
if the light source is located to the side of the target object, performing interpolation fusion on the target object illumination map and the back illumination map to obtain an intermediate illumination map;
and fusing the intermediate illumination map and the background area illumination map to obtain the target illumination map.
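A minimal sketch of this relative-position fusion rule follows; the cosine thresholds separating the forward, lateral and backward cases, and the use of a per-pixel maximum as the fusion operator, are illustrative assumptions only.

```python
import numpy as np

def fuse_by_relative_position(object_map, background_map, back_map, cos_first_angle,
                              side_band=(0.2, -0.2)):
    """Sketch of the relative-position fusion rule.

    cos_first_angle: mean cosine of the first included angle between the incident light
    and the subject; the thresholds in `side_band` that separate front, side and back
    placement are illustrative assumptions.
    """
    hi, lo = side_band
    if cos_first_angle >= hi:                       # light source in front of the subject
        fused = np.maximum(object_map, background_map)
    elif cos_first_angle <= lo:                     # light source behind the subject
        fused = np.maximum(background_map, back_map)
    else:                                           # light source to the side: interpolate
        t = (cos_first_angle - lo) / (hi - lo)
        intermediate = t * object_map + (1.0 - t) * back_map
        fused = np.maximum(intermediate, background_map)
    return np.clip(fused, 0.0, 1.0)
```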
The apparatus for generating a special effect graph provided by the embodiment of the present disclosure can execute the method for generating a special effect graph provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the embodiments of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now to fig. 6, a schematic diagram of an electronic device (e.g., a terminal device or a server) 500 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player) and a vehicle terminal (e.g., a car navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. Various programs and data necessary for the operation of the electronic device 500 are also stored in the RAM 503. The processing device 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The electronic device provided by the embodiment of the present disclosure belongs to the same inventive concept as the method for generating a special effect graph provided by the above embodiments. Technical details not described in detail in this embodiment may be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
The embodiments of the present disclosure provide a computer storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the method for generating a special effect graph provided by the above embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire current light source information, a normal map of an original image and a target object mask map, the light source information comprising a light source color, a light source position and an illumination intensity; generate a target illumination map according to the normal map, the target object mask map and the light source information; and fuse the target illumination map and the original image to obtain a target illumination special effect image.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to technical solutions formed by the particular combination of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. A method for generating a special effect graph is characterized by comprising the following steps:
acquiring current light source information, a normal map of an original image and a target object mask map; the light source information comprises light source color, light source position and illumination intensity;
generating a target illumination map according to the normal map, the target object mask map and the light source information;
and fusing the target illumination map and the original image to obtain a target illumination special effect image.
2. The method of claim 1, wherein generating a target illumination map according to the normal map, the target object mask map, and the light source information comprises:
generating a target object illumination map according to the normal map, the light source information and the target object mask map;
generating a background area illumination map based on the light source information and the target object mask map;
generating a back illumination map according to the normal map and the target object mask map;
and fusing the target object illumination map, the background area illumination map and the back illumination map to obtain a target illumination map.
3. The method of claim 2, wherein generating a target object illumination map according to the normal map, the light source information, and the target object mask map comprises:
determining a first illumination intensity map according to the normal map and the light source information;
fusing the first illumination intensity map and the light source color to obtain an initial illumination map;
and fusing the initial illumination map and the target object mask map to obtain a target object illumination map.
4. The method of claim 3, wherein determining a first illumination intensity map according to the normal map and the light source information comprises:
determining first included angle information between incident light and pixel points according to the normal map and the light source information;
determining attenuation information according to the distance between the light source and the pixel points in the original image;
adjusting the illumination intensity according to the first included angle information and the attenuation information to obtain the target intensity of the pixel point;
and generating a first illumination intensity map based on the target intensity of the pixel points.
5. The method of claim 2, wherein generating a background region illumination map based on the light source information and the target object mask map comprises:
acquiring the distance between the light source and a pixel point in the original image;
generating a second illumination intensity map according to the distance and the illumination intensity;
fusing the second illumination intensity map with the target object mask map to obtain an illumination intensity mask map;
and fusing the set color and the illumination color based on the illumination intensity mask map to obtain a background area illumination map.
6. The method of claim 2, wherein generating a back illumination map according to the normal map and the target object mask map comprises:
determining a first back illumination intensity map according to the normal map, the target object mask map and viewing angle information;
performing reverse processing on the target object mask map to obtain a reverse target object mask map;
generating a second back illumination intensity map according to the target object mask map and the reverse target object mask map;
fusing the first back illumination intensity map and the second back illumination intensity map to obtain a target back illumination intensity map;
and fusing the target back illumination intensity map and the light source color to obtain a back illumination map.
7. The method of claim 6, wherein determining a first back illumination intensity map according to the normal map, the target object mask map and the viewing angle information comprises:
determining second included angle information between the normal information of each pixel point in the normal map and the viewing angle information;
determining an initial back illumination intensity of each pixel point according to the second included angle information;
generating an initial back illumination intensity map based on the initial back illumination intensity;
and fusing the initial back illumination intensity map and the target object mask map to obtain the first back illumination intensity map.
8. The method of claim 6, wherein generating a second back illumination intensity map according to the target object mask map and the reverse target object mask map comprises:
respectively blurring the target object mask map and the reverse target object mask map to obtain a first blurred mask map and a second blurred mask map;
fusing the second blurred mask map and the reverse target object mask map to obtain a fused mask map;
and fusing the fused mask map with the first blurred mask map to obtain the second back illumination intensity map.
9. The method of claim 2, further comprising:
acquiring a gray-scale map of the original image and a local object mask map; wherein the local object is an object formed by a local area of the target object;
fusing the gray-scale map and the local object mask map to obtain a local object map;
adjusting the light source color according to the normal map, viewing angle information and the light source position to obtain a target illumination color;
fusing the set color and the target illumination color based on the local object map to obtain a local object illumination special effect image;
and fusing the local object illumination special effect image and the target object illumination map to obtain an updated target object illumination map.
10. The method of claim 9, wherein fusing the gray-scale map and the local object mask map to obtain a local object map comprises:
smoothing the gray-scale map to obtain a smoothed gray-scale map;
fusing the smoothed gray-scale map and the local object mask map to obtain a smoothed local object map;
correspondingly, fusing the set color and the target illumination color based on the local object map to obtain a local object illumination special effect image comprises:
fusing the set color and the target illumination color based on the smoothed local object map to obtain the local object illumination special effect image.
11. The method of claim 9, wherein adjusting the color of the light source according to the normal map, the viewing angle information and the position of the light source to obtain the target illumination color comprises:
determining a reflected illumination direction according to the normal map and the light source position;
determining third included angle information between the viewing angle information and the reflected illumination direction;
and adjusting the color of the light source according to the third included angle information to obtain the target illumination color.
12. The method of claim 4, wherein fusing the target object illumination map, the background area illumination map and the back illumination map to obtain a target illumination map comprises:
determining a relative position between the light source and the target object according to the first included angle information; wherein the relative position comprises the light source being located in front of the target object, the light source being located behind the target object, or the light source being located to the side of the target object;
and fusing the target object illumination map, the background area illumination map and the back illumination map based on the relative position to obtain a target illumination map.
13. The method of claim 12, wherein fusing the target object illumination map, the background area illumination map and the back illumination map based on the relative position to obtain a target illumination map comprises:
if the light source is located in front of the target object, fusing the target object illumination map and the background area illumination map to obtain the target illumination map;
if the light source is located behind the target object, fusing the background area illumination map and the back illumination map to obtain the target illumination map;
if the light source is located to the side of the target object, performing interpolation fusion on the target object illumination map and the back illumination map to obtain an intermediate illumination map;
and fusing the intermediate illumination map and the background area illumination map to obtain the target illumination map.
14. An apparatus for generating a special effect map, comprising:
the acquisition module is used for acquiring current light source information, a normal map of an original image and a target object mask map; the light source information comprises light source color, light source position and illumination intensity;
the target illumination map generation module is used for generating a target illumination map according to the normal map, the target object mask map and the light source information;
and the target illumination special effect image acquisition module is used for fusing the target illumination map and the original image to obtain a target illumination special effect image.
15. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for generating a special effect graph according to any one of claims 1-13.
16. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used for performing the method for generating a special effect graph according to any one of claims 1-13.
CN202211035725.5A 2022-08-26 2022-08-26 Generation method, device and equipment of special effect graph and storage medium Pending CN115358959A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211035725.5A CN115358959A (en) 2022-08-26 2022-08-26 Generation method, device and equipment of special effect graph and storage medium
PCT/CN2023/114831 WO2024041623A1 (en) 2022-08-26 2023-08-25 Special effect map generation method and apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211035725.5A CN115358959A (en) 2022-08-26 2022-08-26 Generation method, device and equipment of special effect graph and storage medium

Publications (1)

Publication Number Publication Date
CN115358959A true CN115358959A (en) 2022-11-18

Family

ID=84004509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211035725.5A Pending CN115358959A (en) 2022-08-26 2022-08-26 Generation method, device and equipment of special effect graph and storage medium

Country Status (2)

Country Link
CN (1) CN115358959A (en)
WO (1) WO2024041623A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024041623A1 (en) * 2022-08-26 2024-02-29 北京字跳网络技术有限公司 Special effect map generation method and apparatus, device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884637B (en) * 2021-01-29 2023-04-07 北京市商汤科技开发有限公司 Special effect generation method, device, equipment and storage medium
CN113096231B (en) * 2021-03-18 2023-10-31 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN114331823A (en) * 2021-12-29 2022-04-12 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115358959A (en) * 2022-08-26 2022-11-18 北京字跳网络技术有限公司 Generation method, device and equipment of special effect graph and storage medium

Also Published As

Publication number Publication date
WO2024041623A1 (en) 2024-02-29

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN111243049B (en) Face image processing method and device, readable medium and electronic equipment
CN110349107B (en) Image enhancement method, device, electronic equipment and storage medium
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
WO2024041623A1 (en) Special effect map generation method and apparatus, device, and storage medium
CN114842120A (en) Image rendering processing method, device, equipment and medium
CN114331823A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114742856A (en) Video processing method, device, equipment and medium
WO2024067320A1 (en) Virtual object rendering method and apparatus, and device and storage medium
CN113989717A (en) Video image processing method and device, electronic equipment and storage medium
WO2024016923A1 (en) Method and apparatus for generating special effect graph, and device and storage medium
CN112802206A (en) Roaming view generation method, device, equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN111583102A (en) Face image processing method and device, electronic equipment and computer storage medium
CN108256477B (en) Method and device for detecting human face
CN115526796A (en) Image processing method, device, equipment and storage medium
CN115330925A (en) Image rendering method and device, electronic equipment and storage medium
CN110555799A (en) Method and apparatus for processing video
CN115358919A (en) Image processing method, device, equipment and storage medium
CN115272060A (en) Transition special effect diagram generation method, device, equipment and storage medium
CN113256785B (en) Image processing method, apparatus, device and medium
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN115019021A (en) Image processing method, device, equipment and storage medium
CN114693859A (en) Highlight rendering method, highlight rendering device, highlight rendering medium and electronic equipment
CN114693860A (en) Highlight rendering method, highlight rendering device, highlight rendering medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination