CN117455753A - Special effect template generation method, special effect generation device and storage medium

Info

Publication number: CN117455753A (application number CN202311322422.6A); granted as CN117455753B
Authority: CN
Language: Chinese (zh)
Inventor: 王萍萍
Applicant / Assignee: Shuhang Technology Beijing Co., Ltd.
Legal status: Granted, Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a special effect template generation method, a special effect generation method, a device and a storage medium. The special effect template generation method comprises the following steps: acquiring an image to be processed and a main body mask of the image to be processed; acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body recognition special effect for the image to be processed according to the main body mask; acquiring special effect adjustment parameters and mask expansion parameters in response to a parameter control instruction triggered based on a parameter control interface; and generating a main body recognition special effect template according to the special effect adjustment parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating, for the image to be processed, a main body recognition special effect matched with the special effect adjustment parameters according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained by expanding the main body mask according to the mask expansion parameters. A visual parameter control interface can thus be provided, and the main body recognition special effect template is generated from the acquired parameters and the initial special effect template, which improves the efficiency of generating special effect templates.

Description

Special effect template generation method, special effect generation device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a special effect template generating method, a special effect generating method, a device, and a storage medium.
Background
At present, images are used more and more widely, and users have increasingly high requirements for the expressive effect of images; adding various special effects to an image can improve its expressive effect. When adding a special effect to an image, a user needs to use a special effect template provided by image editing software, so the special effect template needs to be designed in advance and integrated into the software for the user to use.
In the related art, if a different image effect is needed, a corresponding special effect template needs to be regenerated. Specifically, when a special effect template is generated, a designer designs a required effect diagram and then communicates with related developers, and the developers develop the special effect template from scratch according to the effect diagram. The development process is complex and requires coordination among multiple people, which is not conducive to improving the efficiency of generating special effect templates.
Disclosure of Invention
The embodiment of the application provides a special effect template generation method, a special effect generation method, a device and a storage medium, which provide a visual parameter control interface for designers to acquire corresponding control parameters, so that a new main body recognition special effect template is generated on the basis of an existing initial special effect template in combination with the related control parameters. A new effect diagram does not need to be designed, communication and participation of multiple parties are not required in the generation process of the special effect template, and the generation efficiency of the special effect template is improved.
A first aspect of an embodiment of the present application provides a special effect template generation method, where the method includes:
acquiring an image to be processed and a main mask of the image to be processed;
acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask;
responding to a parameter control instruction triggered based on a parameter control interface, and acquiring special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed;
generating a main body recognition special effect template according to the special effect adjusting parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjusting parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
A second aspect of an embodiment of the present application provides a special effect generating method, where the method includes:
acquiring a target processing image and a target subject identification special effect template, wherein the target subject identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template;
Obtaining a target expansion mask of the target processing image, which is matched with the mask expansion parameter, through the target subject identification special effect template;
and generating a subject identification special effect matched with the special effect adjusting parameter for the target processing image through the target subject identification special effect template according to the target expansion mask.
A third aspect of an embodiment of the present application provides a special effect template generating apparatus, where the apparatus includes:
the image acquisition module is used for acquiring an image to be processed and a main mask of the image to be processed;
the initial special effect template acquisition module is used for acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask;
the parameter acquisition module is used for responding to a parameter control instruction triggered based on a parameter control interface to acquire special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed;
and the special effect template generation module is used for generating a main body recognition special effect template according to the special effect adjustment parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjustment parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
In some alternative embodiments, the special effect adjustment parameters include a main body tracing parameter and a background filling parameter;
the parameter control interface comprises a main body tracing parameter adjustment control, a background filling parameter adjustment control and a mask expansion parameter adjustment control;
the parameter acquisition module is specifically configured to: acquiring the main body tracing parameters in response to a main body tracing parameter control instruction triggered based on the main body tracing parameter adjustment control, wherein the main body tracing parameters are used for controlling the display effect of main body tracing lines in the main body recognition special effect;
acquiring the background filling parameters in response to a background filling parameter control instruction triggered based on the background filling parameter adjustment control, wherein the background filling parameters are used for controlling the filling display effect of an image background area in the main body recognition special effect, and the image background area comprises an area except the expansion mask in the image to be processed;
and responding to a mask expansion parameter control command triggered by the mask expansion parameter adjustment control, and acquiring the mask expansion parameter.
In some alternative embodiments, the mask expansion parameters include mask texture and range control parameters;
The special effect template generation module is specifically used for: acquiring the image size of the image to be processed, and determining a mask expansion range corresponding to the image to be processed according to the image size and the range control parameter;
obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask texture and the main mask;
and generating a main body recognition special effect template based on the expansion mask according to the special effect adjustment parameter, the mask expansion parameter and the initial special effect template.
In some optional embodiments, the special effect template generating module is further specifically configured to: determining the center point of the mask texture;
determining mask adjacent points corresponding to the center points in the mask textures according to preset adjacent point control parameters;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
and obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask expansion pixel value and the main mask.
A fourth aspect of the present application provides an apparatus for generating a special effect, where the apparatus includes:
The data acquisition module is used for acquiring a target processing image and a target main body identification special effect template, wherein the target main body identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template;
the target expansion mask acquisition module is used for acquiring a target expansion mask matched with the mask expansion parameter of the target processing image through the target main body recognition special effect template;
and the special effect generation module is used for generating a main body recognition special effect matched with the special effect adjustment parameter for the target processing image through the target main body recognition special effect template according to the target expansion mask.
In some alternative embodiments, the mask expansion parameters include mask texture and range control parameters;
the target inflation mask acquiring module is specifically configured to: acquiring a main mask of the target processing image;
acquiring the image size of the target processing image, and determining a mask expansion range corresponding to the target processing image according to the image size and the range control parameter;
and obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask texture and the main mask.
In some optional embodiments, the target inflation mask acquiring module is further specifically configured to:
acquiring a center point of the mask texture;
determining mask adjacent points corresponding to the center points in the mask textures according to preset adjacent point control parameters;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
and obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask expansion pixel value and the main mask.
In some alternative embodiments, the special effect adjustment parameters include a main body tracing parameter and a background filling parameter;
the special effect generation module is specifically used for: generating a tracing line for the target processing image along the edge of the target expansion mask according to the main body tracing parameter, wherein the main body tracing parameter is used for controlling at least one display effect of color, thickness, line texture and line transparency of the tracing line;
and filling an image background area into the target processing image according to the background filling parameter, wherein the image background area comprises an area except the target expansion mask in the target processing image, and the background filling parameter is used for controlling at least one display effect of filling color, filling texture and filling transparency of the image background area.
A fifth aspect of embodiments of the present application provides an electronic device, including a memory and a processor, where the memory stores a plurality of instructions; the processor loads instructions from the memory to execute steps in the special effect template generating method provided in the first aspect of the embodiment of the application or execute steps in the special effect generating method provided in the second aspect of the embodiment of the application.
A sixth aspect of the embodiments of the present application provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform steps in the special effects template generation method provided in the first aspect of the embodiments of the present application or perform steps in the special effects generation method provided in the second aspect of the embodiments of the present application.
By adopting the scheme of the embodiment of the application, the image to be processed and the main mask of the image to be processed can be obtained; acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask; responding to a parameter control instruction triggered based on a parameter control interface, and acquiring special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed; generating a main body recognition special effect template according to the special effect adjusting parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjusting parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
Therefore, a visual parameter control interface can be provided for a designer, and the special effect adjustment parameters and the mask expansion parameters for controlling the main body recognition special effect can be obtained through the parameter control interface. On the basis of the existing initial special effect template, the special effect adjustment parameters and the mask expansion parameters are combined to generate a new main body recognition special effect template, which generates, for the expanded expansion mask, a special effect matched with the special effect adjustment parameters. A new effect diagram does not need to be designed, communication and participation of multiple parties are not required in the generation process of the special effect template, and the generation efficiency of the special effect template is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of the data interaction timing sequence during generation of a special effect template according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a special effect template generating method according to an embodiment of the present application;
Fig. 3 is an interactive schematic diagram of a special effect template generation process provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of a parameter control interface provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of a parameter control interface provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of the data interaction timing sequence during special effect generation according to an embodiment of the present application;
Fig. 7 is a schematic flow chart of a special effect generating method according to an embodiment of the present application;
Fig. 8 is an interactive schematic diagram of a special effect generation process provided in an embodiment of the present application;
Fig. 9 is a schematic diagram showing a comparison of the expansion ranges of a main mask according to an embodiment of the present application;
Fig. 10 is a schematic diagram showing a comparison of the distance between a tracing line and a person main body before and after mask expansion according to an embodiment of the present application;
Fig. 11 is a schematic diagram showing a comparison of the filling transparency of a main mask according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a comparison between background texture filling and solid color filling provided in an embodiment of the present application;
Fig. 13 is a schematic diagram of a comparison between solid color filling and texture filling of a tracing line provided in an embodiment of the present application;
Fig. 14 is a block diagram of a special effect template generating apparatus according to an embodiment of the present application;
Fig. 15 is a block diagram of a special effect generating apparatus according to an embodiment of the present application;
Fig. 16 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application provides a special effect template generation method, a special effect generation method, a device and a storage medium. Specifically, the special effect template generation method and/or the special effect generation method in the embodiment of the application may be executed by a computer device, where the computer device may be a device such as a terminal or a server. The terminal can be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a personal computer (PC, Personal Computer), a personal digital assistant (PDA, Personal Digital Assistant) and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
An embodiment of the present application provides a special effect template generation method. Fig. 1 is a schematic diagram of the data interaction timing sequence during generation of a special effect template provided in the embodiment of the present application, where the design client is a client corresponding to a designer who designs special effect templates, and the design server is a server for generating special effect templates. In the embodiment of the present application, data interaction is performed between the design client and the design server according to steps S1 to S5 shown in fig. 1.
Specifically, the design server obtains the image to be processed, its main mask and the corresponding initial special effect template. One or more initial special effect templates can be preset in the design server, and when a plurality of initial special effect templates are provided in the design server, a designer can select a required initial special effect template through the design client. In this embodiment, the case of a single initial special effect template in the design server is taken as an example for specific explanation, but this is not a limitation.
The design client displays a parameter control interface for a designer to input corresponding special effect adjustment parameters and mask expansion parameters. Further, the design client uploads the parameters to the design server, and the design server generates a new main body recognition special effect template according to the parameters and combining with the initial special effect template, so that the generation of the new special effect template is completed.
Fig. 2 is a schematic flow chart of a special effect template generating method according to an embodiment of the present application. The specific flow of the special effect template generation method can be as follows:
201. and acquiring the image to be processed and a main mask of the image to be processed.
The image to be processed is an image used for representing the display effect of the main body recognition special effect corresponding to the corresponding special effect template in the special effect template generation process. The image to be processed contains a character body, and the character body is a target object in the image to be processed.
In the special effect template generating process, there is no specific requirement on the image content of the image to be processed; in different special effect template generating processes, the same image to be processed or different images to be processed can be used, which is not specifically limited.
In one application scenario, the image to be processed may be a preset image including a person body. In another application scenario, the image to be processed may also be uploaded or specified in real time by a designer, which is not specifically limited herein.
The main mask is a mask corresponding to a human main body in the image to be processed. In an application scenario, if the image to be processed is a preset image, the main mask may also be a character main mask of the preset image to be processed, so as to reduce the data calculation amount required in the special effect template generating process.
In another application scenario, the main mask may also be a mask obtained by performing main recognition on the image to be processed according to a main recognition algorithm, so as to obtain a more accurate main mask and improve the accuracy of the processing process.
In some embodiments of the present application, a subject recognition algorithm is obtained and image processing is performed according to the subject recognition algorithm to obtain a corresponding subject mask. The subject recognition algorithm is used for subject recognition of the image to obtain a subject mask of the image. The specific subject identification algorithm may be set and adjusted according to actual requirements, and is not specifically limited herein.
Specifically, for the image to be processed, person body recognition, animal body recognition, or body recognition (for example, vehicle body recognition) for some specific object may be performed. In the embodiment of the present application, taking person body recognition as an example to make a specific explanation, the obtained body recognition algorithm is a person body recognition algorithm.
Furthermore, the region where the person main body in the image to be processed is located can be identified by using a person main body identification algorithm, and a main body mask is arranged on the region, so that a subsequent main body identification special effect can be conveniently added.
When the main mask is set for the region where the human main body is located, relevant parameters of the main mask (for example, parameters for controlling display effects such as color, texture, transparency, etc. of the main mask) may be acquired or adjusted according to a parameter control interface, or may be integrated in the main body recognition algorithm in advance, which is not limited herein.
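For illustration only, the following minimal Python sketch shows how such a main body mask might be obtained; segment_person is a hypothetical stand-in for the person main body recognition algorithm mentioned above and is not part of the patent.

```python
import numpy as np

def get_subject_mask(image: np.ndarray, segment_person) -> np.ndarray:
    """Return a single-channel main body mask for `image`.

    `segment_person` is a hypothetical callable standing in for the person
    main body recognition algorithm; it is assumed to return per-pixel
    probabilities in [0, 1] for the region where the person main body is.
    """
    probs = segment_person(image)             # H x W float array in [0, 1]
    mask = (probs > 0.5).astype(np.float32)   # binarize into the main body mask
    return mask
```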
202. And obtaining an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask.
The initial special effect template is a special effect template which is generated in advance, and when the initial special effect template is used, a main body identification special effect is generated for the image to be processed directly according to a main body mask corresponding to the image to be processed.
Specifically, the effect of the main body recognition special effect generated by the initial special effect template is fixed in advance. In the embodiment of the application, a main body recognition special effect that is adjusted or improved on the basis of the main body recognition special effect corresponding to the initial special effect template needs to be obtained, for example, by modifying the line thickness of the edge-tracing special effect or the distance between the tracing line and the person main body, so that the initial special effect template is adjusted based on the corresponding parameters to generate a new main body recognition special effect template.
It should be noted that, in the embodiment of the present application, the initial special effect template is adjusted to obtain the main body recognition special effect template, so the effect diagram does not need to be redesigned, which is beneficial to improving the processing efficiency. Meanwhile, because the main body recognition special effect template is obtained by adjusting the initial special effect template, the type of main body recognition special effect that the main body recognition special effect template can generate is the same as that of the initial special effect template, and only the specific effect details differ. For example, if the initial special effect template can generate an edge-tracing special effect, the main body recognition special effect template can also generate an edge-tracing special effect, and the details of the two edge-tracing special effects are different, for example, in color, thickness, and the distance from the tracing line to the person main body.
203. And responding to a parameter control instruction triggered based on a parameter control interface, and acquiring a special effect adjustment parameter and a mask expansion parameter, wherein the special effect adjustment parameter is used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed.
The parameter control interface is a visual parameter control interface provided for special effect template designers, and the visual parameter control interface can facilitate the designers to control the required parameter values according to actual requirements, thereby controlling the display effect of the required special effect. The parameter control instructions are input by a designer based on a parameter control interface.
The special effect adjustment parameter is a parameter for controlling the display effect of the special effect. In the embodiment of the application, the special effect to be generated is described by taking the main body recognition special effect as an example, that is, the corresponding special effect performs main body recognition on the image to be processed to determine the main body object or main body person in the image to be processed, and then adds image special effects related to the main body (such as main body edge tracing, background area filling and other main body recognition special effects) according to the recognized main body.
The mask expansion parameters are used for controlling the expansion process of the main mask of the image to be processed, and the expansion mask corresponding to the image to be processed is obtained. In the embodiment of the application, when the newly generated main body recognition special effect template is used, main body recognition special effects are generated according to the expansion mask corresponding to the image to be processed, so that mask expansion parameters can be used for adjusting the distance between the line drawing and the character main body.
In some embodiments of the present application, the special effect adjustment parameters include a main body tracing parameter and a background filling parameter;
the parameter control interface comprises a main body tracing parameter adjustment control, a background filling parameter adjustment control and a mask expansion parameter adjustment control;
The obtaining the special effect adjusting parameter and the mask expansion parameter in response to the parameter control instruction triggered based on the parameter control interface comprises the following steps:
acquiring the main body tracing parameters in response to a main body tracing parameter control instruction triggered based on the main body tracing parameter adjustment control, wherein the main body tracing parameters are used for controlling the display effect of main body tracing lines in the main body recognition special effect;
acquiring the background filling parameters in response to a background filling parameter control instruction triggered based on the background filling parameter adjustment control, wherein the background filling parameters are used for controlling the filling display effect of an image background area in the main body recognition special effect, and the image background area comprises an area except the expansion mask in the image to be processed;
and responding to a mask expansion parameter control command triggered by the mask expansion parameter adjustment control, and acquiring the mask expansion parameter.
In the embodiment of the present application, the description is given by taking the special effect adjustment parameters including the main body tracing parameter and the background filling parameter as an example; in the actual use process, the special effect adjustment parameters may include only one of the two parameters, or may include other parameters, which is not specifically limited herein.
The main body tracing parameter may include a plurality of parameters for controlling the specific tracing display effect, for example, specific parameters for controlling the tracing line color, tracing line width, tracing line texture, and the like, which are not particularly limited herein.
Likewise, the background filling parameters may also include a plurality of parameters for controlling a specific background filling display effect, for example, specific parameters for controlling a display effect such as a background filling color, a background filling transparency, a background filling texture, and the like, which are not particularly limited herein.
Therefore, a visual parameter control interface is provided for the designer of the special effect template, and the designer can flexibly adjust the special effect adjustment parameters through the parameter control interface, so that the required special effect template is generated without the participation of other developers, which improves the generation efficiency of the special effect template and reduces the complexity of generating it.
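To make the parameter flow concrete, the following sketch shows the kind of parameter bundle the parameter control interface might hand to the template generation step; all field names and default values are illustrative assumptions rather than the patent's actual data structure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TracingParams:                 # main body tracing parameters (illustrative)
    color: tuple = (1.0, 1.0, 1.0)   # tracing line color (RGB)
    width_px: int = 6                # tracing line width in pixels
    texture_path: Optional[str] = None
    opacity: float = 1.0

@dataclass
class BackgroundFillParams:          # background filling parameters (illustrative)
    color: tuple = (0.0, 0.0, 0.0)
    texture_path: Optional[str] = None
    opacity: float = 0.6

@dataclass
class MaskExpansionParams:           # mask expansion parameters (illustrative)
    radius: float = 20.0             # range control parameter (a radius-style value)
    mask_texture_path: Optional[str] = None

@dataclass
class EffectParams:                  # everything collected from the control interface
    tracing: TracingParams = field(default_factory=TracingParams)
    background: BackgroundFillParams = field(default_factory=BackgroundFillParams)
    expansion: MaskExpansionParams = field(default_factory=MaskExpansionParams)
```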
204. Generating a main body recognition special effect template according to the special effect adjusting parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjusting parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
Specifically, the main mask is subjected to regional expansion control according to the mask expansion parameters so as to adjust the area range covered by the mask, thereby obtaining a better display effect of the main body recognition special effect. For example, the area range of the main mask can be adjusted according to the mask expansion parameters to obtain the expansion mask, so that the distance between the tracing line and the person main body can be adjusted, or the distance between the filled background and the person main body can be adjusted when the background is filled, which meets the design requirements of the designer and allows the display effect to be flexibly adjusted.
In some embodiments of the present application, the mask expansion parameters include mask texture and range control parameters;
generating a main body recognition special effect template according to the special effect adjusting parameter, the mask expansion parameter and the initial special effect template, wherein the main body recognition special effect template comprises the following steps:
acquiring the image size of the image to be processed, and determining a mask expansion range corresponding to the image to be processed according to the image size and the range control parameter;
obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask texture and the main mask;
and generating a main body recognition special effect template based on the expansion mask according to the special effect adjustment parameter, the mask expansion parameter and the initial special effect template.
The mask texture is used for controlling the texture of the generated expansion mask, and the range control parameter is used for controlling the expansion range of the mask.
In one application scenario, the above-mentioned range control parameter may be directly an expansion step size in the expansion control. In another application scenario, the expansion step length can be obtained through calculation according to the range control parameter and the image size, so that the mask expansion range of the current image to be processed is determined according to the expansion step length, and a better processing effect is obtained for different images to be processed.
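A minimal sketch of the second scenario, assuming (consistently with the shader description later in this document) that the range control parameter is a radius value that is normalized by the image dimensions:

```python
def expansion_steps(radius: float, image_width: int, image_height: int):
    """Derive per-axis expansion steps from a radius-style range control
    parameter, expressed in normalized [0, 1] texture coordinates, so that
    the same parameter value yields a comparable mask expansion range on
    images of different sizes."""
    step_x = radius / image_width    # horizontal expansion step
    step_y = radius / image_height   # vertical expansion step
    return step_x, step_y
```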
In some embodiments of the present application, the obtaining the expansion mask corresponding to the image to be processed according to the mask expansion range, the mask texture, and the main mask includes:
determining the center point of the mask texture;
determining mask adjacent points corresponding to the center points in the mask textures according to preset adjacent point control parameters;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
and obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask expansion pixel value and the main mask.
Therefore, the mask expansion pixel value corresponding to the expansion region during mask expansion can be determined according to the mask texture, so that a better mask expansion effect is obtained.
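A minimal CPU sketch of this neighbour-based computation, assuming a 4-neighbourhood and that the mask expansion pixel value is taken as the maximum of the centre point and its mask adjacent points (the exact combination rule is not spelled out in the text):

```python
import numpy as np

def dilate_mask(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Grow a mask by repeatedly replacing each pixel with the maximum of
    itself and its 4 immediate neighbours; `iterations` plays the role of
    the mask expansion range."""
    out = mask.astype(np.float32)
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="edge")
        out = np.maximum.reduce([
            padded[1:-1, 1:-1],   # centre point
            padded[:-2, 1:-1],    # adjacent point above
            padded[2:, 1:-1],     # adjacent point below
            padded[1:-1, :-2],    # adjacent point to the left
            padded[1:-1, 2:],     # adjacent point to the right
        ])
    return out
```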
Specifically, the main body recognition special effect template is matched with the special effect adjustment parameters and the mask expansion parameters, that is, when the main body recognition special effect template is used for processing an image, the generated main body recognition special effect is based on the special effect adjustment parameters and the mask expansion parameters set in the template. For example, when the main body recognition special effect template is used, the color of the generated main body tracing line is the same as the color specified in the main body tracing parameters, and the line is generated along the edge of the expansion mask determined according to the mask expansion parameters.
Fig. 3 is an interactive schematic diagram of a special effect template generating process provided in the embodiment of the present application, as shown in fig. 3, when generating a special effect template, a designer inputs a parameter control instruction, so as to trigger a device for generating a special effect template to determine parameters according to the instruction, and at the same time, the device obtains an image to be processed and a main mask thereof, and an initial special effect template, and generates a new main identification special effect template based on the data. It should be noted that, when the image to be processed and the initial special effect template are obtained, the designer may also select the required image to be processed and initial special effect template according to the actual requirement, which is not limited herein.
In the embodiment of the present application, specific image processing steps are integrated based on the acquired parameters and the initial special effect template, so as to generate the main body recognition special effect template. The parameters used in the image processing steps are the special effect adjustment parameters and the mask expansion parameters, and the image processing steps may refer to steps in the related art, which are not specifically limited herein. For example, when performing mask expansion control, the image processing step is implemented based on a preset mask expansion algorithm; when tracing the main body, the image processing step applied to the main body mask area is implemented based on a preset edge detection algorithm, and the line is drawn along the detected edge; when performing background area filling, the image processing step includes adding a corresponding color or texture to the image background area. It should be noted that a specific main body recognition algorithm may also be integrated into the special effect template, which is not limited herein.
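The sketch below illustrates one possible shape of such a generated template: the processing steps inherited from the initial template bound to the newly acquired parameters. The class layout is an assumption for illustration, not the patent's actual template format.

```python
class SubjectEffectTemplate:
    """Illustrative main body recognition special effect template: the image
    processing steps reused from the initial template, bound to the special
    effect adjustment and mask expansion parameters collected from the
    parameter control interface."""

    def __init__(self, params, pipeline):
        self.params = params        # e.g. an EffectParams bundle as sketched earlier
        self.pipeline = pipeline    # e.g. [mask_expansion_step, tracing_step, background_fill_step]

    def apply(self, image, subject_mask):
        """Run the inherited processing steps with the bound parameters."""
        result = image
        for step in self.pipeline:  # each step: (image, mask, params) -> image
            result = step(result, subject_mask, self.params)
        return result
```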
In the embodiment of the application, an image to be processed and a main mask of the image to be processed are obtained; an initial special effect template is obtained, wherein the initial special effect template is used for generating a main body recognition special effect for the image to be processed according to the main mask; in response to a parameter control instruction triggered based on a parameter control interface, special effect adjustment parameters and mask expansion parameters are obtained, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed; and a main body recognition special effect template is generated according to the special effect adjustment parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating, for the image to be processed, a main body recognition special effect matched with the special effect adjustment parameters according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main mask is subjected to expansion processing according to the mask expansion parameters.
Therefore, a visual parameter control interface can be provided for a designer, and the special effect adjustment parameters and the mask expansion parameters for controlling the main body recognition special effect can be obtained through the parameter control interface. On the basis of the existing initial special effect template, the special effect adjustment parameters and the mask expansion parameters are combined to generate a new main body recognition special effect template, which generates, for the expanded expansion mask, a special effect matched with the special effect adjustment parameters. A new effect diagram does not need to be designed, communication and participation of multiple parties are not required in the generation process of the special effect template, and the generation efficiency of the special effect template is improved.
Based on a main body recognition algorithm, a general template material package of a main body related special effect is completed by combining the special effect adjustment parameter and the mask expansion parameter, the template material package is used for rapidly achieving main body segmentation, background filling, main body edge drawing and other effects, the parameters can be flexibly adjusted, and the generation efficiency of the special effect template is improved. Specifically, the generated special effect template can be used for carrying out processing such as main body segmentation, main body mask expansion, image rendering and the like on the image to be processed, so that main body identification special effects are added to the image to be processed.
Specifically, in the embodiment of the present application, when image processing is performed based on the special effect template, the processing is performed in the processing stage of the graphics processing unit (GPU), that is, it is implemented in a shader.
It should be noted that, for each parameter, a corresponding visualized parameter adjustment control may be set in the parameter control interface, so as to facilitate parameter adjustment by a designer.
In the embodiment of the application, the related parameters including the main body tracing parameter, the background filling parameter and the mask expansion parameter are taken as examples for specific explanation. Fig. 4 and fig. 5 are schematic diagrams of parameter control interfaces provided in the embodiments of the present application. As shown in fig. 4 and fig. 5, parameters such as a mask (mask) distance adjustment parameter, a background filling color parameter, a background filling texture parameter, a background filling opacity parameter, a tracing line color parameter, a tracing line texture parameter, a tracing line opacity parameter and a tracing line width parameter may be set, and each parameter can be adjusted through its corresponding control, which is convenient for the designer to use. The distance adjustment parameter can control the distance between the mask edge and the person main body, so that the distance between the tracing line and the person main body can be adjusted.
In the special effect generation method provided in the embodiment of the present application, image rendering is performed based on the main body recognition special effect template generated by the special effect template generation method described above, so as to generate a corresponding main body recognition special effect.
Fig. 6 is a schematic diagram of data interaction timing at the time of special effect generation, where a front-end user client is a client used by a front-end user, for example, a mobile device used by a user who needs to perform image processing, and an image processing server is a server for performing special effect generation on an image. The image processing server in fig. 6 may be the same server as the design server in fig. 1, or may be a different server, which is not limited herein. In this embodiment, data interaction is performed between the front-end user client and the image processing server according to steps A1 to A7 shown in fig. 6.
Specifically, the front-end user selects the target processing image to be processed and the target subject identification special effect template to be used through the front-end user client, and the front-end user client uploads the data to the image processing server. After the image processing server acquires the data, acquiring a target expansion mask of a target processing image according to a target main body recognition special effect template, further generating a main body recognition special effect for the target processing image, and transmitting the processed image with the main body recognition special effect to a front-end user client. The front-end user client displays the processed image with the subject identification effect.
In fig. 6, an example of an image processing procedure performed at the image processing server is described. In the actual use process, the main body recognition special effect template can be configured in the front-end user client in advance, so that the whole special effect generation process can be executed in the front-end user client without data interaction with the image processing server, and the special effect generation process is not particularly limited.
In the embodiment of the present application, the image processing procedure is specifically described as an example performed in the front-end user client. Referring to fig. 7, fig. 7 is a flow chart of a special effect generating method according to an embodiment of the present application. The specific flow of the special effect generation method can be as follows:
701. and acquiring a target processing image and a target subject identification special effect template, wherein the target subject identification special effect template is generated based on the special effect adjustment parameter, the mask expansion parameter and the initial special effect template.
The target processing image is an image which needs to be subjected to image processing to add a corresponding subject recognition special effect. The target subject recognition special effect template is a subject recognition special effect template for performing image processing on a target processing image. In an application scenario, a client used by a front-end user is preconfigured with a plurality of main body recognition special effect templates generated by the special effect template generation method according to the first aspect of the embodiment of the application, and the front-end user selects one of the main body recognition special effect templates as a target main body recognition special effect template according to actual requirements.
In the embodiment of the application, a special effect generation method is provided for a user, and the user can add a corresponding main body identification special effect for a target processing image based on the special effect generation method, so that the display effect of the image is enriched.
702. And acquiring a target expansion mask of the target processing image, which is matched with the mask expansion parameter, through the target subject identification special effect template.
In an application scene, a preset main body recognition algorithm is called to carry out main body recognition, and mask expansion processing is carried out according to the mask expansion parameters to obtain a corresponding target expansion mask.
In another application scenario, a main body recognition algorithm is integrated in the target main body special effect recognition template, main body recognition is carried out on the image to be processed according to the main body recognition algorithm integrated in the target main body special effect recognition template, and the area where the main body mask in the image to be processed is located is determined, so that mask expansion processing is carried out according to mask expansion parameters.
It should be noted that, the target main body special effect recognition template may be integrated with adjustment parameters for the main body mask in advance, for example, the adjustment parameters may be used to adjust colors, transparency, textures, and the like corresponding to the main body mask, which is not limited herein.
In the embodiment of the present application, the above-mentioned subject identification algorithm is a human subject identification algorithm, but is not limited to this specific one.
In some embodiments of the present application, the mask expansion parameters include mask texture and range control parameters;
the target expansion mask matching the mask expansion parameter for obtaining the target processing image through the target subject recognition special effect template includes:
acquiring a main mask of the target processing image;
acquiring the image size of the target processing image, and determining a mask expansion range corresponding to the target processing image according to the image size and the range control parameter;
and obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask texture and the main mask.
Specifically, the obtaining the target expansion mask corresponding to the target processing image according to the mask expansion range, the mask texture and the main mask includes:
acquiring a center point of the mask texture;
determining mask adjacent points corresponding to the center points in the mask textures according to preset adjacent point control parameters;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
And obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask expansion pixel value and the main mask.
The above-mentioned adjacent point control parameter may be set and adjusted according to actual requirements. For example, in the embodiment of the present application, the adjacent point control parameter may be set such that the number of adjacent points is 4 and the pixel interval between each adjacent point and the mask center point along the horizontal and vertical directions is 0 (i.e., immediately adjacent), so that 4 mask adjacent points near the mask center point are obtained (in the main mask or in the mask texture image).
In one application scenario, the expansion range of the mask is an expansion step, and for each pixel point on the main mask, expansion processing is performed according to the expansion step along the normal direction.
In another application scenario, the expansion range of the mask may be set to an expansion radius, and according to the expansion radius and the image size, an expansion step in a horizontal direction and an expansion step in a vertical direction are calculated respectively, and then an expansion step in a normal direction is calculated according to the two expansion steps, so that expansion processing in the normal direction is realized.
703. And generating a subject identification special effect matched with the special effect adjusting parameter for the target processing image through the target subject identification special effect template according to the target expansion mask.
Specifically, according to the target expansion mask, a main body recognition special effect is generated through the target main body recognition special effect template. When the target main body recognition special effect template is used for image processing, the target processing image is processed according to the special effect adjustment parameters and the processing mode corresponding to the initial special effect template, and the corresponding main body recognition special effect is generated.
It should be noted that, for the specific process of performing image processing with the target subject identification special effect template, reference may be made to the corresponding description in the embodiment of the special effect template generation method. Image rendering and other processing are performed on the image to be processed based on the image processing steps preset in the template, according to the main body mask and the special effect adjustment parameters, so as to add a main body recognition special effect matched with the special effect adjustment parameters, such as a tracing line special effect or a background area filling special effect.
In some embodiments of the present application, the special effect adjustment parameters include a main body tracing parameter and a background filling parameter;
the generating, according to the target expansion mask, a subject recognition effect matching the effect adjustment parameter for the target processing image through the target subject recognition effect template includes:
generating a tracing line for the target processing image along the edge of the target expansion mask according to the main body tracing parameter, wherein the main body tracing parameter is used for controlling at least one display effect of color, thickness, line texture and line transparency of the tracing line;
And filling an image background area into the target processing image according to the background filling parameter, wherein the image background area comprises an area except the target expansion mask in the target processing image, and the background filling parameter is used for controlling at least one display effect of filling color, filling texture and filling transparency of the image background area.
It should be noted that the main body stroking parameter may also be used to control other display effects of the stroking line, and the background filling parameter may also be used to control other display effects of the image background area, which is not limited herein.
The parameters corresponding to the line texture and the filling texture may indicate the storage location of a corresponding texture image, from which the corresponding texture is obtained; this is not limited herein.
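As an illustration, such stroking and filling parameters could be exposed to the fragment shader as uniforms. The following GLSL skeleton is only a minimal sketch; the uniform names are assumptions for illustration rather than the template's actual interface, and the stroking and filling logic itself is sketched in later sections.

```glsl
precision mediump float;

// Assumed uniform names for illustration only; the template's real parameter interface is not
// specified in this description.
uniform sampler2D srcImage;       // image being processed
uniform vec4      outlineColor;   // stroke line color (rgb) and line transparency (a)
uniform float     outlineWidth;   // stroke line thickness, used to derive the sampling step
uniform sampler2D lineTexture;    // optional line texture, referenced via a storage-location parameter
uniform vec4      bgFillColor;    // background fill color (rgb) and fill transparency (a)
uniform sampler2D bgFillTexture;  // optional background fill texture
varying vec2      uv;             // texture coordinates in the range [0.0, 1.0]

void main() {
    // Placeholder pass-through; the stroking and filling logic is sketched in the sections below.
    gl_FragColor = texture2D(srcImage, uv);
}
```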
Fig. 8 is an interactive schematic diagram of a special effect generating process provided in the embodiment of the present application, where, as shown in fig. 8, when special effect generation is performed, a front-end user selects a target processing image to be processed, and a target subject identification special effect template to be used. And then, acquiring a target expansion mask of the target processing image according to the selected target subject identification special effect template, and generating a required subject identification special effect according to the target subject identification special effect template and the target expansion mask.
It should be noted that, in the embodiment of the present application, the specific process of processing the target processing image according to the target subject identification special effect template is implemented according to the image processing steps preset in that template. The processing procedure is further described below with reference to a specific application scenario.
In the embodiment of the present application, the description takes as an example the case where the special effect adjustment parameters include a subject stroking parameter and a background filling parameter, but the embodiment is not limited thereto.
Specifically, person subject recognition is performed on the target processing image according to a subject recognition algorithm, a subject mask (mask) corresponding to the person subject is obtained, and mask expansion processing is performed on the subject mask. It should be noted that the mask is expanded along the normal direction; after the mask is expanded, the stroke line drawn along its edge is farther away from the human body, so the distance between the stroke line and the human body can be controlled by adjusting the amount of expansion.
Specifically, in the embodiment of the present application, mask expansion is performed based on a preset expansion algorithm through the following steps. Mask expansion parameters are obtained, including a texture, uv coordinates and a radius value, wherein the texture is a 2D picture that can be used as the filling texture of the mask region, the uv coordinates are the texture coordinates used by the OpenGL fragment shader, in the range [0.0, 1.0], and the radius is a preset float value. From the radius value, the expansion step in the vertical direction (stepSize) and the expansion step in the horizontal direction are calculated: the radius value is divided by the image height and the image width to obtain the vertical expansion step and the horizontal expansion step, respectively. Dividing the radius value by the height (or width) of the image to be processed ensures a consistent dilation effect across target processing images and textures of different sizes.
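The step-size calculation above can be expressed as a small GLSL helper; this is a minimal sketch, and the uniform names are assumptions for illustration.

```glsl
// Assumed uniform names (radius, imageWidth, imageHeight) for illustration.
uniform float radius;       // preset float radius value from the mask expansion parameters
uniform float imageWidth;   // width of the target processing image / texture
uniform float imageHeight;  // height of the target processing image / texture

// Dividing the radius by the image dimensions yields steps in uv space, so the same radius
// produces a consistent dilation effect on images and textures of different sizes.
vec2 expansionStep() {
    float stepSizeX = radius / imageWidth;   // expansion step in the horizontal direction
    float stepSizeY = radius / imageHeight;  // expansion step in the vertical direction (stepSize)
    return vec2(stepSizeX, stepSizeY);
}
```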
Specifically, for each pixel point at the edge of the original main mask, determining an expansion step length along the normal direction according to the expansion step length along the vertical direction and the expansion step length along the horizontal direction (for example, projecting the expansion step lengths along the two directions to the normal direction and adding the expansion step lengths), thereby determining an expanded pixel point corresponding to the pixel point, and connecting all the expanded pixel points to obtain the region where the expanded main mask is located.
A maximum Alpha value (maxA) is created with an initial value of 0, and the pixel color at the center position is acquired from the texture corresponding to the mask. The Alpha value represents image transparency and can represent the value of a pixel of the mask; that is, the value of the texture center point is determined from the texture used for filling the mask area, and this value is taken as the pixel value of the mask center point of the main mask.
The pixel colors of the adjacent points in the upper, lower, left and right directions around the center position are then acquired; that is, four points adjacent to the texture center point are determined from the texture used for filling the mask area as mask adjacent points, and their pixel values are acquired. The average of the pixel values of the four adjacent points is calculated in order to obtain a smoother dilation edge.
The average of the pixel values of the four points is compared with the pixel value of the mask center point, and the maximum Alpha value is returned as the final result, i.e., as the fill pixel value of the mask area. In this way, the opaque region of the mask texture is expanded: the average value of the pixels around a given point is calculated, and the largest Alpha value is selected to achieve the expansion effect.
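Put together, the per-texel dilation described above might look like the following GLSL function; this is a sketch under the assumption that the mask texture and the precomputed expansion steps are passed in as uniforms.

```glsl
// Assumed names: maskTexture is the texture used to fill the mask area, stepSize holds the
// horizontal and vertical expansion steps computed from the radius.
uniform sampler2D maskTexture;
uniform vec2      stepSize;

float dilatedMaskValue(vec2 uv) {
    float maxA   = 0.0;                              // maximum Alpha value, initialised to 0
    float center = texture2D(maskTexture, uv).a;     // pixel value at the center position

    // Four adjacent points: above, below, left of and right of the center position.
    float up    = texture2D(maskTexture, uv + vec2(0.0,  stepSize.y)).a;
    float down  = texture2D(maskTexture, uv - vec2(0.0,  stepSize.y)).a;
    float left  = texture2D(maskTexture, uv - vec2(stepSize.x, 0.0)).a;
    float right = texture2D(maskTexture, uv + vec2(stepSize.x, 0.0)).a;

    // Averaging the four neighbours gives a smoother dilated edge.
    float avg = (up + down + left + right) / 4.0;

    // The larger of the center value and the neighbour average is kept as the fill pixel value,
    // which expands the opaque region of the mask.
    maxA = max(center, avg);
    return maxA;
}
```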
Based on the algorithm, the distance between the line drawing and the human body can be flexibly controlled by modifying the input radius value, and the information such as the color, the transparency and the like of the mask filling area can be changed, so that different main body recognition special effect templates are obtained. When special effect generation is performed by using the special effect template, different main body recognition special effects can be realized by selecting different main body recognition special effect templates.
Specifically, in the OpenGL fragment shader, each displayed pixel is a four-dimensional rgba variable: the displayed color can be adjusted by changing the rgb values, and the transparency of the result can be adjusted by changing the value of the a channel.
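For example, a trivial fragment shader that sets the displayed color from the rgb values and the transparency from the a value might look as follows; maskFillColor is an assumed uniform name.

```glsl
precision mediump float;

// Assumed uniform name; rgb controls the displayed color, a controls the transparency.
uniform vec4 maskFillColor;

void main() {
    gl_FragColor = vec4(maskFillColor.rgb, maskFillColor.a);
}
```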
After the expansion control is performed on the main mask, the background area can be filled based on the expanded main mask (i.e. the target expansion mask), i.e. the non-mask area is filled with solid color or texture.
Specifically, the built-in function step can be used to apply a threshold operation to the mask: whether the mask value exceeds a preset threshold determines whether the pixel belongs to the mask area. This binarization improves the accuracy of judging the mask area.
For example, the threshold may be set to 0.55. If the mask value of a pixel is greater than 0.55, the function returns 1, indicating that the pixel belongs to the mask area; otherwise it returns 0, indicating that the pixel does not belong to the mask area (i.e., it belongs to the image background area). In this way, pixels at transition positions are judged better and a better display effect is obtained.
Furthermore, the obtained binary mask result is used as the alpha input of the normal blend mode, so that the filling effect on the background is obtained.
Whether the background is filled, the solid fill color of the background, the background fill texture, the transparency of the background fill content and the like can all be controlled by the background filling parameters.
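A fragment-shader sketch of the thresholding and blending described above is given below; the 0.55 threshold follows the example above, and the bgFillColor uniform and the fillBackground helper are assumed names.

```glsl
// Assumed names: bgFillColor carries the solid fill color (rgb) and fill transparency (a)
// taken from the background filling parameters.
uniform vec4 bgFillColor;

vec3 fillBackground(vec3 srcRgb, float maskValue) {
    // step() binarises the mask: 1.0 when the pixel belongs to the mask area, 0.0 for the background.
    float inMask = step(0.55, maskValue);

    // Normal blend of the fill color over the original color.
    vec3 filledRgb = mix(srcRgb, bgFillColor.rgb, bgFillColor.a);

    // The binary mask acts as the alpha of the blend: subject pixels keep the original color,
    // background pixels take the filled color.
    return mix(filledRgb, srcRgb, inMask);
}
```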
Based on the expanded main mask, the stroke (edge tracing) effect can also be drawn. In one application scenario, an original color (srcColor, i.e., the color sampled from the image to be processed, which serves as the base image), a stroke line color (outlineColor, i.e., the line color set in the main body stroking parameter) and an alpha value are obtained. The alpha value is a preset transparency parameter: the image has four channels, rgba, and the alpha value is the value of the a channel; an alpha value of 1 represents opacity, while a value other than 1 represents partial transparency.
The step length of the tracing in the horizontal and vertical directions is calculated and used for moving in texture coordinates, and the step length can be changed by adjusting parameters, so that the width (namely the thickness) of the tracing line is controlled.
The pixels surrounding the texture coordinate are sampled, and the transparency of each sampling position in the mask is calculated, specifically: tl: transparency of the upper left corner sampling point; tr: transparency of the upper right corner sampling point; bl: transparency of the lower left corner sampling point; br: transparency of the lower right corner sampling point; t: transparency of the upper sampling point; b: transparency of the lower sampling point; l: transparency of the left sampling point; r: transparency of the right sampling point; c: transparency of the current position sampling point. The current position is the pixel being processed, i.e., the pixel obtained when no offset is applied to the uv coordinate.
Comparing and determining maximum value points in tl, tr, bl and br as a first transparency maximum value mask1; the comparison determines the maximum point among t, b, l and r as the second transparency maximum mask2.
The transparency c of the current position is subtracted from the larger of mask1 and mask2 to obtain a transparency difference mask3. If mask3 is smaller than a preset transparency threshold (for example, 0.2), the transparency target value mask4 of the point is set to 0; otherwise mask4 is set to 1. This binarization yields the transparency target value mask4 of the stroke drawing area, where the area in which mask4 equals 1 represents the area where the stroke line needs to be drawn.
For each pixel in the image to be processed, if the number of person subjects (bodyCount) returned by the subject recognition algorithm is greater than 0 and mask4 is greater than 0.1, srcColor.rgb and outlineColor.rgb are mixed according to mask3 * outlineColor.a * alpha using the mix() function, and the result is assigned to srcColor.rgb. Here srcColor.rgb represents the values of the rgb three channels (i.e., the red, green and blue channels) of the original input image, and outlineColor.rgb represents the rgb values of the stroke line.
srcColor is then returned as the output of the function, which is a color value of type vec4. In this way, the final color value of the stroke line is calculated according to the main body stroking parameters.
Thus, the final color value is calculated from the incoming srcColor, outlineColor and alpha values: the transparency is computed from the pixel samples around the texture coordinate and compared with a threshold, whether srcColor is mixed with outlineColor is judged according to the above conditions, and the mixed color value is finally returned. A suitable stroke line color is therefore calculated for each image to be processed, giving a better display effect.
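The stroke color calculation described above can be summarised by the following GLSL sketch; the uniform names (maskTex, strokeStep, bodyCount and so on) are assumptions for illustration and do not represent the template's actual interface.

```glsl
// Assumed names: maskTex is the target expansion mask, strokeStep is the stroke sampling step
// derived from the line width, bodyCount is the number of person subjects returned by the
// recognition algorithm.
uniform sampler2D maskTex;
uniform vec4      outlineColor;   // stroke line color and its alpha
uniform float     alpha;          // preset transparency parameter
uniform vec2      strokeStep;     // horizontal and vertical stroke steps (controls line width)
uniform int       bodyCount;

vec4 strokeColor(vec4 srcColor, vec2 uv) {
    // Transparency of the eight surrounding sample points and of the current point.
    float tl = texture2D(maskTex, uv + vec2(-strokeStep.x,  strokeStep.y)).a;
    float tr = texture2D(maskTex, uv + vec2( strokeStep.x,  strokeStep.y)).a;
    float bl = texture2D(maskTex, uv + vec2(-strokeStep.x, -strokeStep.y)).a;
    float br = texture2D(maskTex, uv + vec2( strokeStep.x, -strokeStep.y)).a;
    float t  = texture2D(maskTex, uv + vec2(0.0,  strokeStep.y)).a;
    float b  = texture2D(maskTex, uv + vec2(0.0, -strokeStep.y)).a;
    float l  = texture2D(maskTex, uv + vec2(-strokeStep.x, 0.0)).a;
    float r  = texture2D(maskTex, uv + vec2( strokeStep.x, 0.0)).a;
    float c  = texture2D(maskTex, uv).a;

    float mask1 = max(max(tl, tr), max(bl, br));    // first transparency maximum
    float mask2 = max(max(t, b), max(l, r));        // second transparency maximum
    float mask3 = max(mask1, mask2) - c;            // transparency difference to the current point
    float mask4 = (mask3 < 0.2) ? 0.0 : 1.0;        // binarised stroke area (1.0 = draw the stroke)

    if (bodyCount > 0 && mask4 > 0.1) {
        // Blend the original color with the stroke color according to mask3 * outlineColor.a * alpha.
        srcColor.rgb = mix(srcColor.rgb, outlineColor.rgb, mask3 * outlineColor.a * alpha);
    }
    return srcColor;  // vec4 color value returned as output
}
```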
It should be noted that whether the stroke line is drawn, the solid fill color of the line, the line fill texture, the transparency of the line fill content and the like can all be controlled by the main body stroking parameters.
Fig. 9 is a schematic comparison of different expansion ranges of the main mask provided in the embodiment of the present application, and fig. 10 is a schematic comparison of the distance between the stroke line and the person subject before and after expansion of the main mask. As can be seen from fig. 9 and fig. 10, after the main mask is expanded, the edge of the main mask is farther from the person subject, so the stroke line is prevented from occluding the person subject and a better display effect is obtained. Fig. 11 is a schematic comparison of different main mask filling transparencies provided in the embodiment of the present application; as shown in fig. 11, different filling transparencies of the main mask can be obtained through parameter control in different subject recognition special effect templates, so as to meet user needs.
Fig. 12 is a schematic comparison of background texture filling and solid color filling according to an embodiment of the present application, and fig. 13 is a schematic comparison of solid color filling and texture filling of the stroke line according to an embodiment of the present application. As can be seen from fig. 12 and fig. 13, image processing can be performed using subject identification special effect templates integrating different subject stroking parameters and/or background filling parameters, so as to obtain rich subject identification special effects and meet user requirements.
In fig. 9 to fig. 13, the face region of the person subject is pixelated to protect the privacy of the person concerned; this does not constitute a limitation on the embodiments.
In the embodiment of the present application, the pre-generated target main body recognition special effect template is used to perform image processing rapidly. When the main body recognition special effect template is generated, the template adjustment parameters are rich: a designer can freely combine the parameters and quickly obtain templates with different effects through parameter adjustment, so that templates such as a mask special effect package, a main body background filling special effect package and a main body edge tracing special effect package can be output quickly. Meanwhile, the computation of the main body recognition special effect during image processing is performed on the GPU instead of the CPU, which improves operation efficiency, meets online requirements and makes it convenient for users to perform online image processing.
With reference to fig. 14, fig. 14 is a structural block diagram of a special effect template generating device provided in the embodiment of the present application, where the device includes:
A to-be-processed image acquiring module 1401, configured to acquire a to-be-processed image and a main mask of the to-be-processed image;
an initial special effect template obtaining module 1402, configured to obtain an initial special effect template, where the initial special effect template is used to generate a main body recognition special effect for the image to be processed according to the main body mask;
a parameter obtaining module 1403, configured to obtain a special effect adjustment parameter and a mask expansion parameter in response to a parameter control instruction triggered based on a parameter control interface, where the special effect adjustment parameter is used to adjust a display effect of a main body recognition special effect corresponding to the image to be processed;
the special effect template generating module 1404 is configured to generate a main body recognition special effect template according to the special effect adjustment parameter, the mask expansion parameter, and the initial special effect template, where the main body recognition special effect template is configured to generate a main body recognition special effect matched with the special effect adjustment parameter for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after performing expansion processing on the main body mask according to the mask expansion parameter.
In some alternative embodiments, the special effect adjustment parameters include a subject stroking parameter and a background filling parameter;
The parameter control interface comprises a main body tracing parameter adjustment control, a background filling parameter adjustment control and a mask expansion parameter adjustment control;
the parameter obtaining module 1403 is specifically configured to: acquiring the main body tracing parameters in response to a main body tracing parameter control instruction triggered based on the main body tracing parameter adjustment control, wherein the main body tracing parameters are used for controlling the display effect of main body tracing lines in the main body recognition special effect;
acquiring the background filling parameters in response to a background filling parameter control instruction triggered based on the background filling parameter adjustment control, wherein the background filling parameters are used for controlling the filling display effect of an image background area in the main body recognition special effect, and the image background area comprises an area except the expansion mask in the image to be processed;
and responding to a mask expansion parameter control command triggered by the mask expansion parameter adjustment control, and acquiring the mask expansion parameter.
In some alternative embodiments, the mask expansion parameters include mask texture and range control parameters;
the special effects template generation module 1404 is specifically configured to: acquiring the image size of the image to be processed, and determining a mask expansion range corresponding to the image to be processed according to the image size and the range control parameter;
Obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask texture and the main mask;
and generating a main body recognition special effect template based on the expansion mask according to the special effect adjustment parameter, the mask expansion parameter and the initial special effect template.
In some alternative embodiments, the special effects template generation module 1404 is also specifically configured to: determining the center point of the mask texture;
determining mask adjacent points corresponding to the center points in the mask textures according to preset adjacent point control parameters;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
and obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask expansion pixel value and the main mask.
The embodiment of the application discloses a special effect template generating device, which acquires an image to be processed and a main mask of the image to be processed through an image acquisition module 1401 to be processed; acquiring an initial special effect template by an initial special effect template acquisition module 1402, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask; acquiring special effect adjusting parameters and mask expansion parameters by a parameter acquiring module 1403 in response to a parameter control instruction triggered based on a parameter control interface, wherein the special effect adjusting parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed; and generating a main body recognition special effect template by a special effect template generation module 1404 according to the special effect adjustment parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjustment parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
Therefore, a visual parameter control interface can be provided for a designer, and the special effect adjustment parameters and mask expansion parameters for controlling the main body recognition special effect can be obtained through the parameter control interface. On the basis of the existing initial special effect template, the special effect adjustment parameters and the mask expansion parameters are combined to generate a new main body recognition special effect template, which generates, for the expanded expansion mask, a special effect matched with the special effect adjustment parameters. No new effect diagram needs to be designed, and no multi-party communication and participation is required in the template generation process, so the generation efficiency of the special effect template is improved.
With reference to fig. 15, fig. 15 is a block diagram of a special effect generating device according to the embodiment of the present application, where the device includes:
a data acquisition module 1501 for acquiring a target processing image and a target subject identification special effect template, wherein the target subject identification special effect template is generated based on a special effect adjustment parameter, a mask expansion parameter and an initial special effect template;
a target expansion mask acquiring module 1502, configured to acquire a target expansion mask matching the mask expansion parameter of the target processing image through the target subject identification special effect template;
And a special effect generating module 1503, configured to generate a main body recognition special effect matched with the special effect adjustment parameter for the target processing image through the target main body recognition special effect template according to the target expansion mask.
In some alternative embodiments, the mask expansion parameters include mask texture and range control parameters;
the target inflation mask acquiring module 1502 specifically is configured to: acquiring a main mask of the target processing image;
acquiring the image size of the target processing image, and determining a mask expansion range corresponding to the target processing image according to the image size and the range control parameter;
and obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask texture and the main mask.
In some alternative embodiments, the target inflation mask acquiring module 1502 is further specifically configured to:
acquiring a center point of the mask texture;
determining mask adjacent points corresponding to the center points in the mask textures according to preset adjacent point control parameters;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
And obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask expansion pixel value and the main mask.
In some alternative embodiments, the special effect adjustment parameters include a subject stroking parameter and a background filling parameter;
the special effect generation module 1503 is specifically configured to: generating a tracing line for the target processing image along the edge of the target expansion mask according to the main body tracing parameter, wherein the main body tracing parameter is used for controlling at least one display effect of color, thickness, line texture and line transparency of the tracing line;
and filling an image background area into the target processing image according to the background filling parameter, wherein the image background area comprises an area except the target expansion mask in the target processing image, and the background filling parameter is used for controlling at least one display effect of filling color, filling texture and filling transparency of the image background area.
The embodiment of the application discloses a special effect generating device, which acquires a target processing image and a target main body identification special effect template through a data acquisition module 1501, wherein the target main body identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template; acquiring a target expansion mask matched with the mask expansion parameter of the target processing image through the target subject identification special effect template by a target expansion mask acquisition module 1502; and generating a subject identification effect matched with the effect adjustment parameter for the target processing image through the target subject identification effect template according to the target expansion mask by an effect generation module 1503.
Thus, image processing is performed rapidly by using the pre-generated target main body recognition special effect template. When the target main body recognition special effect template is generated, the template adjustment parameters are rich: a designer can freely combine the parameters and quickly obtain templates with different effects through parameter adjustment, so that templates such as a mask special effect package, a main body background filling special effect package and a main body edge tracing special effect package can be output quickly. Meanwhile, the computation of the main body recognition special effect during image processing is performed on the GPU instead of the CPU, which improves operation efficiency, meets online requirements and makes it convenient for users to perform online image processing.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Correspondingly, the embodiment of the application also provides electronic equipment, which can be a terminal, and the terminal can be terminal equipment such as a smart phone, a tablet personal computer, a notebook computer, a touch screen, a game machine, a personal computer (PC, personal Computer), a personal digital assistant (PDA, personal Digital Assistant) and the like. As shown in fig. 16, fig. 16 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 1600 includes a processor 1601 having one or more processing cores, a memory 1602 having one or more computer readable storage media, and a computer program stored on the memory 1602 and executable on the processor. The processor 1601 is electrically connected to a memory 1602. It will be appreciated by those skilled in the art that the electronic device structure shown in the figures is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The processor 1601 is a control center of the electronic device 1600, connects various portions of the entire electronic device 1600 using various interfaces and lines, and performs various functions and processes of the electronic device 1600 by running or loading software programs and/or modules stored in the memory 1602, and invoking data stored in the memory 1602, thereby performing overall monitoring of the electronic device 1600. The processor 1601 may be a central processing unit CPU, a graphics processor GPU, a network processor (NP, network Processor), etc., and may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application.
In the embodiment of the present application, the processor 1601 in the electronic device 1600 loads instructions corresponding to the processes of one or more application programs into the memory 1602 according to the following steps, and the processor 1601 executes the application programs stored in the memory 1602, so as to implement various functions, for example:
acquiring an image to be processed and a main mask of the image to be processed;
acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask;
Responding to a parameter control instruction triggered based on a parameter control interface, and acquiring special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed;
generating a main body recognition special effect template according to the special effect adjusting parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjusting parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
Or for example:
acquiring a target processing image and a target subject identification special effect template, wherein the target subject identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template;
obtaining a target expansion mask of the target processing image, which is matched with the mask expansion parameter, through the target subject identification special effect template;
and generating a subject identification special effect matched with the special effect adjusting parameter for the target processing image through the target subject identification special effect template according to the target expansion mask.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 16, the electronic device 1600 further includes: a touch display screen 1603, a radio frequency circuit 1604, an audio circuit 1605, an input unit 1606, and a power supply 1607. The processor 1601 is electrically connected to the touch display 1603, the rf circuit 1604, the audio circuit 1605, the input unit 1606, and the power supply 1607, respectively. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 16 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The touch display 1603 may be used to display a graphical user interface and to receive operation instructions generated by a user acting on the graphical user interface. The touch display 1603 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (such as operations performed by the user on or near the touch panel with a finger, a stylus or any other suitable object or accessory), generate corresponding operation instructions, and execute the corresponding programs according to the operation instructions. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1601, and can receive and execute commands sent from the processor 1601. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 1601 to determine the type of touch event, and the processor 1601 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 1603 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components; that is, the touch display 1603 may also implement the input function as part of the input unit 1606.
The radio frequency circuit 1604 may be used to transceive radio frequency signals to establish wireless communication with a network device or other electronic device via wireless communication.
The audio circuit 1605 may be used to provide an audio interface between the user and the electronic device through a speaker, a microphone and so on. The audio circuit 1605 may convert received audio data into an electrical signal and transmit it to the speaker, which converts the electrical signal into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 1605 and converted into audio data; the audio data is then processed by the processor 1601 and sent, for example, to another electronic device via the radio frequency circuit 1604, or output to the memory 1602 for further processing. The audio circuit 1605 may also include an earphone jack to provide communication between peripheral earphones and the electronic device.
The input unit 1606 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
A power supply 1607 is used to power the various components of the electronic device 1600. Optionally, the power supply 1607 may be logically connected to the processor 1601 by a power management system, so as to perform functions of managing charging, discharging, and power consumption by the power management system. The power supply 1607 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 16, the electronic device 1600 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium in which a plurality of computer programs are stored, the computer programs being capable of being loaded by a processor to perform any one of the special effects template generation methods or any one of the steps of the special effects generation methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
Acquiring an image to be processed and a main mask of the image to be processed;
acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask;
responding to a parameter control instruction triggered based on a parameter control interface, and acquiring special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed;
generating a main body recognition special effect template according to the special effect adjusting parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjusting parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
Or for example:
acquiring a target processing image and a target subject identification special effect template, wherein the target subject identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template;
Obtaining a target expansion mask of the target processing image, which is matched with the mask expansion parameter, through the target subject identification special effect template;
and generating a subject identification special effect matched with the special effect adjusting parameter for the target processing image through the target subject identification special effect template according to the target expansion mask.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Because the computer program stored in the storage medium can execute the steps of any special effect template generation method or any special effect generation method provided by the embodiments of the present application, the beneficial effects achievable by any such method can also be realized, as described in detail in the foregoing embodiments and not repeated herein.
According to one aspect of the present application, there is also provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations of the above embodiments.
The special effect template generation method, the special effect generation device and the storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application; in view of the above, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A method for generating a special effect template, the method comprising:
acquiring an image to be processed and a main mask of the image to be processed;
acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask;
responding to a parameter control instruction triggered based on a parameter control interface, and acquiring special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of a main body recognition special effect corresponding to the image to be processed;
generating a main body recognition special effect template according to the special effect adjusting parameters, the mask expansion parameters and the initial special effect template, wherein the main body recognition special effect template is used for generating a main body recognition special effect matched with the special effect adjusting parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
2. The special effects template generation method of claim 1, wherein the special effects adjustment parameters include a main body stroking parameter and a background filling parameter;
the parameter control interface comprises a main body tracing parameter adjustment control, a background filling parameter adjustment control and a mask expansion parameter adjustment control;
the responding to the parameter control instruction triggered based on the parameter control interface obtains the special effect adjusting parameter and the mask expansion parameter, and the method comprises the following steps:
acquiring a main body tracing parameter in response to a main body tracing parameter control instruction triggered based on the main body tracing parameter adjustment control, wherein the main body tracing parameter is used for controlling the display effect of a main body tracing line in the main body recognition special effect;
acquiring the background filling parameters in response to a background filling parameter control instruction triggered based on the background filling parameter adjustment control, wherein the background filling parameters are used for controlling a filling display effect of an image background area in the main body recognition special effect, and the image background area comprises an area except the expansion mask in the image to be processed;
and responding to a mask expansion parameter control instruction triggered by the mask expansion parameter adjustment control, and acquiring the mask expansion parameter.
3. The special effect template generation method according to claim 1, wherein the mask expansion parameters include mask texture and range control parameters;
generating a main body recognition special effect template according to the special effect adjusting parameter, the mask expansion parameter and the initial special effect template comprises the following steps:
acquiring the image size of the image to be processed, and determining a mask expansion range corresponding to the image to be processed according to the image size and the range control parameter;
obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask texture and the main mask;
and generating a main body recognition special effect template based on the expansion mask according to the special effect adjusting parameter, the mask expansion parameter and the initial special effect template.
4. The special effect template generation method according to claim 3, wherein the obtaining the expansion mask corresponding to the image to be processed according to the mask expansion range, the mask texture and the main mask comprises:
determining a center point of the mask texture;
determining a mask adjacent point corresponding to the center point in the mask texture according to a preset adjacent point control parameter;
Determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
and obtaining an expansion mask corresponding to the image to be processed according to the mask expansion range, the mask expansion pixel value and the main mask.
5. A special effect generation method, characterized in that the method comprises:
acquiring a target processing image and a target subject identification special effect template, wherein the target subject identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template;
acquiring a target expansion mask matched with the mask expansion parameter of the target processing image through the target main body recognition special effect template;
and generating a main body recognition special effect matched with the special effect adjusting parameter for the target processing image through the target main body recognition special effect template according to the target expansion mask.
6. The special effect generation method according to claim 5, wherein the mask expansion parameters include mask texture and range control parameters;
the obtaining the target expansion mask matched with the mask expansion parameter of the target processing image through the target main body recognition special effect template comprises the following steps:
Acquiring a main mask of the target processing image;
acquiring the image size of the target processing image, and determining a mask expansion range corresponding to the target processing image according to the image size and the range control parameter;
and obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask texture and the main mask.
7. The special effect generation method according to claim 6, wherein the obtaining the target expansion mask corresponding to the target processing image according to the mask expansion range, the mask texture, and the main mask includes:
acquiring a center point of the mask texture;
determining a mask adjacent point corresponding to the center point in the mask texture according to a preset adjacent point control parameter;
determining a mask expansion pixel value according to the pixel value of the center point and the pixel value of the mask adjacent point;
and obtaining a target expansion mask corresponding to the target processing image according to the mask expansion range, the mask expansion pixel value and the main mask.
8. The special effect generation method according to claim 5, wherein the special effect adjustment parameters include a main body stroking parameter and a background filling parameter;
Generating a subject identification effect matched with the effect adjustment parameter for the target processing image through the target subject identification effect template according to the target expansion mask, including:
generating a tracing line for the target processing image along the edge of the target expansion mask according to the main body tracing parameter, wherein the main body tracing parameter is used for controlling at least one display effect of color, thickness, line texture and line transparency of the tracing line;
and filling an image background area for the target processing image according to the background filling parameter, wherein the image background area comprises an area except for the target expansion mask in the target processing image, and the background filling parameter is used for controlling at least one display effect of filling color, filling texture and filling transparency of the image background area.
9. A special effect template generating apparatus, the apparatus comprising:
the image processing device comprises a to-be-processed image acquisition module, a processing module and a processing module, wherein the to-be-processed image acquisition module is used for acquiring an to-be-processed image and a main mask of the to-be-processed image;
the initial special effect template acquisition module is used for acquiring an initial special effect template, wherein the initial special effect template is used for generating a main body identification special effect for the image to be processed according to the main body mask;
The parameter acquisition module is used for responding to a parameter control instruction triggered based on a parameter control interface to acquire special effect adjustment parameters and mask expansion parameters, wherein the special effect adjustment parameters are used for adjusting the display effect of the main body recognition special effect corresponding to the image to be processed;
the special effect template generation module is used for generating a main body identification special effect template according to the special effect adjustment parameters, the mask expansion parameters and the initial special effect template, wherein the main body identification special effect template is used for generating a main body identification special effect matched with the special effect adjustment parameters for the image to be processed according to an expansion mask corresponding to the image to be processed, and the expansion mask is obtained after the main body mask is subjected to expansion processing according to the mask expansion parameters.
10. A special effect generation apparatus, characterized in that the apparatus comprises:
the data acquisition module is used for acquiring a target processing image and a target main body identification special effect template, wherein the target main body identification special effect template is generated based on special effect adjustment parameters, mask expansion parameters and an initial special effect template;
the target expansion mask acquisition module is used for acquiring a target expansion mask matched with the mask expansion parameter of the target processing image through the target main body recognition special effect template;
And the special effect generation module is used for generating a main body recognition special effect matched with the special effect adjustment parameter for the target processing image through the target main body recognition special effect template according to the target expansion mask.
11. An electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform the steps of the special effects template generation method of any one of claims 1 to 4 or the steps of the special effects generation method of any one of claims 5 to 8.
12. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the special effects template generation method of any one of claims 1 to 4 or the steps of the special effects generation method of any one of claims 5 to 8.
CN202311322422.6A 2023-10-12 2023-10-12 Special effect template generation method, special effect generation device and storage medium Active CN117455753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311322422.6A CN117455753B (en) 2023-10-12 2023-10-12 Special effect template generation method, special effect generation device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311322422.6A CN117455753B (en) 2023-10-12 2023-10-12 Special effect template generation method, special effect generation device and storage medium

Publications (2)

Publication Number Publication Date
CN117455753A true CN117455753A (en) 2024-01-26
CN117455753B CN117455753B (en) 2024-06-18

Family

ID=89593850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311322422.6A Active CN117455753B (en) 2023-10-12 2023-10-12 Special effect template generation method, special effect generation device and storage medium

Country Status (1)

Country Link
CN (1) CN117455753B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522760A (en) * 2023-11-13 2024-02-06 书行科技(北京)有限公司 Image processing method, device, electronic equipment, medium and product

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110177219A (en) * 2019-07-01 2019-08-27 百度在线网络技术(北京)有限公司 The template recommended method and device of video
CN111784568A (en) * 2020-07-06 2020-10-16 北京字节跳动网络技术有限公司 Face image processing method and device, electronic equipment and computer readable medium
CN111899192A (en) * 2020-07-23 2020-11-06 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN113157179A (en) * 2021-03-23 2021-07-23 北京达佳互联信息技术有限公司 Picture adjustment parameter adjusting method and device, electronic equipment and storage medium
CN113221499A (en) * 2021-05-31 2021-08-06 Tcl华星光电技术有限公司 Mask layout generation method and device, computer equipment and storage medium
CN114037595A (en) * 2021-11-10 2022-02-11 梅卡曼德(北京)机器人科技有限公司 Image data processing method, image data processing device, electronic equipment and storage medium
CN115002336A (en) * 2021-11-30 2022-09-02 荣耀终端有限公司 Video information generation method, electronic device and medium
CN115564931A (en) * 2022-09-30 2023-01-03 广州欢聚时代信息科技有限公司 Method for generating dressing image, device, equipment, medium and product thereof
CN115619919A (en) * 2022-09-02 2023-01-17 网易(杭州)网络有限公司 Scene object highlighting method and device, electronic equipment and storage medium
CN116229203A (en) * 2023-01-10 2023-06-06 京东科技控股股份有限公司 Image processing method, training method, device, system and medium for model
CN116688492A (en) * 2023-06-15 2023-09-05 网易(杭州)网络有限公司 Special effect rendering method and device for virtual model, electronic equipment and storage medium
WO2023182937A2 (en) * 2022-03-25 2023-09-28 脸萌有限公司 Special effect video determination method and apparatus, electronic device and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110177219A (en) * 2019-07-01 2019-08-27 百度在线网络技术(北京)有限公司 The template recommended method and device of video
CN111784568A (en) * 2020-07-06 2020-10-16 北京字节跳动网络技术有限公司 Face image processing method and device, electronic equipment and computer readable medium
CN111899192A (en) * 2020-07-23 2020-11-06 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN113157179A (en) * 2021-03-23 2021-07-23 北京达佳互联信息技术有限公司 Picture adjustment parameter adjusting method and device, electronic equipment and storage medium
CN113221499A (en) * 2021-05-31 2021-08-06 Tcl华星光电技术有限公司 Mask layout generation method and device, computer equipment and storage medium
CN114037595A (en) * 2021-11-10 2022-02-11 梅卡曼德(北京)机器人科技有限公司 Image data processing method, image data processing device, electronic equipment and storage medium
CN115002336A (en) * 2021-11-30 2022-09-02 荣耀终端有限公司 Video information generation method, electronic device and medium
WO2023182937A2 (en) * 2022-03-25 2023-09-28 脸萌有限公司 Special effect video determination method and apparatus, electronic device and storage medium
CN115619919A (en) * 2022-09-02 2023-01-17 网易(杭州)网络有限公司 Scene object highlighting method and device, electronic equipment and storage medium
CN115564931A (en) * 2022-09-30 2023-01-03 广州欢聚时代信息科技有限公司 Method for generating dressing image, device, equipment, medium and product thereof
CN116229203A (en) * 2023-01-10 2023-06-06 京东科技控股股份有限公司 Image processing method, training method, device, system and medium for model
CN116688492A (en) * 2023-06-15 2023-09-05 网易(杭州)网络有限公司 Special effect rendering method and device for virtual model, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YARUI QI: "Analysis of the Development Trend of Douyin Special Effects from the Perspective of Acceptance Aesthetics", JOURNAL OF HUMANITIES, ARTS AND SOCIAL SCIENCE, 14 September 2022 (2022-09-14), pages 378 - 389 *
PENG XIAO: "Research on Image Inpainting Algorithms Based on TV Model and Texture Synthesis", CHINA MASTER'S THESES ELECTRONIC JOURNALS DATABASE (中国优秀硕士论文电子期刊网), 15 December 2011 (2011-12-15), pages 138 - 864 *
YIN HU: "Research on Stereo Matching Algorithms Based on Image Segmentation", CHINA MASTER'S THESES ELECTRONIC JOURNALS DATABASE (中国优秀硕士论文电子期刊网), 15 June 2011 (2011-06-15), pages 138 - 472 *
SHAO PING; YANG LUMING: "Fast Template Matching Based on Generalized Mask Integral Image", COMPUTER SCIENCE (计算机科学), no. 06, 25 June 2008 (2008-06-25), pages 227 - 230 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522760A (en) * 2023-11-13 2024-02-06 书行科技(北京)有限公司 Image processing method, device, electronic equipment, medium and product
CN117522760B (en) * 2023-11-13 2024-06-25 书行科技(北京)有限公司 Image processing method, device, electronic equipment, medium and product

Also Published As

Publication number Publication date
CN117455753B (en) 2024-06-18

Similar Documents

Publication Title
CN109427083B (en) Method, device, terminal and storage medium for displaying three-dimensional virtual image
CN107256555B (en) Image processing method, device and storage medium
CN110689500B (en) Face image processing method and device, electronic equipment and storage medium
CN117455753B (en) Special effect template generation method, special effect generation device and storage medium
CN110689479B (en) Face makeup method, device, equipment and medium
CN113546411B (en) Game model rendering method, device, terminal and storage medium
CN112053423A (en) Model rendering method and device, storage medium and computer equipment
CN113018856A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116797631A (en) Differential area positioning method, differential area positioning device, computer equipment and storage medium
CN112053416B (en) Image processing method, device, storage medium and computer equipment
CN112316425B (en) Picture rendering method and device, storage medium and electronic equipment
CN113645476A (en) Picture processing method and device, electronic equipment and storage medium
CN117593493A (en) Three-dimensional face fitting method, three-dimensional face fitting device, electronic equipment and storage medium
CN111107264A (en) Image processing method, image processing device, storage medium and terminal
CN116382540A (en) Display method and device for electronic paper, electronic equipment and storage medium
CN112837403B (en) Mapping method, mapping device, computer equipment and storage medium
CN117274432B (en) Method, device, equipment and readable storage medium for generating image edge special effect
CN110662023B (en) Method and device for detecting video data loss and storage medium
CN115086738B (en) Information adding method, information adding device, computer equipment and storage medium
CN115810066A (en) Image processing method, image processing device, storage medium and electronic equipment
CN116328298A (en) Virtual model rendering method and device, computer equipment and storage medium
CN118304652A (en) Game map display method and device, computer equipment and storage medium
CN114972009A (en) Image processing method and device, electronic equipment and storage medium
CN116236779A (en) Mapping processing method and device, computer equipment and storage medium
CN115731339A (en) Virtual model rendering method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant