CN117372240A - Display method and device of special effect image, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117372240A
CN117372240A (application number CN202210750504.XA)
Authority
CN
China
Prior art keywords
image
special effect
style
area
effect image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210750504.XA
Other languages
Chinese (zh)
Inventor
赵楠 (Zhao Nan)
黄奇伟 (Huang Qiwei)
吕烨华 (Lyu Yehua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210750504.XA priority Critical patent/CN117372240A/en
Publication of CN117372240A publication Critical patent/CN117372240A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

Embodiments of the present disclosure provide a display method and device for special effect images, an electronic device, and a storage medium. The method includes: in response to a special effect triggering operation, acquiring an original image, wherein the original image comprises at least a first image area and a second image area; and, when a preset display condition is detected to be met, displaying a target special effect image corresponding to the original image, wherein the first image area of the target special effect image is displayed in a first image style and the second image area is displayed in a second image style. The technical scheme solves the problems in the prior art that the image display effect is unstable, monotonous, and lacking in interest: through simple interaction with the user, different image areas can be processed in a targeted manner and different display styles fused for display, which increases the interest of image display and enriches the display effect of special effect images.

Description

Display method and device of special effect image, electronic equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to image processing technology, and in particular to a display method and device for special effect images, an electronic device, and a storage medium.
Background
Special effect rendering is an indispensable image display technology in short video, photography, gaming, animation, and other applications: through special effect rendering, images are rendered into more attractive special effect images with corresponding audio and visual effects.
In the related art, special effect rendering is usually applied to the image as a whole. In some scenes this approach cannot achieve a good display effect: the overall display is monotonous and lacks interest, which degrades the user experience.
Disclosure of Invention
The present disclosure provides a display method and device for special effect images, an electronic device, and a storage medium, so as to realize the fused display of different display styles within a special effect image.
In a first aspect, an embodiment of the present disclosure provides a method for displaying a special effect image, including:
in response to a special effect triggering operation, acquiring an original image, wherein the original image comprises at least a first image area and a second image area;
when a preset display condition is detected to be met, displaying a target special effect image corresponding to the original image;
wherein the first image area in the target special effect image is displayed in a first image style, and the second image area in the target special effect image is displayed in a second image style.
In a second aspect, an embodiment of the present disclosure further provides a display device for a special effect image, including:
an original image acquisition module, configured to acquire an original image in response to a special effect triggering operation, wherein the original image comprises at least a first image area and a second image area;
and a special effect image display module, configured to display a target special effect image corresponding to the original image when a preset display condition is detected to be met, wherein the first image area in the target special effect image is displayed in a first image style and the second image area is displayed in a second image style.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method for displaying a special effect image according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the method for displaying a special effect image according to any embodiment of the present disclosure.
According to the technical scheme of the embodiments of the present disclosure, an original image is acquired in response to a special effect triggering operation, the original image comprising a first image area and a second image area; special effect rendering is performed separately on the two areas, with the first image area rendered in a first image style and the second image area rendered in a second image style; and, when a preset display condition is detected to be met, the target special effect image is displayed with the first image area shown in the first image style and the second image area shown in the second image style. This solves the problems in the prior art that the image display effect is unstable, monotonous, and lacking in interest: through simple interaction with the user, each image area can be processed in a targeted manner and different display styles can be fused for display, increasing the interest of image display and enriching the display effect of special effect images.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a method for displaying a special effect image according to an embodiment of the disclosure;
fig. 2 is a flowchart of another method for displaying a special effect image according to an embodiment of the disclosure;
fig. 3 is a flowchart of another method for displaying a special effect image according to an embodiment of the disclosure;
fig. 4 is a flowchart of another method for displaying a special effect image according to an embodiment of the disclosure;
FIG. 5 is an image schematic diagram of a second style image according to an embodiment of the present disclosure;
FIG. 6 is an image schematic of a target weighted image according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a display device for special effect images according to an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other relevant terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that the data involved in the present technical solution (including but not limited to the data itself and its acquisition or use) should comply with the relevant laws, regulations, and requirements.
Fig. 1 is a schematic flow chart of a method for displaying a special effect image according to an embodiment of the present disclosure. The embodiment is applicable to special effect image rendering. The method may be performed by a display device for special effect images, which may be implemented in the form of software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC, or a server.
As shown in fig. 1, the method of the embodiment of the disclosure may specifically include:
s110, responding to a special effect triggering operation, and acquiring an original image, wherein the original image at least comprises a first image area and a second image area.
The special effect triggering operation can be understood as an operation that triggers the function of performing special effect processing on an image. The original image can be understood as the image on which the special effect processing is to be performed.
In an embodiment of the present disclosure, the method may further include, before responding to the special effect triggering operation, receiving the special effect triggering operation. The special effect triggering operation may be triggered in various ways. Optionally, receiving the special effect triggering operation may include, but is not limited to: receiving a special effect triggering operation acting on a preset special effect trigger control; receiving sound information for enabling the special effect, collected by a sound collection device; receiving motion information for enabling the special effect (such as hand motion information, head motion information, or limb motion information); or receiving a special effect enabling instruction for enabling the special effect. The special effect trigger control may be a virtual control element arranged on an application program interface, for example a special effect enabling element or an image acquisition element.
In the embodiment of the present disclosure, the first image area and the second image area are different sub-areas of the original image. It should be noted that the original image may include only the first image area and the second image area, in which case the second image area is the remaining image area of the original image outside the first image area. For example, the original image may be a person image, the first image area may be the face area in the person image, and the second image area may be the remaining area after the face area is removed.
Alternatively, the original image may include, besides the first image area and the second image area, other image areas that belong to neither of the two.
Specifically, before special effect processing is performed, the original image to be processed is first acquired, and the first image area and the second image area to be processed that it contains are determined.
S120, when the fact that the preset display condition is met is detected, displaying a target special effect image corresponding to the original image; the first image area in the target special effect image is displayed in a first image style, and the second image area in the target special effect image is displayed in a second image style.
The preset display condition can be understood as a preset condition for judging whether the target special effect image is displayed or not. Illustratively, the preset presentation conditions may include at least one of the following conditions: reaching preset special effect image display time; the target special effect image corresponding to the original image is generated; and receiving a preset display triggering operation.
In the embodiment of the present disclosure, an image style can be understood as a distinct display effect obtained by adjusting parameters of the image such as color, saturation, brightness, and transparency. Embodiments of the present disclosure do not limit image styles to specific types; users may set them according to their own needs. For example, image styles may be classified into canvas style, rustic style, sweet style, retro style, fresh style, comic style, and the like, and may be further subdivided, for example into American comic style and Japanese comic style. The first image style is the image style in which the first image area is displayed after special effect processing is performed on the first image area, and the second image style is the image style in which the second image area is displayed after special effect processing is performed on the second image area. It will be appreciated that the first image style and the second image style may or may not be the same.
Specifically, when special effect processing is performed on an original image, special effect processing is performed on a first image area of the original image according to a first image style, and special effect processing is performed on a second image area of the original image according to a second image style, so that a target special effect image corresponding to the original image is generated. And when the preset display condition of the target special effect image is detected, displaying the target special effect image. At this time, when the target special effect image is displayed, the first image area of the target special effect image is displayed in a first image style, and the second image area of the target special effect image is displayed in a second image style.
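The per-area styling and compositing described above can be sketched in a few lines of NumPy. This is an illustration, not part of the patent: the function names and the two toy "styles" are invented for the sketch, with a binary mask standing in for the first image area.

```python
import numpy as np

def compose_target_effect(original, mask, first_style, second_style):
    """Render a target special effect image: pixels inside `mask` (the
    first image area) take the first image style, the rest take the
    second. `original` is an H x W x 3 float array in [0, 1]; `mask` is
    H x W with 1 inside the first image area and 0 elsewhere."""
    m = mask[..., None].astype(np.float32)   # broadcast mask over channels
    return first_style(original) * m + second_style(original) * (1.0 - m)

# Toy stand-ins for the two styles: a warm tint and a grayscale conversion.
warm = lambda img: np.clip(img * np.array([1.2, 1.0, 0.8]), 0.0, 1.0)
gray = lambda img: np.repeat(img.mean(axis=-1, keepdims=True), 3, axis=-1)

img = np.random.rand(4, 4, 3).astype(np.float32)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1          # pretend this square is the face area
out = compose_target_effect(img, mask, warm, gray)
```

Here the two "styles" are trivial pixel operations; in the scheme of the disclosure they would be the style conversion model and the style conversion algorithm, respectively.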
According to the technical scheme of the embodiments of the present disclosure, an original image is acquired in response to a special effect triggering operation, the original image comprising a first image area and a second image area; special effect rendering is performed separately on the two areas, with the first image area rendered in a first image style and the second image area rendered in a second image style; and, when a preset display condition is detected to be met, the target special effect image is displayed with the first image area shown in the first image style and the second image area shown in the second image style. This solves the problems in the prior art that the image display effect is unstable, monotonous, and lacking in interest: through simple interaction with the user, each image area can be processed in a targeted manner and different display styles can be fused for display, increasing the interest of image display and enriching the display effect of special effect images.
Fig. 2 is a flowchart of another method for displaying a special effect image according to an embodiment of the present disclosure, where special effect processing of a first image area and a second image area is further described, as shown in fig. 2, and the method includes:
S210, responding to a special effect triggering operation, and acquiring an original image, wherein the original image at least comprises a first image area and a second image area.
S220, dividing a first image area in the original image to obtain a first divided image, and converting the first divided image into a first special effect image of a first image style.
Optionally, segmenting the first image area in the original image to obtain a first segmented image, which specifically includes: and carrying out edge detection on a first image area in the original image to obtain area edge information of the first image area, and dividing the first image area in the original image according to the area edge information to obtain a first divided image.
Optionally, segmenting the first image area in the original image to obtain a first segmented image includes: acquiring the pixel value and position information of each pixel point at the edge of the first image area, and copying each pixel point of the first image area into a blank image according to its pixel value and position information, generating a first segmented image identical to the first image area.
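The copy-into-a-blank-image variant can be sketched as follows. This is a NumPy illustration, not the patent's implementation: a boolean mask stands in for the per-pixel position information, and the function name is invented.

```python
import numpy as np

def extract_first_segmented_image(original, region_mask):
    """Copy each pixel point of the first image area, by its pixel value
    and position, into a blank image of the same size, producing a first
    segmented image identical to the first image area."""
    blank = np.zeros_like(original)
    ys, xs = np.nonzero(region_mask)      # positions of the region's pixels
    blank[ys, xs] = original[ys, xs]      # copy the pixel values across
    return blank

img = np.arange(2 * 2 * 3).reshape(2, 2, 3)
face_mask = np.array([[True, False], [False, False]])
seg = extract_first_segmented_image(img, face_mask)
```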
Optionally, segmenting the first image area in the original image to obtain a first segmented image includes: determining the coordinates of each pixel point of the first image area in the original image, determining the position of the first image area in the original image from those coordinates, and segmenting the original image at that position to obtain the first segmented image corresponding to the first image area.
Optionally, in another embodiment of the present disclosure, segmenting the first image area in the original image to obtain a first segmented image includes: performing image segmentation on the original image according to the region key points of the first image area, to obtain a first segmented image corresponding to the first image area.
The region key points can be understood as the key pixel points on which the segmentation of the first image area depends, for example the key points of the target subject in the first image area. Taking the first image area being a face area as an example, the region key points may be face key points.
Specifically, image segmentation is performed on the original image according to the region key points of the first image region to obtain a first segmented image corresponding to the first image region, which may include: determining region key points of a first image region according to preset template key points of a segmentation template for segmenting the first image region; and matching the template key points of the segmentation template with the region key points to obtain a first image region, and further segmenting the first image region in the original image to obtain a first segmented image corresponding to the first image region.
The first special effect image may be an image obtained by performing special effect processing on the first split image according to the first image style. Further, after the first segmented image is obtained, a first image style corresponding to the first image area is obtained, special effect processing is carried out on the first segmented image according to the first image style corresponding to the first image area, and the first segmented image is converted into a first special effect image of the first image style.
There are various ways of performing stylization processing on an image. To ensure the quality of the stylization result and improve processing efficiency, optionally, in another embodiment of the present disclosure, converting the first segmented image into the first special effect image of the first image style includes: converting the first segmented image into an initial converted image of the first image style based on a pre-trained style conversion model, and assigning transparency to the initial converted image to obtain the first special effect image.
The style conversion model may be a neural network model trained in advance to perform style conversion, used here to convert the first segmented image into an initial converted image of the first image style. Specifically, the pre-trained style conversion model may be trained on a pre-built neural network structure, such as a Convolutional Neural Network (CNN) and/or a Recurrent Neural Network (RNN).
Illustratively, taking a face image as the first segmented image: before the style conversion model is trained, a large number of sample face images are collected as training samples, and the desired special effect image of the first image style corresponding to each sample face image is used as its label. During training, a sample face image is input into the style conversion model to be trained, the model outputs a converted image of the first image style, the loss between the output converted image and the desired special effect image is computed by a loss function, and the parameters of the model are adjusted according to the computed loss. When the loss function is detected to have converged, training is complete and the style conversion model is obtained. The loss function of the style conversion model may include, but is not limited to, any one of a mean squared error function, a mean absolute error function, and a cross-entropy error function.
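The pixel-wise losses named above are standard. As an illustrative sketch (NumPy; the patent does not fix an implementation), the mean squared error and mean absolute error between the model's converted image and the desired effect image are:

```python
import numpy as np

def mse_loss(predicted, expected):
    """Mean squared error between the converted image and its label."""
    return float(np.mean((predicted - expected) ** 2))

def mae_loss(predicted, expected):
    """Mean absolute error, the other pixel-wise loss mentioned above."""
    return float(np.mean(np.abs(predicted - expected)))
```

MSE penalizes large per-pixel deviations more heavily than MAE, which is one common reason to pick one over the other for image reconstruction targets.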
Specifically, the first segmentation image is input into a pre-trained style conversion model, and is converted into an initial conversion image of a first image style, the value of alpha in an alpha channel in the initial conversion image is obtained, transparency is given to the initial conversion image according to the value of alpha, and then the first special effect image is obtained.
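The transparency step can be illustrated as follows. This sketch assumes the initial converted image is an RGBA array and premultiplies the color channels by the alpha channel; the patent itself does not specify the representation, so treat both the assumption and the function name as illustrative.

```python
import numpy as np

def apply_alpha(initial_converted):
    """Take the alpha value from the initial converted image's alpha
    channel and assign it as the first special effect image's
    transparency (premultiplied RGB, alpha channel kept)."""
    rgb = initial_converted[..., :3]
    alpha = initial_converted[..., 3:4]
    return np.concatenate([rgb * alpha, alpha], axis=-1)

rgba = np.zeros((1, 1, 4))
rgba[0, 0] = [1.0, 0.5, 0.0, 0.5]     # half-transparent orange pixel
first_effect = apply_alpha(rgba)
```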
S230, generating a second special effect image of a second image style corresponding to the second segmentation area.
The second segmented region may be the region of the original image excluding the first segmented region; the second special effect image can be understood as the image obtained by performing special effect processing on the second segmented image according to the second image style.
Specifically, after the second segmented image is obtained, the second image style corresponding to the second image area is determined, special effect processing is performed on the second segmented image according to that style, and the second segmented image is converted into the second special effect image of the second image style.
Optionally, in another embodiment of the present disclosure, generating the second special effect image of the second image style corresponding to the second segmented region includes: segmenting the second image area in the original image to obtain a second segmented image, and converting the second segmented image into the second special effect image of the second image style; or converting the original image into a second-style image of the second image style, and determining the second special effect image of the second image style corresponding to the second segmented region from the second-style image.
Similarly, segmenting the second image area in the original image to obtain the second segmented image may include: acquiring the pixel value and position information of each pixel point at the edge of the second image area, and copying each pixel point of the second image area into a blank image according to its pixel value and position information, generating a second segmented image identical to the second image area; or determining the coordinates of each pixel point of the second image area in the original image, determining the position of the second image area in the original image from those coordinates, and segmenting the original image at that position to obtain the second segmented image corresponding to the second image area.
It will be appreciated that, if the second image area is the image area of the original image other than the first image area, the first image area may be segmented out of the original image and the remaining image area used as the second segmented image.
In the embodiment of the present disclosure, the original image may also be processed first to obtain a special effect image of the second image style corresponding to the whole original image, after which image segmentation is performed to determine the second special effect image. Optionally, the second image style corresponding to the second image area is determined, special effect processing is performed on the original image according to that style to convert it into a second-style image of the second image style, the first segmented region is acquired, the second segmented region (the original image excluding the first segmented region) is determined, and the second-style image is segmented by the second segmented region to determine the second special effect image.
Optionally, in another embodiment of the present disclosure, converting the original image into a second-style image of the second image style includes: converting the original image into the second-style image based on a style conversion algorithm corresponding to the second image style, wherein the style conversion algorithm includes at least one of bilateral filtering, edge detection, color block quantization, and filter processing.
Specifically, an original image is obtained, and style conversion is carried out on the original image according to a style conversion algorithm, so that a second style image of a second image style is generated.
Take as an example a style conversion algorithm consisting of bilateral filtering, edge detection, color block quantization, and filter processing. First, the original image is denoised by a bilateral filtering algorithm to generate a denoised image. Second, edge detection is performed on the denoised image to obtain an edge feature image, and the denoised image is feature-processed according to that edge feature image to obtain a feature-processed image. Third, color block quantization is performed on the feature-processed image by a color block quantization algorithm to generate a color-quantized image. Fourth, filter processing is applied to the color-quantized image according to a filter algorithm to generate the second-style image of the second image style.
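Of the four steps, color block quantization is the most self-contained; a minimal sketch (NumPy posterization, illustrative rather than the patent's exact algorithm) is:

```python
import numpy as np

def quantize_color_blocks(img, levels=8):
    """Snap every channel value to one of `levels` evenly spaced values
    (posterization), producing the flat color blocks characteristic of a
    comic-like second style. `img` is a float array in [0, 1]."""
    step = 1.0 / (levels - 1)
    return np.round(img / step) * step

quantized = quantize_color_blocks(np.linspace(0.0, 1.0, 100), levels=4)
```

Fewer levels give larger, flatter color blocks; the bilateral filtering step before it keeps the blocks smooth while edges stay sharp.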
Optionally, in another embodiment of the present disclosure, the second image area is the image area of the original image other than the first image area, and determining the second special effect image of the second image style corresponding to the second segmented region from the second-style image includes: acquiring the region position information corresponding to the first special effect image, determining the region to be matted out in the second-style image according to that position information, and setting the pixel value of each pixel point in the region to be matted out to a preset value, thereby obtaining the second special effect image of the second image style corresponding to the second segmented region.
The area position information may be UV coordinate information of each pixel point in the first image area corresponding to the first special effect image. The preset value corresponding to each pixel point in the region to be scratched can be the same or different. Typically, the preset value may be a pixel value of 0.
Specifically, the first image area corresponding to the first special effect image is obtained, and the UV coordinates of each pixel point of the first image area in the original image are determined. From these, the UV coordinates of each pixel point of the first special effect image in the second-style image are determined and taken as the region position information corresponding to the first special effect image. The region to be cut out in the second-style image is then determined according to the region position information, and the pixel value of each pixel point in the region to be cut out is set to the preset value, thereby obtaining the second special effect image of the second image style corresponding to the second segmentation area.
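The cut-out step described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the patent's implementation: the function name `cut_out_region` and the representation of the region position information as `(row, col)` index arrays are assumptions standing in for the UV-coordinate bookkeeping described in the text.

```python
import numpy as np

def cut_out_region(style_image, region_coords, preset_value=0):
    """Set every pixel of the region to be cut out in the second-style
    image to the preset value (0 by default), leaving the rest intact."""
    rows, cols = region_coords          # assumed (row, col) form of the region position info
    result = style_image.copy()
    result[rows, cols] = preset_value   # zero out the region occupied by the first effect image
    return result
```

Applied to the second-style image, this leaves a "hole" into which the first special effect image is later composited.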
S240, generating a target special effect image according to the first special effect image and the second special effect image.
Specifically, after the first special effect image and the second special effect image are obtained, the first special effect image and the second special effect image are subjected to image fusion, and a target special effect image is generated.
S250, when it is detected that the preset display condition is met, displaying a target special effect image corresponding to the original image, wherein the first image area in the target special effect image is displayed in the first image style and the second image area is displayed in the second image style.
According to the embodiment of the disclosure, the first image area of the original image is segmented, the first segmented image generated by the segmentation is style-converted by the style conversion model, and the corresponding transparency is set to generate the first special effect image. Processing the first image area with a dedicated model reduces the amount of computation and concentrates the model's capacity on the first image area, improving the display effect of the first special effect image. The second image area of the original image is segmented according to the first image area, and the second segmented image is further processed by a style conversion algorithm to generate the second special effect image, so that the style of the second image area in the original image is clearer and its colors closer to reality, which improves the display effect of the second special effect image and, in turn, that of the target special effect image generated from the first and second special effect images.
Fig. 3 is a flowchart of another method for displaying a special effect image according to an embodiment of the present disclosure, which further describes how the first special effect image and the second special effect image are fused. As shown in fig. 3, the method includes:
S310, responding to a special effect triggering operation, and acquiring an original image, wherein the original image at least comprises a first image area and a second image area.
S320, dividing a first image area in the original image to obtain a first divided image, and converting the first divided image into a first special effect image of a first image style.
S330, converting the original image into a second-style image of a second image style, acquiring region position information corresponding to the first special effect image, determining a region to be cut out in the second-style image according to the region position information, and setting the pixel value of each pixel point in the region to be cut out to a preset value, so as to obtain a second special effect image of the second image style corresponding to the second segmentation area.
S340, determining, according to the region position information corresponding to the first special effect image, the to-be-processed pixel points in the to-be-cut-out area of the second special effect image that correspond to each special effect pixel point in the first special effect image.
Specifically, the first image area corresponding to the first special effect image is obtained, and the UV coordinates of each pixel point of the first image area in the original image are determined and recorded. Further, the UV coordinates of each pixel point of the first special effect image in the second-style image can be determined from the UV coordinates of each pixel point of the first image area in the original image and taken as the region position information corresponding to the first special effect image, and the to-be-processed pixel point in the to-be-cut-out area of the second special effect image corresponding to each special effect pixel point in the first special effect image is determined according to the region position information.
S350, fusing the pixel value of the pixel to be processed with the pixel value of the special effect pixel corresponding to the pixel to be processed for each pixel to be processed to generate a target special effect image.
Specifically, after the pixel points to be processed are determined, the pixel value of each pixel point to be processed and the pixel value of its corresponding special effect pixel point are obtained, and the two are fused to generate the target special effect image.
Optionally, in another embodiment of the present invention, the fusing the pixel value of the pixel to be processed with the pixel value of the special effect pixel corresponding to the pixel to be processed includes: acquiring a target weighted image corresponding to the first special effect image, wherein the pixel value of each pixel point in the target weighted image is the fusion weight of each special effect pixel point in the first special effect image; and carrying out weighting processing on the pixel value of the pixel point to be processed based on the fusion weight, and fusing the pixel value after the weighting processing with the pixel value of the special effect pixel point corresponding to the pixel point to be processed.
Specifically, the first special effect image is obtained, the fusion weight of each special effect pixel point in the first special effect image is determined from it, and these fusion weights are used as the pixel values of the corresponding pixel points in the target weighted image, thereby generating the target weighted image. The pixel value of each pixel point of the target weighted image is then used as a fusion weight to weight the pixel value of the corresponding pixel point to be processed, and the weighted pixel value is fused with the pixel value of the special effect pixel point corresponding to that pixel point to be processed. The fusion may be performed by adding or multiplying the weighted pixel value and the pixel value of the corresponding special effect pixel point.
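A hedged NumPy sketch of the weighted fusion follows. The convex combination `(1 - w) * bg + w * fg` is one common reading of "weighting the pixel to be processed and combining it with the special effect pixel by addition"; the function names and the 0-to-1 weight range are assumptions, and the patent also allows a multiplicative fusion.

```python
import numpy as np

def fuse_with_weights(effect_fg, to_process_bg, weight_map):
    """Blend the first special effect image (effect_fg) with the pixels to
    be processed of the second special effect image (to_process_bg), using
    the target weighted image (weight_map, grayscale in [0, 1]) as the
    per-pixel fusion weight."""
    w = weight_map[..., None].astype(np.float32)   # broadcast weight over RGB channels
    fused = (1.0 - w) * to_process_bg.astype(np.float32) + w * effect_fg.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

A weight map that falls off smoothly toward the region boundary feathers the seam between the two styles instead of leaving a hard cut.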
S360, when it is detected that the preset display condition is met, displaying a target special effect image corresponding to the original image, wherein the first image area in the target special effect image is displayed in the first image style and the second image area is displayed in the second image style.
According to the embodiment of the disclosure, the first image area of the original image is segmented, and the first segmented image generated by the segmentation is style-converted by the style conversion model to generate the first special effect image; the second image area of the original image is segmented according to the first image area, and the second segmented image is processed by a style conversion algorithm to generate the second special effect image. The pixel points to be processed in the second special effect image are determined according to the region position information corresponding to the first special effect image, the target weighted image of the first special effect image is determined, the pixel value of each pixel point in the target weighted image is used as a fusion weight for weighting, and the weighted pixel value is fused with the pixel value of the corresponding special effect pixel point. Using the pixel values of the target weighted image as fusion weights prevents the transition region of the fused image from showing excessive differences caused by differing transparency, so that the transition blends naturally, further improving the display effect of the special effect image.
Fig. 4 is a flowchart of an alternative example of a method for displaying a special effect image according to an embodiment of the disclosure, as shown in fig. 4, where the method includes:
S410, responding to the special effect triggering operation, and acquiring an original image, wherein the original image at least comprises a first image area and a second image area.
S420, dividing a first image area in the original image to obtain a first divided image, and converting the first divided image into a first special effect image of a first image style.
Specifically, after the original image is obtained, the original image is detected and sampled to determine the UV coordinates of each pixel point of the first image area in the original image, and these UV coordinates are taken as region key points. The position of the first image area in the original image is determined according to all the region key points, the RGB three-channel values of the pixel points corresponding to the region key points in the first image area are obtained, and the original image is then segmented according to these RGB three-channel values to obtain the first segmented image corresponding to the first image area.
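The segmentation step can be illustrated with a simplified NumPy sketch. In practice the region key points would bound an arbitrary region; here an axis-aligned bounding box built from the key points stands in for that region (an assumption made purely to keep the sketch self-contained), and pixels outside it are zeroed.

```python
import numpy as np

def segment_first_area(image, keypoints):
    """Cut the first segmented image out of the original image using the
    region key points; keypoints is a list of (row, col) positions."""
    rows = [p[0] for p in keypoints]
    cols = [p[1] for p in keypoints]
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[min(rows):max(rows) + 1, min(cols):max(cols) + 1] = True  # bounding-box stand-in
    segmented = np.zeros_like(image)
    segmented[mask] = image[mask]      # keep the RGB three-channel values inside the region
    return segmented, mask
```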
Further, the first segmented image is input into a pre-trained style conversion model and converted into an initial converted image of the first image style; the alpha value of the alpha channel of the initial converted image is obtained, and transparency is assigned to the initial converted image according to the alpha value, thereby obtaining the first special effect image.
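Assigning transparency from the alpha channel might look like the following sketch. Representing the result as alpha-premultiplied RGB is an assumption; the patent only states that transparency is assigned according to the alpha value.

```python
import numpy as np

def apply_transparency(initial_rgba):
    """Take the alpha value of the alpha channel of the initial converted
    image and apply it as per-pixel transparency (premultiplied RGB)."""
    rgb = initial_rgba[..., :3].astype(np.float32)
    alpha = initial_rgba[..., 3:4].astype(np.float32) / 255.0  # normalize alpha to [0, 1]
    return (rgb * alpha).astype(np.uint8)
```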
S430, converting the original image into a second-style image of a second image style based on a style conversion algorithm corresponding to the second image style.
Specifically, the style conversion algorithm comprises bilateral filtering, edge detection, color block quantization, and filter processing. After the original image is input, bilateral filtering is applied: a horizontally blurred image is generated by horizontal blurring, and the horizontally blurred image is then blurred vertically to generate a blurred image. Edge detection is performed on the blurred image to generate an image edge picture; color block quantization is performed on the image edge picture and the blurred image to generate a color block picture; finally, filter brightening is applied to the color block picture to generate the second-style image of the second image style.
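The four-stage pipeline above can be sketched with NumPy alone. A separable box blur stands in for the bilateral filter and a gradient threshold for the edge detector (both simplifications, since faithful implementations are much longer); the quantization levels, edge threshold, and brightening factor are illustrative values, not ones taken from the patent.

```python
import numpy as np

def blur_1d(img, k, axis):
    """One pass of a separable box blur along the given axis
    (axis=1: horizontal blurring, axis=0: vertical blurring)."""
    pad = [(0, 0)] * img.ndim
    pad[axis] = (k // 2, k // 2)
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for i in range(k):
        out += np.take(padded, range(i, i + img.shape[axis]), axis=axis)
    return out / k

def stylize(img, levels=8, brighten=1.1):
    """Blur horizontally then vertically, detect edges, quantize color
    blocks, darken the edges, and apply a brightening filter."""
    blurred = blur_1d(blur_1d(img, 5, axis=1), 5, axis=0)
    gy, gx = np.gradient(blurred.mean(axis=-1))          # crude edge strength
    edges = np.hypot(gx, gy) > 10
    step = 256 // levels
    quantized = (blurred // step) * step + step / 2      # color block quantization
    quantized[edges] = 0                                 # draw the image edge picture
    return np.clip(quantized * brighten, 0, 255).astype(np.uint8)
```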
S440, acquiring region position information corresponding to the first special effect image, determining a region to be cut out in the second-style image according to the region position information, and setting the pixel value of each pixel point in the region to be cut out to 0, so as to obtain a second special effect image of the second image style corresponding to the second segmentation area.
As shown in fig. 5, specifically, before the filter brightening is applied to the color block picture, the region position information corresponding to the first special effect image is obtained, the region to be cut out in the second-style image is determined according to the region position information, and the pixel value of each pixel point in the region to be cut out is set to the preset value; the filter brightening is then applied to the color block picture to obtain the second special effect image of the second image style corresponding to the second segmentation area.
S450, determining, according to the region position information corresponding to the first special effect image, the to-be-processed pixel points in the to-be-cut-out area of the second special effect image that correspond to each special effect pixel point in the first special effect image.
S460, acquiring a target weighted image corresponding to the first special effect image, wherein the pixel value of each pixel point in the target weighted image is the fusion weight of each special effect pixel point in the first special effect image.
In the embodiment of the disclosure, as shown in fig. 6, the target weighted image may be a grayscale image, which may be used to blur the boundary between the first special effect image and the second special effect image so that the stitched images transition naturally.
And S470, for each pixel to be processed, carrying out weighting processing on the pixel value of the pixel to be processed based on the fusion weight, and fusing the pixel value after the weighting processing with the pixel value of the special effect pixel corresponding to the pixel to be processed.
S480, when it is detected that the preset display condition is met, displaying a target special effect image corresponding to the original image, wherein the first image area in the target special effect image is displayed in the first image style and the second image area is displayed in the second image style.
According to the above method, an original image is acquired in response to a special effect triggering operation, the original image comprising a first image area and a second image area; corresponding special effect rendering is performed on the first image area and the second image area respectively, the first image area being rendered in a first image style and the second image area in a second image style; and when it is detected that the preset display condition is met, the target special effect image is displayed with the first image area in the first image style and the second image area in the second image style. This solves the technical problem in the prior art that superimposing renderings of different display styles leaves an obvious sense of segmentation between different areas of the rendered special effect image; the details and styles of the image in each area are fully preserved and transition into one another, different display styles are displayed in fusion, and the display effect of the special effect image is improved.
Fig. 7 is a schematic structural diagram of a display device for special effect images according to an embodiment of the present disclosure, where, as shown in fig. 7, the device includes: an original image acquisition module 510 and a special effects image presentation module 520.
The original image obtaining module 510 is configured to obtain an original image in response to a special effect triggering operation, where the original image includes at least a first image area and a second image area;
and the special effect image display module 520 is configured to display a target special effect image corresponding to the original image when a preset display condition is detected, where the first image area in the target special effect image is displayed in a first image style and the second image area is displayed in a second image style.
According to the above device, an original image is acquired in response to a special effect triggering operation, the original image comprising a first image area and a second image area; corresponding special effect rendering is performed on the first image area and the second image area respectively, the first image area being rendered in a first image style and the second image area in a second image style; and when it is detected that the preset display condition is met, the target special effect image is displayed with the first image area in the first image style and the second image area in the second image style. This solves the technical problem in the prior art that superimposing renderings of different display styles leaves an obvious sense of segmentation between different areas of the rendered special effect image; the details and styles of the image in each area are fully preserved and transition into one another, different display styles are displayed in fusion, and the display effect of the special effect image is improved.
Optionally, the special effects image display module 520 further includes:
the first image segmentation conversion module is used for segmenting a first image area in the original image to obtain a first segmented image and converting the first segmented image into a first special effect image in a first image style;
the second image segmentation conversion module is used for generating a second special effect image of a second image style corresponding to the second segmentation area;
and the target special effect image generation module is used for generating a target special effect image according to the first special effect image and the second special effect image.
Optionally, the first image segmentation conversion module is specifically configured to:
and carrying out image segmentation on the original image according to the region key points of the first image region to obtain a first segmented image corresponding to the first image region.
Optionally, the first image segmentation conversion module is specifically further configured to:
and converting the first segmentation image into an initial conversion image of a first image style based on a pre-trained style conversion model, and endowing the initial conversion image with transparency to obtain a first special effect image.
Optionally, the second image segmentation conversion module is specifically configured to:
Dividing a second image area in the original image to obtain a second divided image, and converting the second divided image into a second special effect image in a second image style; or,
and converting the original image into a second-style image of a second image style, and determining a second special effect image of the second image style corresponding to the second segmentation area according to the second-style image.
Optionally, the second image segmentation conversion module is specifically further configured to:
the original image is converted into a second-style image of a second image style based on a style conversion algorithm corresponding to the second image style, wherein the style conversion algorithm includes at least one of bilateral filtering processing, edge detection processing, color block quantization processing, and filter processing.
Optionally, the second image segmentation conversion module is specifically further configured to:
the determining a second special effect image of a second image style corresponding to the second segmentation area according to the second style image comprises the following steps:
and acquiring region position information corresponding to the first special effect image, determining a region to be cut out in the second-style image according to the region position information, and setting the pixel value of each pixel point in the region to be cut out to a preset value, so as to obtain a second special effect image of a second image style corresponding to the second segmentation area.
Optionally, the target special effect image generating module is specifically configured to:
determining, according to the region position information corresponding to the first special effect image, to-be-processed pixel points in the to-be-cut-out area of the second special effect image corresponding to each special effect pixel point in the first special effect image;
and fusing the pixel value of the pixel point to be processed with the pixel value of the special effect pixel point corresponding to the pixel point to be processed aiming at each pixel point to be processed so as to generate a target special effect image.
Optionally, the target special effect image generating module is specifically further configured to:
acquiring a target weighted image corresponding to the first special effect image, wherein the pixel value of each pixel point in the target weighted image is the fusion weight of each special effect pixel point in the first special effect image;
and carrying out weighting processing on the pixel value of the pixel point to be processed based on the fusion weight, and fusing the pixel value after the weighting processing with the pixel value of the special effect pixel point corresponding to the pixel point to be processed.
The special effect image display device provided by the embodiment of the disclosure can execute the special effect image display method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 8, a schematic diagram of an electronic device (e.g., a terminal device or server in fig. 8) 500 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present disclosure and the method for displaying a special effect image provided by the foregoing embodiment belong to the same inventive concept, and technical details not described in detail in the present embodiment may be referred to the foregoing embodiment, and the present embodiment has the same beneficial effects as the foregoing embodiment.
The embodiment of the present disclosure provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method for displaying a special effect image provided by the above embodiment.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to a special effect triggering operation, and acquiring an original image, wherein the original image at least comprises a first image area and a second image area;
when the preset display condition is detected to be reached, displaying a target special effect image corresponding to the original image;
the first image area in the target special effect image is displayed in a first image style, and the second image area in the target special effect image is displayed in a second image style.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. The name of a unit does not in any way constitute a limitation of the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a method of displaying a special effect image, the method including:
acquiring an original image in response to a special effect triggering operation, wherein the original image includes at least a first image area and a second image area;
displaying a target special effect image corresponding to the original image upon detecting that a preset display condition is met;
wherein the first image area in the target special effect image is displayed in a first image style, and the second image area in the target special effect image is displayed in a second image style.
According to one or more embodiments of the present disclosure, there is provided a method for displaying a special effect image, the method including:
optionally, before the target special effect image corresponding to the original image is displayed, the method further includes:
segmenting a first image area in the original image to obtain a first segmented image, and converting the first segmented image into a first special effect image of a first image style;
generating a second special effect image of a second image style corresponding to the second segmentation area; and
generating a target special effect image according to the first special effect image and the second special effect image.
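The three generation steps above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the two style functions are hypothetical stand-ins (channel inversion and coarse quantization), and the first image area is assumed to be given as a boolean mask.

```python
import numpy as np

def stylize_first(region: np.ndarray) -> np.ndarray:
    """Stand-in for the first image style (e.g. a trained style model).
    Here: simple channel inversion, purely illustrative."""
    return 255 - region

def stylize_second(image: np.ndarray) -> np.ndarray:
    """Stand-in for the second image style: coarse colour quantization."""
    return (image // 64) * 64 + 32

def compose_target(original: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Style each part and recombine: first area in the first style,
    the rest of the image in the second style."""
    first_fx = stylize_first(original)    # first special effect image
    second_fx = stylize_second(original)  # second special effect image
    out = np.where(mask[..., None], first_fx, second_fx)
    return out.astype(np.uint8)

# toy 4x4 RGB image; the top-left quadrant plays the "first image area"
img = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
target = compose_target(img, mask)
```

In the masked quadrant the inverted style applies (255 − 100 = 155); elsewhere the quantized style applies.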
According to one or more embodiments of the present disclosure, there is provided a method of displaying a special effect image, the method including:
optionally, the segmenting of the first image area in the original image to obtain a first segmented image includes:
performing image segmentation on the original image according to region key points of the first image area to obtain a first segmented image corresponding to the first image area.
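A simplified sketch of keypoint-based segmentation. The helper below is a hypothetical illustration that treats the keypoints' axis-aligned bounding box as the segmented region; a real system would instead fill the keypoint polygon or run a segmentation model.

```python
import numpy as np

def segment_by_keypoints(image: np.ndarray,
                         keypoints: list[tuple[int, int]]) -> np.ndarray:
    """Cut out the first image area using region key points (row, col).
    Simplification: the region is the keypoints' bounding box; pixels
    outside it are zeroed so only the first image area survives."""
    ys = [y for y, x in keypoints]
    xs = [x for y, x in keypoints]
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = True
    segmented = np.where(mask[..., None], image, 0)
    return segmented.astype(image.dtype)

img = np.full((6, 6, 3), 200, dtype=np.uint8)
first_segmented = segment_by_keypoints(img, [(1, 1), (1, 4), (4, 4), (4, 1)])
```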
According to one or more embodiments of the present disclosure, there is provided a method for displaying a special effect image, the method including:
optionally, the converting of the first segmented image into the first special effect image of the first image style includes:
converting the first segmented image into an initial converted image of the first image style based on a pre-trained style conversion model, and applying transparency to the initial converted image to obtain the first special effect image.
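The transparency step can be sketched like this. The "style conversion model" below is a placeholder (inversion), and deriving the alpha channel from non-zero pixels is an assumption about how the segmented background is marked; the patent does not specify either detail.

```python
import numpy as np

def to_first_effect(segmented: np.ndarray) -> np.ndarray:
    """Convert the first segmented image into the first-style effect
    image, then attach an alpha channel so that everything outside the
    segmented region stays transparent when overlaid later."""
    styled = 255 - segmented  # placeholder for a trained style model
    # alpha = 255 where the segmented region has content, 0 elsewhere
    alpha = np.where(segmented.any(axis=-1), 255, 0).astype(np.uint8)
    return np.dstack([styled, alpha])  # H x W x 4 RGBA image

seg = np.zeros((4, 4, 3), dtype=np.uint8)
seg[1:3, 1:3] = 120  # the segmented first image area
rgba = to_first_effect(seg)
```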
According to one or more embodiments of the present disclosure, there is provided a method for displaying a special effect image, the method including:
optionally, the generating of the second special effect image of the second image style corresponding to the second segmentation area includes:
segmenting a second image area in the original image to obtain a second segmented image, and converting the second segmented image into a second special effect image of the second image style; or
converting the original image into a second-style image of the second image style, and determining the second special effect image of the second image style corresponding to the second segmentation area according to the second-style image.
According to one or more embodiments of the present disclosure, there is provided a method for displaying a special effect image, the method including:
optionally, the converting of the original image into the second-style image of the second image style includes:
converting the original image into the second-style image of the second image style based on a style conversion algorithm corresponding to the second image style, wherein the style conversion algorithm includes at least one of bilateral filtering, edge detection, color block quantization, and filter processing.
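One concrete instance of such a style conversion algorithm, combining two of the listed operations: colour block quantization plus a crude edge-darkening pass. This is an illustrative assumption about how the operations might be combined; a production pipeline would typically also smooth with a bilateral filter (e.g. OpenCV's `cv2.bilateralFilter`) before quantizing.

```python
import numpy as np

def cartoonize(image: np.ndarray, levels: int = 4) -> np.ndarray:
    """Cartoon-like second style: quantize colours into blocks, then
    darken pixels that sit on a strong horizontal intensity gradient."""
    step = 256 // levels
    quantized = (image // step) * step + step // 2  # colour block quantization
    # crude edge detection: strong horizontal gradient of the grayscale image
    gray = image.mean(axis=-1)
    grad = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    edges = grad > 32
    quantized[edges] = 0  # draw dark outlines at detected edges
    return quantized.astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, 2:] = 200  # sharp vertical boundary between two colour blocks
second_style = cartoonize(img)
```

With four levels, value 0 quantizes to 32, value 200 to 224, and the boundary column is outlined in black.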
According to one or more embodiments of the present disclosure, there is provided a method of displaying a special effect image, the method including:
optionally, the second image area is the image area of the original image other than the first image area;
the determining of the second special effect image of the second image style corresponding to the second segmentation area according to the second-style image includes:
acquiring region position information corresponding to the first special effect image, determining a region to be matted in the second-style image according to the region position information, and setting the pixel value of each pixel point in the region to be matted to a preset value, to obtain the second special effect image of the second image style corresponding to the second segmentation area.
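The matting step above can be sketched as follows. Representing the region position information as a rectangular box is an assumption for illustration; the information could equally be a mask or polygon.

```python
import numpy as np

def matte_out(second_style: np.ndarray,
              region_box: tuple[int, int, int, int],
              preset: int = 0) -> np.ndarray:
    """Punch the first effect image's region out of the second-style
    image: every pixel inside the to-be-matted region is set to a
    preset value. region_box = (top, left, bottom, right) is an assumed
    encoding of the 'region position information'."""
    top, left, bottom, right = region_box
    out = second_style.copy()  # keep the original second-style image intact
    out[top:bottom, left:right] = preset
    return out

styled = np.full((5, 5, 3), 224, dtype=np.uint8)
second_fx = matte_out(styled, (1, 1, 4, 4))
```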
According to one or more embodiments of the present disclosure, there is provided a method for displaying a special effect image, the method including:
optionally, the generating of the target special effect image according to the first special effect image and the second special effect image includes:
determining, according to the region position information corresponding to the first special effect image, the to-be-processed pixel point in the to-be-matted region of the second special effect image that corresponds to each special effect pixel point in the first special effect image;
for each to-be-processed pixel point, fusing the pixel value of the to-be-processed pixel point with the pixel value of its corresponding special effect pixel point, to generate the target special effect image.
According to one or more embodiments of the present disclosure, there is provided a method for displaying a special effect image, the method including:
optionally, the fusing of the pixel value of the to-be-processed pixel point with the pixel value of the corresponding special effect pixel point includes:
acquiring a target weighted image corresponding to the first special effect image, wherein the pixel value of each pixel point in the target weighted image is the fusion weight of the corresponding special effect pixel point in the first special effect image;
weighting the pixel value of the to-be-processed pixel point based on the fusion weight, and fusing the weighted pixel value with the pixel value of the corresponding special effect pixel point.
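The weighted fusion reads like ordinary per-pixel alpha blending with the target weighted image as the weight map; the sketch below is one interpretation of the text, not necessarily the patent's exact formula.

```python
import numpy as np

def fuse(second_fx: np.ndarray, first_fx: np.ndarray,
         weight_map: np.ndarray) -> np.ndarray:
    """Blend the first special effect image into the matted area of the
    second. weight_map holds, per pixel, the fusion weight of the first
    effect pixel (0-255); the to-be-processed pixel of the second effect
    image keeps the complementary weight."""
    w = weight_map[..., None].astype(np.float32) / 255.0
    blended = (1.0 - w) * second_fx + w * first_fx
    return blended.astype(np.uint8)

bg = np.full((2, 2, 3), 100, dtype=np.uint8)    # second special effect image
fg = np.full((2, 2, 3), 200, dtype=np.uint8)    # first special effect image
weights = np.full((2, 2), 128, dtype=np.uint8)  # target weighted image
target = fuse(bg, fg, weights)
```

A weight near half (128/255) lands each output pixel roughly midway between the two source values.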
According to one or more embodiments of the present disclosure, there is provided a display apparatus for a special effect image [Example Ten], the apparatus including:
an original image acquisition module, configured to acquire an original image in response to a special effect triggering operation, wherein the original image includes at least a first image area and a second image area; and
a special effect image display module, configured to display a target special effect image corresponding to the original image upon detecting that a preset display condition is met, wherein the first image area in the target special effect image is displayed in a first image style and the second image area is displayed in a second image style.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (12)

1. A display method of a special effect image, characterized by comprising:
acquiring an original image in response to a special effect triggering operation, wherein the original image comprises at least a first image area and a second image area;
displaying a target special effect image corresponding to the original image upon detecting that a preset display condition is met;
wherein the first image area in the target special effect image is displayed in a first image style, and the second image area in the target special effect image is displayed in a second image style.
2. The display method of a special effect image according to claim 1, further comprising, before the displaying of the target special effect image corresponding to the original image:
segmenting a first image area in the original image to obtain a first segmented image, and converting the first segmented image into a first special effect image of a first image style;
generating a second special effect image of a second image style corresponding to the second segmentation area; and
generating a target special effect image according to the first special effect image and the second special effect image.
3. The display method of a special effect image according to claim 2, wherein the segmenting of the first image area in the original image to obtain a first segmented image comprises:
performing image segmentation on the original image according to region key points of the first image area to obtain a first segmented image corresponding to the first image area.
4. The display method of a special effect image according to claim 2, wherein the converting of the first segmented image into a first special effect image of a first image style comprises:
converting the first segmented image into an initial converted image of the first image style based on a pre-trained style conversion model, and applying transparency to the initial converted image to obtain the first special effect image.
5. The display method of a special effect image according to claim 2, wherein the generating of a second special effect image of a second image style corresponding to the second segmentation area comprises:
segmenting a second image area in the original image to obtain a second segmented image, and converting the second segmented image into a second special effect image of the second image style; or
converting the original image into a second-style image of the second image style, and determining the second special effect image of the second image style corresponding to the second segmentation area according to the second-style image.
6. The display method of a special effect image according to claim 5, wherein the converting of the original image into a second-style image of the second image style comprises:
converting the original image into the second-style image of the second image style based on a style conversion algorithm corresponding to the second image style, wherein the style conversion algorithm comprises at least one of bilateral filtering, edge detection, color block quantization, and filter processing.
7. The display method of a special effect image according to claim 5, wherein the second image area is the image area of the original image other than the first image area;
the determining of the second special effect image of the second image style corresponding to the second segmentation area according to the second-style image comprises:
acquiring region position information corresponding to the first special effect image, determining a region to be matted in the second-style image according to the region position information, and setting the pixel value of each pixel point in the region to be matted to a preset value, to obtain the second special effect image of the second image style corresponding to the second segmentation area.
8. The display method of a special effect image according to claim 7, wherein the generating of a target special effect image according to the first special effect image and the second special effect image comprises:
determining, according to the region position information corresponding to the first special effect image, the to-be-processed pixel point in the to-be-matted region of the second special effect image that corresponds to each special effect pixel point in the first special effect image;
for each to-be-processed pixel point, fusing the pixel value of the to-be-processed pixel point with the pixel value of its corresponding special effect pixel point, to generate the target special effect image.
9. The display method of a special effect image according to claim 8, wherein the fusing of the pixel value of the to-be-processed pixel point with the pixel value of the corresponding special effect pixel point comprises:
acquiring a target weighted image corresponding to the first special effect image, wherein the pixel value of each pixel point in the target weighted image is the fusion weight of the corresponding special effect pixel point in the first special effect image;
weighting the pixel value of the to-be-processed pixel point based on the fusion weight, and fusing the weighted pixel value with the pixel value of the corresponding special effect pixel point.
10. A display apparatus for a special effect image, characterized by comprising:
an original image acquisition module, configured to acquire an original image in response to a special effect triggering operation, wherein the original image comprises at least a first image area and a second image area; and
a special effect image display module, configured to display a target special effect image corresponding to the original image upon detecting that a preset display condition is met, wherein the first image area in the target special effect image is displayed in a first image style and the second image area is displayed in a second image style.
11. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the display method of a special effect image according to any one of claims 1-9.
12. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the display method of a special effect image according to any one of claims 1-9.
CN202210750504.XA 2022-06-28 2022-06-28 Display method and device of special effect image, electronic equipment and storage medium Pending CN117372240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210750504.XA CN117372240A (en) 2022-06-28 2022-06-28 Display method and device of special effect image, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210750504.XA CN117372240A (en) 2022-06-28 2022-06-28 Display method and device of special effect image, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117372240A true CN117372240A (en) 2024-01-09

Family

ID=89391528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210750504.XA Pending CN117372240A (en) 2022-06-28 2022-06-28 Display method and device of special effect image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117372240A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination