CN110689478B - Image stylization processing method and device, electronic equipment and readable medium - Google Patents


Info

Publication number: CN110689478B
Application number: CN201910910696.4A
Authority: CN (China)
Prior art keywords: image, background, processed, stylized, complexity
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN110689478A
Inventor: 贾靖
Current and original assignee: Beijing ByteDance Network Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority: CN201910910696.4A; PCT/CN2020/113974 (WO2021057463A1)
Publication of CN110689478A; publication of CN110689478B; application granted

Classifications

    • G06T3/04
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

This disclosure provides an image stylization processing method and apparatus, an electronic device, and a readable medium. The method includes: if the background complexity of an image to be processed is determined to be below a complexity threshold, replacing the original background of the image with a template background to obtain a target image; inputting the target image into a stylization model to obtain a preliminary stylized image; and replacing the background of the preliminary stylized image with the original background to obtain the final stylized image. By taking the complexity of the image background into account during stylization, the scheme of the disclosed embodiments improves the fineness and visual appeal of the stylization result.

Description

Image stylization processing method and device, electronic equipment and readable medium
Technical Field
Embodiments of the present disclosure relate to the field of internet technology, and in particular to an image stylization processing method and apparatus, an electronic device, and a readable medium.
Background
Most existing terminal devices provide a photographing function. To meet end users' demand for personalization, this photographing function is usually paired with image processing capabilities so that images captured by the user can be stylized.
At present, a terminal device stylizes an image by applying a stylization algorithm, via a neural network model, directly to the whole image. However, because different images differ in content complexity, this approach struggles to satisfy the stylization requirements of images of varying complexity: the stylization result is often coarse, which degrades its visual appeal. Improvement is therefore needed.
Disclosure of Invention
Embodiments of the present disclosure provide an image stylization processing method and apparatus, an electronic device, and a readable medium that take the complexity of the image background into account during stylization, improving the fineness and visual appeal of the stylization result.
In a first aspect, an embodiment of the present disclosure provides an image stylization processing method, including:
if the background complexity of the image to be processed is determined to be lower than the complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image;
inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
In a second aspect, an embodiment of the present disclosure further provides an image stylized processing apparatus, including:
the background replacing module is used for replacing the original background of the image to be processed with the template background if the background complexity of the image to be processed is lower than the complexity threshold value, so as to obtain a target image;
the stylized processing module is used for inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
the background replacing module is further configured to replace the background of the preliminary stylized image with the original background, so as to obtain a final stylized image.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stylization processing method as described in any embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments provide a readable medium having stored thereon a computer program which, when executed by a processor, implements the image stylization processing method according to any of the embodiments of the disclosure.
Embodiments of the present disclosure provide an image stylization processing method and apparatus, an electronic device, and a readable medium. The method determines whether the background complexity of an image to be processed is below a complexity threshold; if so, the original background of the image is replaced with a preset template background to obtain a target image. After the target image is stylized by the stylization model, the background of the resulting preliminary stylized image is replaced with the original background to obtain the final stylized image. By considering background complexity during stylization, and by swapping in a template background before stylizing images whose background complexity is low, the scheme avoids the coarse results that existing stylization methods produce on such images, improving the fineness and visual appeal of the stylization result.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1A and Fig. 1B are schematic diagrams showing the effect of stylizing an image to be processed in the prior art;
Fig. 2A is a flowchart of an image stylization processing method according to the first embodiment of the present disclosure;
Fig. 2B to Fig. 2E are schematic diagrams showing the effect of stylizing an image to be processed according to the first embodiment of the present disclosure;
Fig. 3 is a flowchart of another image stylization processing method according to the second embodiment of the present disclosure;
Fig. 4 is a flowchart of another image stylization processing method according to the third embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an image stylization processing apparatus according to the fourth embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device according to the fifth embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit its scope of protection.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise. The names of messages or information exchanged between parties in the disclosed embodiments are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before describing the embodiments of the present disclosure, an application scenario of the image stylization processing method, apparatus, electronic device, and readable medium is described. Specifically, the embodiments are suitable for stylizing images captured by a user with an electronic device (such as a mobile phone, tablet computer, wearable device, or camera), or stored in the device's gallery, so as to generate personalized images for the user. When stylizing an image to be processed, the whole image is usually stylized by a pre-trained neural network model; in that case, when the complexity of the background portion of the image is relatively low, the stylization result tends to be coarse. For example, Fig. 1A is an image to be processed with low background complexity, and Fig. 1B is the effect of applying cartoon-style processing to the image of Fig. 1A with a conventional method: the overall effect is poor and seriously degrades the visual appeal of the cartoon styling. The existing image stylization method therefore needs improvement.
The image stylization processing method, apparatus, electronic device, and readable medium provided by the present disclosure are described in detail in the following embodiments.
Example 1
Fig. 2A is a flowchart of an image stylization processing method according to the first embodiment of the present disclosure; Fig. 2B to Fig. 2E are schematic diagrams showing the effect of stylizing an image to be processed according to the first embodiment. This embodiment is applicable to stylizing an image to be processed, and in particular to stylizing an image whose background complexity is below a complexity threshold. The method may be performed by an image stylization processing apparatus or an electronic device; the apparatus may be implemented in software and/or hardware and configured in the electronic device. Optionally, the electronic device may be a mobile phone, tablet computer, wearable device, camera, or other terminal device with an image processing function.
Optionally, as shown in fig. 2A, the method in this embodiment may include the following steps:
s201, if the background complexity of the image to be processed is determined to be lower than the complexity threshold, replacing the original background of the image to be processed with the template background to obtain the target image.
The image to be processed is the image to be stylized in this embodiment. It may be an image currently captured by a camera of the electronic device, or a stored image selected from the device's local gallery in response to a user's tap. The original background is the background the image to be processed carries itself; for example, the image to be processed shown in Fig. 2B has a gray original background. The template background is a preset background image of high complexity. Optionally, there may be one or more template backgrounds; when there are several, background images of different complexities may be set as template backgrounds. The complexity threshold is a preset criterion, defined on background complexity, for deciding whether to perform background replacement on the image to be processed. Background complexity is the complexity of the background area of the image: when the background is a simple solid-color region such as a wall, blue sky, or sea, the complexity is low; when it is a cluttered scene such as a road or a park, the complexity is high.
Optionally, one implementation of determining whether the background complexity of the image to be processed is below the complexity threshold is to compute a pixel grayscale histogram of the background area: the wider the distribution of gray values in the histogram, the higher the background complexity. A second possible implementation is to extract texture features of the original background and determine, based on those features, whether the background complexity is below the threshold. Specifically, a texture feature extraction algorithm, which may include but is not limited to statistical methods, geometric methods, model-based methods, structural analysis, and signal processing methods, is first applied to the original background. The number of pixels corresponding to the extracted texture features, or their ratio to the total number of background pixels, is then taken as the background complexity and compared against the preset complexity threshold.
A preferred implementation of the second approach is: performing edge detection on the original background of the image to be processed to obtain its texture features; taking the pixel ratio of those texture features within the original background as the background complexity; and determining whether that complexity is below the complexity threshold. Specifically, edge detection is performed on the original background content, and the edge detection result is used as the texture feature of the original image. The edge detection algorithm may include, but is not limited to, the Canny, Roberts, and Sobel algorithms. The ratio of the number of edge pixels to the total number of background pixels is then computed, taken as the background complexity, and compared with the preset complexity threshold.
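The edge-pixel-ratio measure of background complexity described above can be sketched as follows. This is a minimal illustration that uses a simple forward-difference gradient in place of the Canny, Roberts, or Sobel detectors the patent names, and the 20% magnitude cutoff for classifying a pixel as an edge pixel is an assumption, not a value from the patent:

```python
import numpy as np

def background_complexity(background: np.ndarray, edge_threshold: float = 0.2) -> float:
    """Return the fraction of background pixels classified as edge pixels.

    `background` is a grayscale H x W array; a pixel counts as an edge pixel
    when its gradient magnitude exceeds `edge_threshold` times the maximum
    magnitude in the image (hypothetical threshold choice).
    """
    g = background.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, :-1] = g[:, 1:] - g[:, :-1]   # horizontal forward difference
    gy[:-1, :] = g[1:, :] - g[:-1, :]   # vertical forward difference
    magnitude = np.hypot(gx, gy)
    peak = magnitude.max()
    if peak == 0.0:                      # perfectly flat background: no edges
        return 0.0
    return float((magnitude > edge_threshold * peak).sum()) / magnitude.size

# A solid-color background (e.g., a plain gray wall) contains no edge pixels.
flat = np.full((64, 64), 128.0)
print(background_complexity(flat))   # 0.0
```

Comparing the returned ratio against the preset complexity threshold then decides whether the background replacement of S201 is needed.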
In general, when an image to be processed is stylized directly, a low background complexity yields a coarse result. Accordingly, when the background complexity of the image is greater than or equal to the complexity threshold, the background is considered complex, and the uniform stylization method can already complete the operation accurately: the image can be fed directly into the stylization model and the model output taken as its stylized image. When the background complexity is below the threshold, the background is simple and a coarse result is likely, so background replacement is needed: the original background of the image is replaced with a preset template background to obtain the target image. For example, Fig. 2B shows an image to be processed whose original background is a pure gray background of low complexity. If its background complexity is below the threshold, the gray original background may be replaced with a preset, higher-complexity peach-blossom template background, yielding the target image shown in Fig. 2C.
Whether determining the background complexity of the image to be processed or performing the background replacement, the background and foreground areas of the image must first be determined. This may be done by segmenting the image with an image recognition or image clustering algorithm; by training a background segmentation model in advance and using it to segment the image into background and foreground areas; or by letting the user manually select the background and foreground areas. This embodiment is not limited in this respect.
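Once a foreground mask is available from any of the segmentation options above, the background replacement itself reduces to a masked copy. A minimal sketch, assuming the mask and both images share the same height and width:

```python
import numpy as np

def replace_background(image: np.ndarray, foreground_mask: np.ndarray,
                       new_background: np.ndarray) -> np.ndarray:
    """Keep foreground pixels from `image`, take everything else from
    `new_background`. Works for grayscale (H, W) or color (H, W, C) arrays."""
    if image.shape != new_background.shape:
        raise ValueError("image and new background must have the same shape")
    out = new_background.copy()
    out[foreground_mask] = image[foreground_mask]
    return out
```

Called with the template background this implements S201; called again with the original background on the preliminary stylized image it implements S203.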
S202, inputting the target image into a stylized processing model to obtain a preliminary stylized processing image.
The stylization model may be a pre-trained neural network model that stylizes the content of an image. Its training process is described in detail in the following embodiments.
Optionally, in this step, the target image obtained after the background replacement of S201 is used as the input, and the program code of the pre-trained stylization model is invoked and run. The model stylizes the content of the input target image according to the stylization algorithm learned during training and outputs the stylized result, i.e., the preliminary stylized image.
For example, the target image shown in Fig. 2C is used as the input of the stylization model, and the model's program code is invoked and run; the model applies cartoon-style processing to the target image of Fig. 2C according to the cartoon-style algorithm learned during training, producing the preliminary stylized image shown in Fig. 2D.
Optionally, the stylization performed by the model in this step may include, but is not limited to, filter-like effects such as cartoon, pixelated, sketch, and oil-painting styles.
S203, replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
Optionally, after the preliminary stylized image is obtained, its background is no longer the original background of the image to be processed; to keep the processed image consistent with the original, the current background of the preliminary stylized image must be replaced with the original background. Specifically, background area recognition is first performed on the preliminary stylized image to determine its current background (the determination method is the same as for the original background of the image to be processed and is not repeated here), and then the current background of the preliminary stylized image is replaced with the original background of the image to be processed, yielding the final stylized image. Illustratively, the current background of the preliminary stylized image of Fig. 2D is determined and replaced with the original background of the image of Fig. 2B, resulting in the final stylized image shown in Fig. 2E.
Optionally, if the original background of the image to be processed is used directly to replace the background of the preliminary stylized image, the background portion of the final image receives no stylization at all. To improve the consistency and visual appeal of the result, the original background may first be fed into a background processing model to obtain a processed original background, and the background of the preliminary stylized image then replaced with that processed background. The background processing model is a pre-trained neural network model dedicated to stylizing the original background of the image to be processed. It applies the same type of stylization as the stylization model of S202, and the two models are trained similarly; they differ only in the algorithm parameters each learns. Specifically, the original background removed in S201 is used as the input of the background processing model, whose program code is invoked and run; the model stylizes the input background according to the algorithm learned during training, and the background of the preliminary stylized image is then replaced with this stylized original background to obtain the final stylized image.
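The optional refinement above, stylizing the original background with a dedicated background processing model before compositing, can be sketched as follows. The `background_model` callable is a stand-in for the patent's pre-trained neural network:

```python
import numpy as np

def compose_with_stylized_background(preliminary: np.ndarray,
                                     original_image: np.ndarray,
                                     foreground_mask: np.ndarray,
                                     background_model) -> np.ndarray:
    """Replace the background of the preliminary stylized image with the
    original background after running it through `background_model`."""
    processed = background_model(original_image)
    final = preliminary.copy()
    final[~foreground_mask] = processed[~foreground_mask]
    return final
```

With an identity `background_model` this degrades to the plain S203 replacement; with a stylizing model the background portion of the final image is stylized too.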
This embodiment of the present disclosure provides an image stylization processing method that determines whether the background complexity of an image to be processed is below a complexity threshold and, if so, replaces the original background with a preset template background to obtain a target image. After the target image is stylized by the stylization model, the background of the resulting preliminary stylized image is replaced with the original background to obtain the final stylized image. By considering background complexity during stylization, and by swapping in a template background before stylizing images whose background complexity is low, the scheme avoids the coarse results that existing stylization methods produce on such images, improving the fineness and visual appeal of the stylization result.
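Putting S201 to S203 together, the whole method of this embodiment can be sketched end to end. The stylization model is an arbitrary callable here, and the concrete threshold value is an illustrative assumption, since the patent does not fix one:

```python
import numpy as np

COMPLEXITY_THRESHOLD = 0.1   # hypothetical value; the patent does not fix one

def stylize(image: np.ndarray, foreground_mask: np.ndarray,
            template_background: np.ndarray,
            background_complexity: float, stylize_model) -> np.ndarray:
    """S201-S203: swap in a complex template background when the original one
    is too simple, stylize, then restore the original background."""
    if background_complexity >= COMPLEXITY_THRESHOLD:
        # complex background: the uniform method already works well
        return stylize_model(image)
    target = template_background.copy()                # S201: swap in template
    target[foreground_mask] = image[foreground_mask]
    preliminary = stylize_model(target)                # S202: stylize target
    final = preliminary.copy()
    final[~foreground_mask] = image[~foreground_mask]  # S203: restore original bg
    return final
```

The restore step here reuses the unprocessed original background; the optional background-processing-model variant would stylize it first.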
Example two
Fig. 3 is a flowchart of another image stylization processing method according to the second embodiment of the present disclosure. This embodiment refines the alternatives provided in the embodiment above and describes in detail how the stylization model is trained.
Optionally, as shown in fig. 3, the method in this embodiment may include the following steps:
s301, constructing an initial network model.
Optionally, in this step, a convolutional neural network with a preset number of layers may be built according to the requirements of the actual stylization task, with corresponding initial parameters set for each layer; the initial parameters may include the initial channel count, loss weight values, channel parameters, the stylization algorithm and its parameters, and so on. The initial network model built in this step cannot be used directly to stylize images to be processed; it must first be trained.
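Because S301 fixes initial parameters (layer count, channel counts, loss weights) rather than a specific architecture, the construction step can be represented as a parameter record. Everything below, defaults included, is a hypothetical illustration rather than the patent's actual configuration:

```python
def build_initial_model(num_layers: int = 5, base_channels: int = 16,
                        loss_weight: float = 1.0, style: str = "cartoon") -> dict:
    """Assemble the initial parameters of S301 for a convolutional model.
    Channel counts double per layer up to a cap, a common (assumed) scheme."""
    return {
        "style": style,
        "loss_weight": loss_weight,
        "layers": [{"index": i, "channels": base_channels * (2 ** min(i, 3))}
                   for i in range(num_layers)],
    }
```

These are exactly the values that the adjustment step S304 later perturbs when verification fails.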
S302, performing stylized processing training on the initial network model by adopting sample image data.
The sample image data is the training data required for training the initial network model and may consist of multiple sets, each containing an original image and its stylized counterpart.
Optionally, when the initial network model built in S301 is trained, each set, i.e., an original image together with its stylized image, is fed in turn into the initial network model to train the initial parameters. The specific training process may be similar to the prior-art training of neural network models for image stylization, and this embodiment does not limit it.
S303, using verification image data to check whether the attribute parameters of the trained initial network model's output image exceed the parameter thresholds; if not, executing S304; if so, executing S305.
The verification image data is used to verify whether the trained initial network model can perform the stylization operation with high quality. Optionally, it may be obtained by reserving a portion of the collected images: for example, 80% of the collected image data serves as sample image data and the remaining 20% as verification image data; it may also be image data specially selected from various shooting scenarios. The attribute parameters of the output image are evaluation parameters for verifying its stylization effect; they may include, but are not limited to, at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter, each of which can be computed by a fixed formula or algorithm.
Optionally, after the initial network model is trained by using one or more sets of sample image data, the step may be to use verification image data to verify whether the initial network model obtained by training in S302 can perform the stylization operation with high quality, that is, verify whether the initial network model obtained by training in S302 can be used as a stylized model that can be actually used. The specific verification process may be to use the verification image as input data, call and run the program code of the trained initial network model, and at this time, the trained initial network model performs the stylized processing operation on the input verification image according to the algorithm during training, and output the stylized processing result, that is, the verification stylized processing image. Alternatively, when determining the attribute parameter for the verification stylized image, the saturation of the image may be calculated as the saturation parameter; calculating the smoothness of the edge contour in the image as an edge smoothness parameter; the smoothness of other areas outside the image edge contour is calculated as a shading parameter. When judging whether the calculated attribute parameters are larger than the parameter threshold values, setting the corresponding parameter threshold value for each parameter in the attribute parameters, and judging whether the attribute parameters are larger than the corresponding parameter threshold values. If each parameter value in the attribute parameters of the output image of the trained initial network model is greater than the corresponding parameter threshold, the initial network model is trained, and S305 may be executed, where the trained initial network model is used as the stylized processing model. 
Otherwise, the stylization effect of the initial network model at this point is still inadequate and further optimization is required; in this case, S304 is executed to adjust and optimize the model parameters of the initial network model.
Optionally, in this embodiment, to ensure that the trained initial network model is verified accurately, at least two sets of verification image data may be selected to perform this step on the trained initial network model. S305 is executed, taking the trained initial network model as the stylized processing model, only when the verification results of all the sets of verification image data show that the attribute parameters of the output image are greater than the parameter thresholds.
And S304, if not, adjusting the model parameters of the initial network model within a preset modification range, and returning the initial network model after the model parameters are adjusted to execute the operation of S302.
Optionally, when adjusting the model parameters of the initial network model trained in S302, at least one of the following dimensions may be adjusted: the number of layers, the number of channels of each layer, and the loss weight value used in the data processing operations performed inside the model. Optionally, when the model parameters of the initial network model are adjusted, a priority may be set for the number of layers, the number of channels, and the loss weight value; the model parameters are adjusted within their corresponding preset modification ranges according to the priority of each dimension, one dimension at a time. After the model parameter of one dimension is modified, S302-S303 are executed again: the initial network model with the adjusted parameters is retrained for stylization using the sample image data, and the verification image data is used to verify whether the attribute parameters of the output image of the trained model are greater than the parameter thresholds. Preferably, if the priority of the model parameters is that the number of layers is higher than the number of channels and the number of channels is higher than the loss weight value, the number of layers may be adjusted first, for example increased by 1, and then S302-S303 are executed again: the initial network model with the increased number of layers is retrained using the sample image data, and whether the attribute parameters of its output image are greater than the parameter thresholds is verified. If not, the number of channels of the initial network model is adjusted next, for example increased by a factor of 1.2, and S302-S303 are executed again.
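The adjust-and-retrain loop of S302-S304 might be organized as below; `train_and_verify`, the schedule of (parameter, step) pairs, and the round limit are hypothetical names introduced here for illustration, not part of the patent:

```python
def tune(model_params, train_and_verify, schedule, max_rounds=10):
    """Adjust one model parameter per round, in priority order, until the
    trained model's attribute parameters all exceed their thresholds (S303).

    model_params     -- dict of adjustable dimensions (layers, channels, ...)
    train_and_verify -- callable performing S302 + S303; returns True on pass
    schedule         -- priority-ordered list of (name, step_function) pairs
    """
    for round_no in range(max_rounds):
        if train_and_verify(model_params):              # S302 + S303
            return model_params                         # S305: model accepted
        name, step = schedule[round_no % len(schedule)]
        model_params[name] = step(model_params[name])   # S304: adjust one dimension
    raise RuntimeError("model failed verification within max_rounds")
```

A schedule of `[("layers", lambda v: v + 1), ("channels", lambda v: round(v * 1.2))]` would reproduce the priority order discussed above (layers first, then channels).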
Optionally, when the number of channels of the initial network model is adjusted, the step may preferably select to adjust the number of channels of the group convolution layer of the initial network model.
It should be noted that the preset modification range may be a preset numerical range, or it may be a set of at least two preset optional modification values. Preferably, a modification range may be set for the loss weight value and the number of layers, while a set of optional modification values such as 1.2 times, 1.5 times, or 1.6 times may be set for the number of channels.
Optionally, when the model parameters of the initial network model are adjusted, this step may adjust at least two model parameters at a time instead of only one, which is not limited here.
And S305, if yes, taking the trained initial network model as a stylized processing model.
Optionally, if the attribute parameters of the output image of the trained initial network model, verified using the verification image data, are greater than the parameter thresholds, the initial network model has been fully trained at this point, and its program code is packaged and used as the program code of the stylized processing model.
And S306, if the background complexity of the image to be processed is determined to be lower than the complexity threshold, replacing the original background of the image to be processed with the template background to obtain the target image.
S307, inputting the target image into the stylized processing model to obtain a preliminary stylized processing image.
And S308, replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
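Steps S306-S308 can be sketched with numpy as a background swap around an arbitrary `stylize` function; the mask convention (True marks background pixels), the threshold value, and the helper names are assumptions for illustration:

```python
import numpy as np

def stylize_with_background_swap(image, bg_mask, template_bg, stylize,
                                 complexity, threshold=0.1):
    """Sketch of S306-S308: if the background is simple, swap in a template
    background before stylization, then restore the original background."""
    if complexity >= threshold:
        return stylize(image)                  # complex background: stylize as-is
    m = bg_mask[..., None]                     # broadcast mask over color channels
    target = np.where(m, template_bg, image)   # S306: substitute template background
    preliminary = stylize(target)              # S307: run stylized processing model
    return np.where(m, image, preliminary)     # S308: restore original background
```

With a simple "invert" stand-in for the stylized processing model, background pixels come back unchanged while foreground pixels are stylized.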
The embodiment of the disclosure provides an image stylization processing method. After the constructed initial network model is trained with sample image data, if verification with the verification image data shows that the multi-dimensional attribute parameters of the stylized image output by the trained initial network model are not greater than the parameter thresholds, the multi-dimensional model parameters of the initial network model are adjusted and the model is retrained; otherwise, the training of the initial network model is complete and the stylized processing model is obtained. During the training of the initial network model, this embodiment greatly improves the quality of the processing effect of the stylized processing model by verifying the multi-dimensional attribute parameters and adjusting the multi-dimensional model parameters, laying a foundation for the subsequent stylization of the image to be processed. When the image to be processed is stylized, if its background complexity is lower than the complexity threshold, the original background of the image to be processed is replaced with a preset template background, the stylization operation is then performed with the stylized processing model, and the background of the resulting preliminary stylized image is replaced with the original background to obtain the final stylized image. In this way, the background complexity of the image to be processed is taken into account, and the fineness and attractiveness of the image stylization result are further improved.
Example III
Fig. 4 is a flowchart of another image stylization processing method according to the third embodiment of the present disclosure. This embodiment is optimized on the basis of the various alternatives provided in the foregoing embodiments, and specifically provides a detailed description of inputting the target image into a stylized processing model to obtain a preliminary stylized processing image.
Optionally, as shown in fig. 4, the method in this embodiment may include the following steps:
S401, if the background complexity of the image to be processed is determined to be lower than the complexity threshold, replacing the original background of the image to be processed with the template background to obtain the target image.
S402, determining a region to be processed according to the received region selection instruction.
Optionally, in this embodiment, to satisfy the user's personalized requirements for stylizing the image to be processed, the stylization processing may be performed only on the area selected by the user. Specifically, after the user selects the image to be processed, a region selection instruction is triggered through a sliding operation on the image to be processed, where the region selection instruction includes the region to be processed selected by the user. Optionally, if the region selection instruction is triggered by a non-closed sliding operation of the user on the image to be processed, the region corresponding to the user's sliding operation may be taken as the region to be processed, and the position coordinates corresponding to the region to be processed are added to the region selection instruction and transmitted to the electronic device; if the region selection instruction is triggered by a closed sliding operation of the user on the image to be processed, the enclosed region selected by the sliding operation is taken as the region to be processed, and the position coordinates corresponding to the region to be processed are added to the region selection instruction and transmitted to the electronic device. After receiving the region selection instruction triggered by the user, the electronic device acquires the specific coordinate position of the region to be processed from the region selection instruction.
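The closed-versus-non-closed distinction in S402 might be detected by comparing the endpoints of the sliding path; the distance tolerance and the returned structure are assumptions for illustration:

```python
def region_from_slide(path, close_tol=5.0):
    """Sketch of S402: a sliding path whose endpoints (nearly) meet is treated
    as a closed operation selecting the enclosed region; otherwise the stroked
    path itself is the region to be processed. The tolerance is an assumption."""
    (x0, y0), (xn, yn) = path[0], path[-1]
    closed = ((x0 - xn) ** 2 + (y0 - yn) ** 2) ** 0.5 <= close_tol
    # the position coordinates are what gets packed into the instruction
    return {"closed": closed, "coords": list(path)}
```

A closed result would then be rasterized into the enclosed area, while a non-closed result would be treated as a stroke region around the path.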
S403, inputting the target image and the region to be processed into a stylized processing model, and controlling the stylized processing model to perform stylized processing on the content of the region to be processed of the target image, so as to obtain a preliminary stylized processing image.
Optionally, this step may take the target image obtained in S401 and the region to be processed determined in S402 together as input data, and call and run the program code of the stylized processing model. Based on the input target image and region to be processed, the stylized processing model performs the stylization operation only on the content of the region to be processed of the target image according to the algorithm learned during training. In the resulting preliminary stylized image, only the region to be processed displays the stylization effect, while the remaining regions are not stylized.
S404, replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
Optionally, if the region to be processed determined in S402 does not include a background area, this step may replace the background of the preliminary stylized image obtained in S403 with the original background of the image to be processed to obtain the final stylized image. If the region to be processed determined in S402 includes a background area, this step may determine the position coordinates of the part of the region to be processed that corresponds to the background area, input the original background together with those position coordinates into a background processing model, and control the background processing model to perform the stylization operation only on the content of that coordinate region in the original background to obtain the processed original background; finally, the background of the preliminary stylized image obtained in S403 is replaced with the processed original background to obtain the final stylized image.
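The branch of S404 in which the user's region overlaps the background could be sketched as follows; the boolean-mask representation and the `bg_model` callable are assumptions standing in for the background processing model:

```python
import numpy as np

def restore_background(preliminary, original_bg, bg_mask, region_mask, bg_model):
    """Sketch of S404: if the user-selected region overlaps the background,
    apply the background processing model only to that overlap, then restore
    the (possibly partially processed) background into the stylized image."""
    overlap = bg_mask & region_mask                      # background area the user selected
    processed_bg = np.where(overlap[..., None],
                            bg_model(original_bg),       # stylize only the overlap
                            original_bg)
    return np.where(bg_mask[..., None], processed_bg, preliminary)
```

If the region does not touch the background, `overlap` is empty and this reduces to plainly restoring the original background.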
The embodiment of the disclosure provides an image stylizing processing method, if a user triggers a region selection instruction when selecting an image to be processed, when the background complexity of the image to be processed is lower than a complexity threshold, the original background of the image to be processed is replaced with a preset template background, after a target image is obtained, a region to be processed is determined according to the region selection instruction triggered by the user, a stylizing processing model is controlled to perform stylizing processing on only the region to be processed of the target image, and after the preliminary stylized processing image background obtained by processing is replaced with the original background, a final stylized processing image is obtained. When the scheme of the embodiment of the disclosure is used for carrying out stylizing processing on the image to be processed, the complexity of the image background is considered, and the fineness and the attractiveness of the image stylizing processing result are improved; the method can also generate personalized stylized processing effects for the user according to the region selection instruction of the user, and improves the interestingness of image stylized processing.
Example IV
Fig. 5 is a schematic structural diagram of an image stylizing device according to a fourth embodiment of the present disclosure, where the embodiment of the present disclosure may be applicable to a case of performing a stylizing process on an image to be processed, and in particular, a case of performing a stylizing process on an image to be processed with a background complexity lower than a complexity threshold. The apparatus may be implemented in software and/or hardware and integrated in an electronic device for performing the method, as shown in fig. 5, the apparatus may include:
The background replacing module 501 is configured to replace an original background of the image to be processed with a template background if it is determined that the background complexity of the image to be processed is lower than a complexity threshold, so as to obtain a target image;
the stylized processing module 502 is configured to input the target image into a stylized processing model to obtain a preliminary stylized processing image;
the background replacing module 501 is further configured to replace the background of the preliminary stylized image with the original background, so as to obtain a final stylized image.
The embodiment of the disclosure provides an image stylization processing device, which is used for judging whether the background complexity of an image to be processed is lower than a complexity threshold value, if yes, replacing the original background of the image to be processed with a preset template background to obtain a target image; and after the target image is subjected to stylization processing by the stylization processing model, replacing the background of the preliminary stylized processing image obtained by processing with the original background to obtain a final stylized processing image. According to the scheme of the embodiment of the disclosure, when the image is subjected to the stylizing treatment, the complexity of the image background is considered, when the complexity of the image background to be treated is low, the method of changing the template background for the image to be treated and then performing the stylizing treatment is adopted, the condition that the processing result is rough when the existing stylizing treatment method is adopted for the image to be treated with low complexity of the background is avoided, and the fineness and the attractiveness of the image stylizing treatment result are improved.
Further, the apparatus further includes a complexity determination model for, when performing the determination that the background complexity of the image to be processed is below a complexity threshold:
and extracting texture features of the original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold according to the texture features.
Further, the complexity judgment model is specifically used for:
performing edge detection on the original background of the image to be processed to obtain texture features of the original background;
determining the pixel duty ratio of the texture feature in the original background as the background complexity of the image to be processed;
determining whether the background complexity of the image to be processed is below a complexity threshold.
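The complexity judgment described by these three steps — edge detection, pixel duty ratio of the texture features, threshold comparison — can be sketched with numpy; the gradient-magnitude edge detector and the numeric thresholds are assumptions, since the disclosure does not fix a particular detector:

```python
import numpy as np

def background_complexity(bg_gray, edge_thresh=0.15):
    """Fraction of background pixels lying on texture edges. A gradient-magnitude
    edge detector is an assumed stand-in for the unspecified detector."""
    gy, gx = np.gradient(bg_gray.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh
    return float(edges.mean())          # pixel duty ratio of texture features

def is_simple_background(bg_gray, complexity_threshold=0.1):
    """True when the background complexity is below the complexity threshold,
    i.e. the template-background replacement branch should be taken."""
    return background_complexity(bg_gray) < complexity_threshold
```

A flat wall yields complexity near zero and takes the template-replacement branch, while a background with many intensity transitions exceeds the threshold and is stylized directly.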
Further, the device further comprises: a model training module, the model training module comprising:
the training unit is used for performing stylized processing training on the constructed initial network model by adopting sample image data;
the verification unit is used for verifying whether the attribute parameters of the output image of the initial network model after training are larger than a parameter threshold value by adopting verification image data; wherein the attribute parameters include: at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
And the model determining unit is used for taking the trained initial network model as the stylized processing model if the attribute parameters of the trained initial network model output image are larger than the parameter threshold.
Further, the model training module further includes:
the parameter adjusting unit is used for adjusting the model parameters of the initial network model in a preset modification range if the attribute parameters of the trained initial network model output image are smaller than or equal to the parameter threshold; the model parameters comprise at least one of the layer number, the channel number and the loss weight value of the initial network model;
the training unit is also used for performing stylization processing training on the initial network model after the model parameters are adjusted by adopting the sample image data.
Further, the stylization processing module 502 is specifically configured to:
determining a region to be processed according to the received region selection instruction;
inputting the target image and the region to be processed into a stylized processing model, and controlling the stylized processing model to perform stylized processing on the content of the region to be processed of the target image to obtain a preliminary stylized processing image.
Further, the background replacing module 501 is specifically configured to, when performing the replacing of the background of the preliminary stylized image with the original background:
Inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminary stylized image with the processed original background.
The image stylizing device provided by the embodiment of the present disclosure belongs to the same inventive concept as the image stylizing method provided by the above embodiments, technical details not described in detail in the embodiment of the present disclosure may be referred to the above embodiments, and the embodiment of the present disclosure has the same beneficial effects as the above embodiments.
Example five
Referring now to fig. 6, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), an in-vehicle terminal (e.g., in-vehicle navigation terminal), etc., and a stationary terminal such as a digital TV, a desktop computer, etc. The electronic device 600 shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the electronic device may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform: if the background complexity of the image to be processed is determined to be lower than the complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image; inputting the target image into a stylized processing model to obtain a preliminary stylized processing image; and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
An image stylization processing method provided according to one or more embodiments of the present disclosure includes:
if the background complexity of the image to be processed is determined to be lower than the complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image;
inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
According to one or more embodiments of the present disclosure, in the above method, determining that the background complexity of the image to be processed is below the complexity threshold includes:
and extracting texture features of the original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold according to the texture features.
According to one or more embodiments of the present disclosure, in the above method, extracting texture features of an original background of an image to be processed, and determining whether a background complexity of the image to be processed is lower than a complexity threshold according to the texture features includes:
performing edge detection on the original background of the image to be processed to obtain texture features of the original background;
Determining the pixel duty ratio of the texture feature in the original background as the background complexity of the image to be processed;
determining whether the background complexity of the image to be processed is below a complexity threshold.
According to one or more embodiments of the present disclosure, in the foregoing method, before inputting the target image into the stylized processing model, the method further includes:
performing stylized processing training on the constructed initial network model by adopting sample image data;
verifying whether the attribute parameters of the output image of the initial network model after training are larger than a parameter threshold value by adopting verification image data; wherein the attribute parameters include: at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
if yes, taking the trained initial network model as the stylized processing model.
According to one or more embodiments of the present disclosure, after verifying whether the attribute parameter of the trained initial network model output image is greater than the parameter threshold using the verification image data, the method further includes:
if not, adjusting the model parameters of the initial network model in a preset modification range; the model parameters comprise at least one of the layer number, the channel number and the loss weight value of the initial network model;
And re-performing stylized processing training on the initial network model after the model parameters are adjusted by adopting sample image data.
According to one or more embodiments of the present disclosure, in the above method, inputting the target image into a stylized processing model, obtaining a preliminary stylized processing image includes:
determining a region to be processed according to the received region selection instruction;
inputting the target image and the region to be processed into a stylized processing model, and controlling the stylized processing model to perform stylized processing on the content of the region to be processed of the target image to obtain a preliminary stylized processing image.
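Restricting stylization to a user-selected region can be sketched as follows, assuming a rectangular region of the form `(top, bottom, left, right)` and an arbitrary `stylize` callable; the disclosure does not prescribe a region format.

```python
import numpy as np

def stylize_region(target, region, stylize):
    """Apply stylize() only inside region = (top, bottom, left, right)."""
    top, bottom, left, right = region
    out = target.copy()
    out[top:bottom, left:right] = stylize(out[top:bottom, left:right])
    return out
```

Pixels outside the selected region pass through unchanged, which matches the claim that only the content of the region to be processed is stylized.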
According to one or more embodiments of the present disclosure, in the above method, replacing the background of the preliminary stylized image with the original background includes:
inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminary stylized image with the processed original background.
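The overall swap-stylize-restore flow of the method can be sketched as follows. The binary foreground mask, the `stylize` callable, and compositing via `np.where` are illustrative assumptions; the disclosure does not specify how the foreground is segmented.

```python
import numpy as np

def stylize_with_background_swap(image, foreground_mask, template_background,
                                 stylize, background_complexity,
                                 complexity_threshold=0.10):
    mask = foreground_mask[..., None]            # broadcast over color channels
    if background_complexity < complexity_threshold:
        # target image: original foreground composited over the template background
        target = np.where(mask, image, template_background)
    else:
        target = image
    preliminary = stylize(target)                # preliminary stylized image
    # final image: stylized foreground over the original (unstylized) background
    return np.where(mask, preliminary, image)
```

Swapping in a simple template background before stylization keeps the model's attention on the foreground; restoring the original background afterwards preserves the scene the user actually captured.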
An image stylization processing apparatus provided according to one or more embodiments of the present disclosure, the apparatus including:
the background replacing module is used for replacing the original background of the image to be processed with the template background if the background complexity of the image to be processed is lower than the complexity threshold value, so as to obtain a target image;
the stylized processing module is used for inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
the background replacing module is further configured to replace the background of the preliminary stylized image with the original background, so as to obtain a final stylized image.
According to one or more embodiments of the present disclosure, the above apparatus further includes a complexity determination model for, when performing the determination that the background complexity of the image to be processed is below the complexity threshold:
and extracting texture features of the original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold according to the texture features.
According to one or more embodiments of the present disclosure, the complexity determination model in the above apparatus is specifically configured to:
performing edge detection on the original background of the image to be processed to obtain texture features of the original background;
determining the proportion of pixels occupied by the texture features in the original background as the background complexity of the image to be processed;
determining whether the background complexity of the image to be processed is below a complexity threshold.
According to one or more embodiments of the present disclosure, the above-described apparatus further includes: a model training module, the model training module comprising:
the training unit is used for performing stylized processing training on the constructed initial network model by adopting sample image data;
the verification unit is used for verifying, using verification image data, whether the attribute parameters of the image output by the trained initial network model are greater than a parameter threshold; wherein the attribute parameters include at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
and the model determining unit is used for taking the trained initial network model as the stylized processing model if the attribute parameters of the image output by the trained initial network model are greater than the parameter threshold.
According to one or more embodiments of the present disclosure, the model training module in the above apparatus further includes:
the parameter adjusting unit is used for adjusting the model parameters of the initial network model within a preset modification range if the attribute parameters of the image output by the trained initial network model are less than or equal to the parameter threshold; the model parameters include at least one of the number of layers, the number of channels, and the loss weight value of the initial network model;
the training unit is further used for performing stylized processing training again, using the sample image data, on the initial network model with the adjusted model parameters.
According to one or more embodiments of the present disclosure, the stylized processing module in the foregoing apparatus is specifically configured to:
determining a region to be processed according to the received region selection instruction;
inputting the target image and the region to be processed into a stylized processing model, and controlling the stylized processing model to perform stylized processing on the content of the region to be processed of the target image to obtain a preliminary stylized processing image.
According to one or more embodiments of the present disclosure, the background replacing module in the above apparatus, when replacing the background of the preliminary stylized image with the original background, is specifically configured for:
inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminary stylized image with the processed original background.
An electronic device provided according to one or more embodiments of the present disclosure, the electronic device including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stylization processing method as described in any embodiment of the present disclosure.
A readable medium provided according to one or more embodiments of the present disclosure has stored thereon a computer program which, when executed by a processor, implements an image stylization processing method according to any embodiment of the present disclosure.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by mutually substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (8)

1. An image stylization processing method, comprising:
if it is determined that the background complexity of the image to be processed is lower than a complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image, wherein the background complexity represents the complexity of the background region in the image to be processed, and the complexity threshold is a preset criterion for deciding whether to perform background replacement on the image to be processed;
inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
replacing the background of the preliminary stylized image with the original background to obtain a final stylized image;
the determining that the background complexity of the image to be processed is lower than the complexity threshold comprises:
extracting texture features of an original background of an image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold according to the texture features;
the extracting texture features of the original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than the complexity threshold according to the texture features, comprises:
performing edge detection on the original background of the image to be processed to obtain texture features of the original background;
determining the proportion of pixels occupied by the texture features in the original background as the background complexity of the image to be processed;
determining whether the background complexity of the image to be processed is below a complexity threshold.
2. The method of claim 1, further comprising, prior to inputting the target image into a stylized processing model:
performing stylized processing training on the constructed initial network model by adopting sample image data;
verifying, using verification image data, whether the attribute parameters of the image output by the trained initial network model are greater than a parameter threshold; wherein the attribute parameters include at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
if yes, taking the trained initial network model as the stylized processing model.
3. The method of claim 2, further comprising, after verifying, using the verification image data, whether the attribute parameters of the image output by the trained initial network model are greater than the parameter threshold:
if not, adjusting the model parameters of the initial network model within a preset modification range; the model parameters include at least one of the number of layers, the number of channels, and the loss weight value of the initial network model;
and performing stylized processing training again, using the sample image data, on the initial network model with the adjusted model parameters.
4. The method of claim 1, wherein inputting the target image into a stylized processing model results in a preliminary stylized processing image, comprising:
determining a region to be processed according to the received region selection instruction;
inputting the target image and the region to be processed into a stylized processing model, and controlling the stylized processing model to perform stylized processing on the content of the region to be processed of the target image to obtain a preliminary stylized processing image.
5. The method of claim 1, wherein replacing the background of the preliminary stylized image with the original background comprises:
inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminary stylized image with the processed original background.
6. An image stylization processing apparatus, comprising:
the background replacing module is used for replacing the original background of the image to be processed with a template background to obtain a target image if the background complexity of the image to be processed is lower than a complexity threshold, wherein the background complexity represents the complexity of the background region in the image to be processed, and the complexity threshold is a preset criterion for deciding whether to perform background replacement on the image to be processed;
the stylized processing module is used for inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
the background replacing module is further configured to replace the background of the preliminary stylized image with the original background to obtain a final stylized image;
the apparatus further comprises a complexity determination model, which is used for extracting texture features of the original background of the image to be processed and determining, according to the texture features, whether the background complexity of the image to be processed is lower than the complexity threshold;
the complexity determination model is specifically used for performing edge detection on the original background of the image to be processed to obtain the texture features of the original background;
determining the proportion of pixels occupied by the texture features in the original background as the background complexity of the image to be processed;
determining whether the background complexity of the image to be processed is below a complexity threshold.
7. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stylization processing method according to any one of claims 1-5.
8. A readable medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the image stylization processing method according to any one of claims 1-5.
CN201910910696.4A 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium Active CN110689478B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910910696.4A CN110689478B (en) 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium
PCT/CN2020/113974 WO2021057463A1 (en) 2019-09-25 2020-09-08 Image stylization processing method and apparatus, and electronic device and readable medium


Publications (2)

Publication Number Publication Date
CN110689478A CN110689478A (en) 2020-01-14
CN110689478B CN110689478B (en) 2023-12-01

Family

ID=69110569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910910696.4A Active CN110689478B (en) 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium

Country Status (2)

Country Link
CN (1) CN110689478B (en)
WO (1) WO2021057463A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689478B (en) * 2019-09-25 2023-12-01 北京字节跳动网络技术有限公司 Image stylization processing method and device, electronic equipment and readable medium
CN111445383B (en) * 2020-03-20 2024-02-13 北京迈格威科技有限公司 Image parameter adjusting method, device and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154518A (en) * 2017-12-11 2018-06-12 广州华多网络科技有限公司 A kind of method, apparatus of image procossing, storage medium and electronic equipment
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 A kind of generation method, device, equipment and the storage medium of stylization image
CN109344724A (en) * 2018-09-05 2019-02-15 深圳伯奇科技有限公司 A kind of certificate photo automatic background replacement method, system and server
CN109523460A (en) * 2018-10-29 2019-03-26 北京达佳互联信息技术有限公司 Moving method, moving apparatus and the computer readable storage medium of image style
CN110211017A (en) * 2019-05-15 2019-09-06 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110222722A (en) * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, calculates equipment and storage medium at system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794454B (en) * 2010-04-08 2012-01-04 西安交通大学 Oil painting stylizing method based on image
CN104869346A (en) * 2014-02-26 2015-08-26 中国移动通信集团公司 Method and electronic equipment for processing image in video call
CN105989594B (en) * 2015-02-12 2019-02-12 阿里巴巴集团控股有限公司 A kind of image region detection method and device
US9672414B2 (en) * 2015-11-11 2017-06-06 Adobe Systems Incorporated Enhancement of skin, including faces, in photographs
CN106920212A (en) * 2015-12-24 2017-07-04 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment for sending stylized video
CN106327448A (en) * 2016-08-31 2017-01-11 上海交通大学 Picture stylization processing method based on deep learning
US10165259B2 (en) * 2017-02-15 2018-12-25 Adobe Systems Incorporated Generating novel views of a three-dimensional object based on a single two-dimensional image
US10318889B2 (en) * 2017-06-26 2019-06-11 Konica Minolta Laboratory U.S.A., Inc. Targeted data augmentation using neural style transfer
CN107507173B (en) * 2017-08-15 2021-07-27 上海交通大学 No-reference definition evaluation method and system for full-slice image
CN110689478B (en) * 2019-09-25 2023-12-01 北京字节跳动网络技术有限公司 Image stylization processing method and device, electronic equipment and readable medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Changqing et al. "Texture-based description of image complexity." In: Principles and Applications of Ship-based Image Processing Technology. 2015, electronic document pp. 1-7. *

Also Published As

Publication number Publication date
CN110689478A (en) 2020-01-14
WO2021057463A1 (en) 2021-04-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant