CN111652830A - Image processing method and device, computer readable medium and terminal equipment


Info

Publication number: CN111652830A
Application number: CN202010601380.XA
Authority: CN (China)
Prior art keywords: image, matrix, processing, style migration, feature
Inventor: 张弓
Applicant and current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Other languages: Chinese (zh)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration › G06T5/50 by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis › G06T7/10 Segmentation; Edge detection › G06T7/194 involving foreground-background segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/20 Special algorithmic details › G06T2207/20081 Training; Learning
    • G06T2207/20 Special algorithmic details › G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20 Special algorithmic details › G06T2207/20212 Image combination › G06T2207/20221 Image fusion; Image merging

Abstract

The present disclosure relates to the field of multimedia data processing technologies, and in particular to an image processing method, an image processing apparatus, a computer-readable medium, and a terminal device. The method comprises the following steps: acquiring an original image and user-defined style migration configuration parameters, the parameters comprising a to-be-processed area of the original image and a reference image corresponding to that area; calculating a corresponding feature transformation matrix from the feature matrix of the original image and the feature matrix of the reference image; acquiring a style migration image based on a first mean processing parameter of the original image feature matrix, the feature transformation matrix, and a second mean processing parameter of the reference image feature matrix; and performing image fusion processing on the style migration image and the original image to obtain a target image that conforms to the user-defined style migration configuration parameters. The method enables customized style transformation of an original image driven by a user-defined reference image.

Description

Image processing method and device, computer readable medium and terminal equipment
Technical Field
The present disclosure relates to the field of multimedia data processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable medium, and a terminal device.
Background
With the continuous development of image processing technology, a user of an intelligent terminal device can use an application program to transform the style, color, or content of an image or a video, for example by applying filters to a picture.
In the prior art, filter processing of an image is generally performed by pixel mapping: pixels are remapped through a preset mapping table to apply different filter effects to the image. Moreover, most algorithms apply a filter effect to the whole image and cannot distinguish foreground from background, so the user cannot restrict style conversion to content specified in the image. In addition, existing algorithms are complex, which makes image processing slow and time-consuming.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable medium, and a terminal device, which are capable of performing customized style transformation of an original image from a reference image.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method including:
acquiring an original image and user-defined style migration configuration parameters; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area;
calculating a corresponding feature transformation matrix according to the feature matrix of the original image and the feature matrix of the reference image;
acquiring a style migration image based on a first mean processing parameter of the original image feature matrix, the feature transformation matrix and a second mean processing parameter of the reference image feature matrix;
and carrying out image fusion processing on the style migration image and the original image to obtain a target image which accords with the user-defined style migration configuration parameters.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising:
the image acquisition module is used for acquiring an original image and user-defined style migration configuration parameters; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area;
the feature transformation matrix calculation module is used for calculating a corresponding feature transformation matrix according to the feature matrix of the original image and the feature matrix of the reference image;
the style migration image acquisition module is used for acquiring a style migration image based on a first mean value processing parameter of the original image feature matrix, the feature transformation matrix and a second mean value processing parameter of the reference image feature matrix;
and the image fusion processing module is used for carrying out image fusion processing on the style migration image and the original image so as to obtain a target image which accords with the user-defined style migration configuration parameters.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the image processing method described above.
According to a fourth aspect of the present disclosure, there is provided a terminal device comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method described above.
According to the image processing method provided by the embodiments of the present disclosure, a reference image is customized for the original image in advance, so that a feature transformation matrix can be constructed from the original image and the reference image. The original image, the reference image, and the feature transformation matrix are used to calculate a style migration image, which is then fused with the original image to obtain a target image that meets the user-defined style migration configuration, converting the region to be processed in the original image into the style of the reference image. Different regions of the original image can also be converted into different styles simultaneously.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow diagram of an image processing method in an exemplary embodiment of the disclosure;
FIG. 2 schematically illustrates a diagram of an original image in an exemplary embodiment of the disclosure;
FIG. 3 is a schematic diagram illustrating a subject template corresponding to an original image in an exemplary embodiment of the disclosure;
FIG. 4 is a diagram schematically illustrating a background template corresponding to an original image in an exemplary embodiment of the disclosure;
FIG. 5 schematically illustrates a schematic diagram of a reference image in an exemplary embodiment of the disclosure;
FIG. 6 is a schematic diagram illustrating a style migration image corresponding to an original image in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a diagram of a fused image in an exemplary embodiment of the disclosure;
FIG. 8 is a schematic diagram that schematically illustrates a method of training a style migration model, in an exemplary embodiment of the present disclosure;
fig. 9 schematically illustrates a composition diagram of an image processing apparatus in an exemplary embodiment of the present disclosure;
fig. 10 schematically illustrates an electronic device structure diagram of a terminal device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
On current intelligent terminal devices such as mobile phones and tablet computers, filter effects are commonly added to images in order to improve the photographing experience and diversify the images. Generally speaking, different color processing is applied to the target area and the background area of an image so that the brightness or chroma of the target area is higher than that of the background area; the subject corresponding to the target area is thereby highlighted more prominently, producing a cinematic effect when the end user takes a photo or video. However, most solutions do not support user-defined filter effects, and one image can be processed with only one filter effect.
In view of the above-described drawbacks and deficiencies of the prior art, an image processing method is provided in the present exemplary embodiment. Referring to fig. 1, the image processing method described above may include the steps of:
s11, acquiring an original image and a custom style migration configuration parameter; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area;
s12, calculating a corresponding feature transformation matrix according to the feature matrix of the original image and the feature matrix of the reference image;
s13, obtaining a style migration image based on the first mean processing parameter of the original image feature matrix, the feature transformation matrix and the second mean processing parameter of the reference image feature matrix;
and S14, performing image fusion processing on the style migration image and the original image to obtain a target image which conforms to the user-defined style migration configuration parameters.
In the image processing method provided by the present exemplary embodiment, on one hand, a user may define a style to be converted by configuring a reference image in advance, and may configure a region to be processed in an original image corresponding to the reference image; on the other hand, the original image, the reference image and the feature conversion matrix are used for calculating to obtain the style migration image, and then the style migration image and the original image are fused to obtain the target image meeting the customized style migration configuration, so that the region to be processed in the original image is converted into the style of the reference image. And different regions in the original image can be simultaneously converted into different styles.
Hereinafter, each step of the image processing method in the present exemplary embodiment will be described in more detail with reference to the drawings and examples.
In step S11, an original image and user-defined style migration configuration parameters are acquired; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area.
in this exemplary embodiment, the image processing method may be applied to an intelligent terminal device such as a mobile phone, a tablet computer, or a notebook computer. The smart terminal device may be configured with an imaging unit that may be configured to output a camera preview image or an image stream of video capture (image stream of a certain frame rate) to the processor. The intelligent terminal equipment can also be provided with a memory and a processor. The memory can be used for storing the shot images or videos and can be acquired and processed by the processor; the processor may generally comprise one or more of a CPU, a GPU, and a DSP, and the image processing method described above may be run on the GPU or the DSP processor. The intelligent terminal device can be further provided with input and output devices, including but not limited to a display, a touch screen and the like, and a user interacts through the input and output devices, including configuring the style migration configuration parameters for the original image, displaying the image on an interactive interface and the like.
For example, an interactive interface may be provided at the smart terminal device, and a user may input an original image to be processed and a corresponding reference image on the interactive interface.
Specifically, the original image may be a real-time camera preview image from an imaging unit of the terminal device, a captured image from a memory, an image stream at a certain frame rate at the time of video capture by the imaging unit, or a captured video from the memory. When the input is a video, the video can be segmented to obtain continuous multi-frame images as original images.
After the terminal acquires the original image input by the user, a segmentation processing model can be called to perform image segmentation on the original image, obtaining a corresponding foreground image and background image together with a main body template corresponding to the foreground image and a background template corresponding to the background image. The foreground image and its template can then be automatically configured as the region to be processed, with the reference image configured as the style migration reference image for the foreground region. Alternatively, the user may designate the foreground image or the background image as the region to be processed and configure the corresponding reference image.
For example, the original image may be segmented using an image segmentation model based on a deep neural network. If the image segmentation model is a portrait subject detection and segmentation model, a portrait subject template and a unique background template corresponding to the original image can be output. If the image segmentation model is a general subject detection and segmentation model, a unique main body template and a unique background template can be output. If the image segmentation model is a semantic segmentation model, a plurality of semantic subject templates and a unique background template can be output. For example, referring to the original image shown in fig. 2, image segmentation can be performed on it to obtain the corresponding main body template and background template shown in fig. 3 and fig. 4. The template format may be a grayscale map whose size is consistent with the input image: pixel values inside the corresponding region are 255, and all other pixel values are 0. Specifically, for a main body template, the pixel values in the main body region are 255 and the remaining pixel values are 0; for a background template, the pixel values in the background area are 255 and the remaining pixel values are 0.
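As a concrete illustration of the 255/0 template convention, the following minimal sketch builds the two templates from a per-pixel label map; the function name and the assumption that the segmentation model marks subject pixels with nonzero labels are illustrative, not taken from the patent:

    import numpy as np

    def make_templates(label_map):
        """Build main body and background templates from a segmentation label map.

        label_map: (H, W) integer array from a segmentation model (assumed),
                   nonzero where a subject was detected.
        """
        subject = np.where(label_map > 0, 255, 0).astype(np.uint8)  # 255 inside the subject
        background = 255 - subject                                  # 255 inside the background
        return subject, background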
In step S12, a corresponding feature transformation matrix is calculated from the feature matrix of the original image and the feature matrix of the reference image.
In this example embodiment, a style migration model may be trained in advance and used to perform style migration on the original image according to the user-defined reference image. After the original image and the corresponding reference image are obtained, they can be used as input parameters of the style migration model. First, the corresponding feature transformation matrix may be computed. Specifically, this may include:
step S21, encoding the original image and obtaining a corresponding first feature matrix, performing convolution processing on the first feature matrix to obtain a first convolution result, calculating a first covariance matrix of the channel dimension for the first convolution result, and performing full-connection processing on the first covariance matrix to obtain a first preprocessing matrix; and
step S22, encoding the reference image and acquiring a corresponding second feature matrix, performing convolution processing on the second feature matrix to acquire a second convolution result, calculating a second covariance matrix of channel dimensions for the second convolution result, and performing full-connection processing on the second covariance matrix to acquire a second preprocessing matrix;
step S23, multiplying the first pre-processing matrix and the second pre-processing matrix to obtain the feature transformation matrix.
Specifically, the original image and the reference image may be encoded using an encoder network model to obtain the corresponding feature matrices. For example, the encoder network may use the shallow part of a VGG network up to the relu3_1 layer, with a fully convolutional structure. The encoder network model may be trained in advance on public data sets, with its parameters then fixed. The original image and the reference image serve as inputs to the encoder network model, and the output is the feature matrix of the relu3_1 feature layer, with dimensions (W, H, C), where W is the feature layer channel width, H is the feature layer channel height, and C is the number of channels; W and H depend on the size of the input image. The first and second feature matrices are each passed through a 3-layer convolutional network, and a covariance matrix over the channel dimension is calculated from each convolution result, with dimensions (C, C); at this point C is independent of the input image size. Each covariance matrix then passes through a single fully connected layer, and the resulting (C, C) first preprocessing matrix for the original image is matrix-multiplied with the (C, C) second preprocessing matrix for the reference image, finally yielding the feature transformation matrix for the current original image and reference image, with dimensions (C, C).
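A minimal PyTorch sketch of this pipeline follows. It is written under assumptions the patent leaves open: the encoder is VGG up to relu3_1 with C = 256 channels, and the single fully connected layer is applied row-wise to the (C, C) covariance matrix; the class and function names are invented for illustration:

    import torch
    import torch.nn as nn

    class PreprocessBranch(nn.Module):
        """Conv stack -> channel covariance -> fully connected layer (steps S21/S22)."""
        def __init__(self, c: int = 256):  # relu3_1 of VGG has 256 channels (assumption)
            super().__init__()
            self.convs = nn.Sequential(    # the 3-layer convolutional network
                nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c, c, 3, padding=1),
            )
            self.fc = nn.Linear(c, c)      # single fully connected layer (one plausible reading)

        def forward(self, feat: torch.Tensor) -> torch.Tensor:
            # feat: (B, C, H, W) feature matrix from the encoder's relu3_1 layer
            x = self.convs(feat)
            b, c, h, w = x.shape
            x = x.reshape(b, c, h * w)
            x = x - x.mean(dim=2, keepdim=True)                  # zero-mean per channel
            cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)  # (B, C, C), independent of W, H
            return self.fc(cov)                                  # (B, C, C) preprocessing matrix

    def feature_transformation_matrix(p_first, p_second):
        # step S23: multiply the first and second preprocessing matrices, giving T with dims (C, C)
        return torch.bmm(p_first, p_second)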
In addition, the above steps S21 and S22 may be performed synchronously, so that the first pre-processing matrix corresponding to the original image and the second pre-processing matrix corresponding to the reference image may be acquired simultaneously.
Based on the above, in other exemplary embodiments of the present disclosure, when calculating the feature transformation matrix, the feature matrix output by the encoder may be downsampled first, so as to implement the dimension reduction processing on the feature matrix. Therefore, the dimensionality (C, C) of the characteristic conversion matrix is reduced, namely the value of C is reduced, so that the calculation amount of the related convolution layer and full connection layer in the calculation process of the characteristic conversion matrix is further reduced, and the calculation efficiency of the characteristic conversion matrix is improved.
Based on the above, in other exemplary embodiments of the present disclosure, for each reference image, the corresponding second pre-processing matrix may be stored, and the corresponding identification information may be configured. When the image is used as the reference image again, the corresponding second preprocessing matrix can be directly called from the database according to the identification information for use in calculating the current feature conversion matrix, so that repeated calculation is avoided, and the operation efficiency is improved.
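One plausible realization of this cache is sketched below; the patent does not specify the form of the identification information, so a content hash is assumed here:

    import hashlib

    _second_matrix_cache = {}  # identification information -> second preprocessing matrix

    def reference_id(image_bytes: bytes) -> str:
        # identification information for a reference image (assumed scheme: content hash)
        return hashlib.sha256(image_bytes).hexdigest()

    def get_second_matrix(image_bytes, compute_fn):
        """Return the cached second preprocessing matrix, computing it only once."""
        key = reference_id(image_bytes)
        if key not in _second_matrix_cache:
            # run encoding, convolution, covariance, and full connection a single time
            _second_matrix_cache[key] = compute_fn(image_bytes)
        return _second_matrix_cache[key]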
In step S13, a style migration image is obtained based on the first mean processing parameter of the original image feature matrix, the feature transformation matrix, and the second mean processing parameter of the reference image feature matrix.
In this exemplary embodiment, after the feature transformation matrix is obtained, the product of the feature transformation matrix and the first mean processing parameter (the original image feature matrix after de-meaning) may be summed with the second mean processing parameter (the channel-wise mean of the reference image feature matrix), and the result passed through the decoder to obtain the style migration image corresponding to the original image. In this case, the style migration image is obtained by performing style migration on the entire content of the original image. Specifically, the formula is:
F_dst=T*F_ori_relu3_1_zero_mean+mean(F_style_relu3_1)
Here, the output of the feature matrix transformation module is denoted F_dst; the feature matrix corresponding to the original image is denoted F_ori_relu3_1, and its de-meaned version is denoted F_ori_relu3_1_zero_mean, reshaped to dimensions (C, W × H); the feature matrix corresponding to the reference image is denoted F_style_relu3_1, likewise reshaped to (C, W × H); mean(F_style_relu3_1) denotes averaging over the second dimension (W × H) of F_style_relu3_1, yielding a feature vector of dimension (C); and the feature transformation matrix is denoted T, with dimensions (C, C).
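Read as tensor operations, the formula amounts to the short sketch below; shapes follow the (C, W × H) convention above, batching is omitted, and the function name is illustrative:

    import torch

    def style_migration_features(T, f_ori, f_style):
        """F_dst = T * F_ori_relu3_1_zero_mean + mean(F_style_relu3_1).

        T:       (C, C) feature transformation matrix
        f_ori:   (C, W*H) original-image feature matrix from relu3_1
        f_style: (C, W*H) reference-image feature matrix from relu3_1
        """
        f_ori_zero_mean = f_ori - f_ori.mean(dim=1, keepdim=True)  # remove per-channel mean
        style_mean = f_style.mean(dim=1, keepdim=True)             # (C, 1) channel means
        return T @ f_ori_zero_mean + style_mean                    # broadcasts over W*H

The resulting F_dst is then passed to the decoder network to produce the style migration image.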
In step S14, image fusion processing is performed on the style migration image and the original image to obtain a target image that conforms to the user-defined style migration configuration parameters.
In this exemplary embodiment, after the style transition image is acquired, the style transition image and the original image may be fused according to the template information of the selected to-be-processed area, so that the stylization of the selected area is realized and the original image effect is maintained in the non-selected area.
For example, for the original image shown in fig. 2, the selected reference image is shown in fig. 5, and the style migration image calculated in the above steps is shown in fig. 6. A background area is determined from the original image and the background template, and the style-migrated foreground area is determined from the style migration image and the foreground template; the transformed foreground area and the background area are then fused, so that the image style of the selected region to be processed is transformed while the non-selected region keeps the original image effect, yielding the target image shown in fig. 7.
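Given the 255/0 grayscale templates produced by the segmentation step, this fusion reduces to alpha blending with a binary mask; a sketch assuming all three images share the same size (the patent does not specify any edge feathering):

    import numpy as np

    def fuse_by_template(original, stylized, template):
        """Keep the original outside the selected region and the stylized image inside it.

        original, stylized: (H, W, 3) uint8 images of equal size
        template: (H, W) grayscale mask, 255 inside the selected region, 0 elsewhere
        """
        alpha = (template.astype(np.float32) / 255.0)[..., None]  # (H, W, 1), values in {0, 1}
        fused = alpha * stylized + (1.0 - alpha) * original
        return fused.astype(np.uint8)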
Based on the above, in other exemplary embodiments of the present disclosure, when the feature transformation matrix is calculated in step S12, the foreground image or background image to be transformed may be used as input together with the corresponding reference image. That is, when the region to be converted is the foreground region of the original image, the encoding, convolution, channel-dimension covariance calculation, full-connection processing, and matrix multiplication described above may be performed on the foreground region obtained from the image segmentation processing and on the reference image, so that a feature transformation matrix specific to the foreground image and the migration-style reference image is output. The feature transformation matrix can then be applied directly to the foreground image; being targeted in this way improves its performance. On this basis, when the style migration image is calculated in step S13, the feature transformation matrix and the foreground image may be used directly to obtain a style migration image containing only the style-transformed foreground content. Further, this saves computation in step S14 and increases the speed of image processing.
Furthermore, in other exemplary embodiments of the present disclosure, different reference images may be selected for different regions in the original image, so that different style conversions are performed simultaneously. Specifically, on the terminal side, the method described above may further include:
step S21, receiving the original image input by the user and one or more reference images;
step S22, carrying out segmentation processing on the original image, and displaying a main body template and a background template after the segmentation processing;
step S23, responding to the selection operation of the user, configuring a corresponding first reference image for the main body template and/or configuring a corresponding second reference image for the background template;
step S24, establishing a first image processing task based on the main body template and the first reference image to obtain a first style migration image, and/or establishing a second image processing task based on the background template and the second reference image to obtain a second style migration image; and
step S25, performing image fusion processing on the first style migration image, the second style migration image, and the original image to obtain a target image that conforms to the user-defined style migration configuration parameters.
Specifically, when a user inputs an original image and a plurality of reference images, the segmentation result of the original image is first shown in the interactive interface, for example the foreground region image with its foreground template and the background region image with its background template. When the original image contains several persons or objects, each person or object may be treated as a separate foreground image.
In the interactive interface, each segmented foreground image and the background image can be displayed alongside the reference images input by the user. The user can then configure the corresponding reference image for each foreground image through click-to-select operations in the interface, and a corresponding image processing task is created for each foreground image and for the background image.
For example, for the original image shown in fig. 2, when it is divided into a foreground image and a background image and different reference images are configured for each, two image processing tasks can be established. The two tasks may be executed simultaneously to obtain the corresponding style migration images of different styles. Finally, image fusion processing can be performed on the two style migration images based on the original image, yielding a target image whose foreground and background are converted differently.
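With the fuse_by_template sketch shown earlier, merging the outputs of the two tasks could look like the following (all variable names are illustrative, not from the patent):

    # hypothetical outputs of the two parallel image processing tasks
    target = fuse_by_template(original, stylized_foreground, subject_template)
    target = fuse_by_template(target, stylized_background, background_template)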
In addition, in the present exemplary embodiment, based on the above, a style transition model may be trained for executing the contents of step S12 to step S13 in the above-mentioned image processing method. Specifically, as shown with reference to fig. 8, the following steps may be included:
step S31, acquiring training sample images and corresponding training reference images;
step S32, calculating a corresponding training feature transformation matrix based on the feature matrix of the training sample image and the feature matrix of the training reference image;
step S33, acquiring a training style transition image based on a first mean processing parameter of the training sample image feature matrix, the training feature transformation matrix and a second mean processing parameter of the training reference image feature matrix;
step S34, encoding the training style migration image and the training reference image to obtain the corresponding Gram matrices, and calculating the Euclidean distance between the Gram matrices to obtain a style loss function;
step S35, encoding the training style migration image and the training sample image to obtain the corresponding feature matrices, and calculating the Euclidean distance between the feature matrices to obtain a content loss function;
step S36, determining a target loss function based on the style loss function and the content loss function, and training the style migration model according to the target loss function.
In particular, the style migration model may include training of a feature transformation matrix computation network and training of a style migration image computation network.
For each training sample image, image segmentation can be performed to obtain the corresponding foreground and background images and the corresponding foreground and background templates. A correspondence between a training sample image and a training reference image is established at random to form a pair of input parameters. To improve the model's adaptability to different migration styles, several training reference images can be set for one training sample image. For each pair of images, the corresponding feature matrices are first calculated using the encoder network used in training the feature transformation matrix model. The encoder can be trained in advance on an open data set with its model parameters fixed, so it does not participate in the training of the style migration model parameters. The encoder network can adopt the shallow part of a VGG network up to the relu3_1 layer, with a fully convolutional structure.
The feature transformation matrix computation network may adopt a structure based on a deep neural network. The training sample image and the training reference image are input separately and processed by the encoder network to output feature matrices, i.e. the relu3_1 feature layer outputs with dimensions (W, H, C), where W is the feature layer channel width, H is the feature layer channel height, and C is the number of channels; W and H depend on the size of the input image. Each feature matrix is passed through a 3-layer convolutional network, after which a covariance matrix over the channel dimension is calculated, with dimensions (C, C) independent of the input image size. Each covariance matrix is then passed through a single fully connected layer outputting a (C, C) matrix, and the two outputs are matrix-multiplied to obtain the training feature transformation matrix, with dimensions (C, C). This realizes the training-time computation of the feature transformation matrix.
Based on the obtained training feature transformation matrix, the corresponding training style migration image is acquired using the following formula:
F_dst=T*F_ori_relu3_1_zero_mean+mean(F_style_relu3_1)
During model training, the output of the feature matrix transformation module is denoted F_dst; the feature matrix corresponding to the training sample image is denoted F_ori_relu3_1, and its de-meaned version is denoted F_ori_relu3_1_zero_mean, reshaped to dimensions (C, W × H); the feature matrix corresponding to the training reference image is denoted F_style_relu3_1, likewise reshaped to (C, W × H); mean(F_style_relu3_1) denotes averaging over the second dimension (W × H) of F_style_relu3_1, yielding a feature vector of dimension (C); and the feature transformation matrix is denoted T, with dimensions (C, C). The decoder network may adopt a structure symmetric to the encoder network.
The loss function of the model may include both a content loss and a style loss. For the style loss, the training style migration image and the training reference image are encoded by a second encoder to obtain the corresponding Gram matrices, and the Euclidean distance between the Gram matrices is calculated to obtain the style loss function. Specifically, the second encoder may be based on the VGG network, trained in advance on the ImageNet data set, and output the relu4_1 feature layer of the VGG network.
The training style migration image and the training sample image are input to the second encoder, which outputs the feature matrices of the relu4_1 feature layer. The Euclidean distance between the Gram matrices of the two feature matrices represents the style loss, and the Euclidean distance between the two feature matrices themselves represents the content loss. The two distances are multiplied by preset weight coefficients and summed to obtain the final loss function, which is used to optimize the style migration model.
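The two losses can be sketched as follows in PyTorch; the relu4_1 features are assumed flattened to (B, C, N), and the weight values are placeholders, since the patent only speaks of preset weight coefficients:

    import torch

    def gram(f):
        # f: (B, C, N) feature matrix from the second encoder's relu4_1 layer
        return torch.bmm(f, f.transpose(1, 2)) / f.shape[2]

    def style_loss(f_out, f_ref):
        return torch.dist(gram(f_out), gram(f_ref), p=2)  # Euclidean distance of Gram matrices

    def content_loss(f_out, f_sample):
        return torch.dist(f_out, f_sample, p=2)           # Euclidean distance of feature matrices

    def total_loss(f_out, f_ref, f_sample, w_style=10.0, w_content=1.0):
        # weighted sum with preset coefficients (the values here are assumptions)
        return w_style * style_loss(f_out, f_ref) + w_content * content_loss(f_out, f_sample)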
According to the method provided by the embodiments of the present disclosure, the original image is segmented into subject and background areas. Using a user-defined reference image, style migration can be applied to the foreground image of the subject area or to the background image of the background area; or different reference images can be configured so that the two areas are migrated to different styles. During style migration, the image content, including the content structure and region edges, is left unchanged. The style-migrated image is fused with the original image according to the region segmentation template, achieving a better effect than traditional filter processing. Because user-defined reference images are supported, the user can define the target style by configuring reference images of different styles, offering more freedom than traditional filters. The deep neural network designed here has few layers, a small total computation load, and a high computation speed, and can support high-frame-rate image stream processing.
It is to be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, as shown in fig. 9, an image processing apparatus 90 according to the present example embodiment includes: an image acquisition module 901, a feature transformation matrix calculation module 902, a style migration image acquisition module 903, and an image fusion processing module 904.
The image acquisition module 901 may be configured to acquire an original image and user-defined style migration configuration parameters; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area.
The feature transformation matrix calculation module 902 may be configured to calculate a corresponding feature transformation matrix according to the feature matrix of the original image and the feature matrix of the reference image.
The style migration image acquisition module 903 may be configured to acquire a style migration image based on the first mean processing parameter of the original image feature matrix, the feature transformation matrix, and the second mean processing parameter of the reference image feature matrix.
The image fusion processing module 904 may be configured to perform image fusion processing on the style migration image and the original image to obtain a target image that conforms to the user-defined style migration configuration parameters.
In an example of the present disclosure, the feature transformation matrix calculation module 902 may include a first calculation unit, a second calculation unit, and a third calculation unit (not shown in the figure).
the first calculating unit may be configured to encode the original image and obtain a corresponding first feature matrix, perform convolution processing on the first feature matrix to obtain a first convolution result, calculate a first covariance matrix of a channel dimension for the first convolution result, and perform full join processing on the first covariance matrix to obtain a first preprocessing matrix; and
the second calculating unit may be configured to encode the reference image and obtain a corresponding second feature matrix, perform convolution processing on the second feature matrix to obtain a second convolution result, calculate a second covariance matrix of a channel dimension for the second convolution result, and perform full join processing on the second covariance matrix to obtain a second preprocessing matrix;
the third calculation unit may be configured to multiply the first pre-processing matrix and the second pre-processing matrix to obtain the feature transformation matrix.
In an example of the present disclosure, the style migration image acquisition module 903 may be configured to:
sum the product of the feature transformation matrix and the first mean processing parameter obtained after de-meaning the original image feature matrix with the second mean processing parameter obtained after averaging the reference image feature matrix, so as to obtain the style migration image.
In one example of the present disclosure, the apparatus may further include an image segmentation module and an image configuration module (not shown in the figure).
the image segmentation module may be configured to perform segmentation processing on the original image to obtain a corresponding foreground region image and a corresponding background region image.
The image configuration module may be configured to, when the foreground region image is configured as the region to be processed, calculate a corresponding feature transformation matrix by using a feature matrix of the foreground region image and a feature matrix of the reference image; and acquiring a style migration image based on the mean processing parameter of the foreground region image feature matrix, the feature transformation matrix and the mean processing parameter of the reference image feature matrix.
In one example of the present disclosure, the apparatus may further include a data receiving module, a display module, a reference image configuration module, and a task establishing module (not shown in the figure).
the data receiving module may be configured to receive the original image input by a user and one or more of the reference images.
The display module may be configured to perform segmentation processing on the original image and display the main body template and the background template after the segmentation processing.
The reference image configuration module may be configured to configure a corresponding first reference image for the subject template and/or a corresponding second reference image for the background template in response to a selection operation of a user.
The task establishing module may be configured to establish a first image processing task based on the subject template and the first reference image to obtain a first style migration image; and/or establishing a second image processing task based on the background template and the second reference image to obtain a second style migration image.
In one example of the present disclosure, the apparatus may further include an image fusion execution module (not shown in the figure).
The image fusion execution module may be configured to perform image fusion processing on the first style migration image, the second style migration image, and the original image when the first and second style migration images are acquired, so as to obtain a target image that conforms to the user-defined style migration configuration parameters.
In one example of the present disclosure, the apparatus may further include a model training module (not shown).
The model training module may be configured to pre-train a style migration model, including: acquiring a training sample image and a corresponding training reference image as input parameters; calculating a corresponding training feature transformation matrix based on the feature matrix of the training sample image and the feature matrix of the training reference image; acquiring a training style migration image based on a first mean processing parameter of the training sample image feature matrix, the training feature transformation matrix and a second mean processing parameter of the training reference image feature matrix; coding the training style migration image and the training reference image to obtain a corresponding gram matrix, and calculating the Euclidean distance of the gram matrix to obtain a style loss function; coding the training style migration image and the training sample image to obtain a corresponding feature matrix, and calculating the Euclidean distance of the feature matrix to obtain a content loss function; and determining a target loss function based on the style loss function and the content loss function, and training the style migration model according to the target loss function.
The details of each module in the image processing apparatus are already described in detail in the corresponding image processing method, and therefore, the details are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
FIG. 10 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device to implement an embodiment of the invention.
It should be noted that the computer system 100 of the electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiment of the present invention.
As shown in fig. 10, the computer system 100 includes a Central Processing Unit (CPU) 101 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 102 or a program loaded from a storage section 108 into a Random Access Memory (RAM) 103. In the RAM 103, various programs and data necessary for system operation are also stored. The CPU 101, ROM 102, and RAM 103 are connected to each other via a bus 104. An Input/Output (I/O) interface 105 is also connected to the bus 104.
The following components are connected to the I/O interface 105: an input portion 106 including a keyboard, a mouse, and the like; an output section 107 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 108 including a hard disk and the like; and a communication section 109 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 109 performs communication processing via a network such as the internet. A drive 110 is also connected to the I/O interface 105 as needed. A removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 110 as necessary, so that a computer program read out therefrom is mounted into the storage section 108 as necessary.
In particular, according to an embodiment of the present invention, the processes described below with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 109, and/or installed from the removable medium 111. When the computer program is executed by a Central Processing Unit (CPU) 101, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiment of the present invention may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
It should be noted that, as another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below. For example, the electronic device may implement the steps shown in fig. 1.
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring an original image and user-defined style migration configuration parameters; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area;
calculating a corresponding feature transformation matrix according to the feature matrix of the original image and the feature matrix of the reference image;
acquiring a style migration image based on a first mean processing parameter of the original image feature matrix, the feature transformation matrix and a second mean processing parameter of the reference image feature matrix;
and carrying out image fusion processing on the style migration image and the original image to obtain a target image that accords with the user-defined style migration configuration parameters.
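For orientation only (this sketch is an editorial illustration, not part of the claims), the four claimed steps can be read as the following Python pipeline; `encode`, `transform_matrix`, `apply_transform`, and `fuse` are hypothetical callables standing in for the operations detailed in claims 2 to 6.

```python
def process_image(original, region_mask, reference,
                  encode, transform_matrix, apply_transform, fuse):
    """Hypothetical orchestration of the four steps recited in claim 1."""
    f_orig = encode(original)                    # feature matrix of the original image
    f_ref = encode(reference)                    # feature matrix of the reference image
    t = transform_matrix(f_orig, f_ref)          # feature transformation matrix
    styled = apply_transform(f_orig, f_ref, t)   # style migration image
    return fuse(styled, original, region_mask)   # fusion into the target image
```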
2. The image processing method according to claim 1, wherein the calculating a corresponding feature transformation matrix according to the feature matrix of the original image and the feature matrix of the reference image comprises:
encoding the original image to acquire a corresponding first feature matrix, performing convolution processing on the first feature matrix to acquire a first convolution result, calculating a first covariance matrix over the channel dimension of the first convolution result, and performing fully-connected processing on the first covariance matrix to acquire a first preprocessing matrix;
encoding the reference image to acquire a corresponding second feature matrix, performing convolution processing on the second feature matrix to acquire a second convolution result, calculating a second covariance matrix over the channel dimension of the second convolution result, and performing fully-connected processing on the second covariance matrix to acquire a second preprocessing matrix; and
multiplying the first preprocessing matrix by the second preprocessing matrix to obtain the feature transformation matrix.
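A minimal PyTorch sketch of one way to realize claim 2, assuming each image has already been encoded into a (B, C, H, W) feature tensor; the layer sizes and the covariance normalization are assumptions, not recited in the claim.

```python
import torch
import torch.nn as nn

class PreprocessBranch(nn.Module):
    """Convolution -> channel-dimension covariance -> fully-connected processing."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(channels * channels, channels * channels)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:  # feat: (B, C, H, W)
        x = self.conv(feat)                                  # convolution result
        b, c, h, w = x.shape
        x = x.reshape(b, c, h * w)
        x = x - x.mean(dim=2, keepdim=True)
        cov = torch.bmm(x, x.transpose(1, 2)) / (h * w)      # channel covariance matrix
        return self.fc(cov.reshape(b, -1)).reshape(b, c, c)  # preprocessing matrix

def feature_transformation_matrix(f_orig, f_ref, branch_orig, branch_ref):
    """Multiply the first and second preprocessing matrices (last step of claim 2)."""
    return torch.bmm(branch_orig(f_orig), branch_ref(f_ref))
```

In the approach of the cited Li et al. CVPR 2019 paper, a similar pair of branches produces a linear transformation in a single forward pass, which is what makes per-request, user-defined style migration feasible on a terminal device.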
3. The image processing method according to claim 1, wherein the acquiring a style migration image based on the first mean processing parameter of the original image feature matrix, the feature transformation matrix, and the second mean processing parameter of the reference image feature matrix comprises:
and summing the product of the feature transformation matrix and the first mean processing parameter after the mean processing of the original image feature matrix and the second mean processing parameter after the mean processing of the reference image feature matrix to obtain the style migration image.
4. The image processing method according to claim 1, wherein after the original image is acquired, the method further comprises:
carrying out segmentation processing on the original image to obtain a corresponding foreground area image and a corresponding background area image;
when the foreground area image is configured as the to-be-processed area, calculating a corresponding feature transformation matrix by using the feature matrix of the foreground area image and the feature matrix of the reference image; and acquiring a style migration image based on the mean processing parameter of the foreground area image feature matrix, the feature transformation matrix, and the mean processing parameter of the reference image feature matrix.
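A sketch of the recomposition implied by claim 4, assuming a 0/1 foreground mask from any segmentation model (the mask source and pixel-wise blending are assumptions):

```python
import torch

def compose_foreground_style(original: torch.Tensor, styled: torch.Tensor,
                             fg_mask: torch.Tensor) -> torch.Tensor:
    """Keep style-migrated pixels inside the foreground area, original pixels elsewhere."""
    return styled * fg_mask + original * (1.0 - fg_mask)
```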
5. The image processing method according to claim 1, characterized in that the method further comprises:
receiving the original image and one or more reference images input by a user;
carrying out segmentation processing on the original image, and displaying the segmented subject template and background template;
in response to a selection operation by the user, configuring a corresponding first reference image for the subject template and/or a corresponding second reference image for the background template;
establishing a first image processing task based on the subject template and the first reference image to acquire a first style migration image; and/or
establishing a second image processing task based on the background template and the second reference image to acquire a second style migration image.
6. The image processing method according to claim 5, wherein when the first style migration image and the second style migration image are acquired, the method further comprises:
and carrying out image fusion processing on the first style migration image, the second style migration image, and the original image to obtain a target image that accords with the user-defined style migration configuration parameters.
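One plausible reading of the fusion in claims 5 and 6, again as a sketch: the subject-template result fills the foreground, the background-template result fills the background, and the result is optionally blended with the original (the `alpha` strength knob is an assumption, not recited in the claims).

```python
import torch

def fuse_dual_styles(first_styled: torch.Tensor, second_styled: torch.Tensor,
                     original: torch.Tensor, fg_mask: torch.Tensor,
                     alpha: float = 1.0) -> torch.Tensor:
    """Fuse the two style migration images and the original into the target image."""
    migrated = first_styled * fg_mask + second_styled * (1.0 - fg_mask)
    return alpha * migrated + (1.0 - alpha) * original
```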
7. The image processing method according to claim 1, characterized in that the method further comprises: pre-training a style migration model, comprising:
acquiring a training sample image and a corresponding training reference image;
calculating a corresponding training feature transformation matrix based on the feature matrix of the training sample image and the feature matrix of the training reference image;
acquiring a training style migration image based on a first mean processing parameter of the training sample image feature matrix, the training feature transformation matrix and a second mean processing parameter of the training reference image feature matrix;
encoding the training style migration image and the training reference image to obtain corresponding Gram matrices, and calculating the Euclidean distance between the Gram matrices to obtain a style loss function;
encoding the training style migration image and the training sample image to obtain corresponding feature matrices, and calculating the Euclidean distance between the feature matrices to obtain a content loss function;
and determining a target loss function based on the style loss function and the content loss function, and training the style migration model according to the target loss function.
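A compact sketch of the claim 7 losses, assuming a fixed feature encoder (e.g. a pretrained CNN) and MSE as the squared Euclidean distance; the `style_weight` hyperparameter and Gram normalization are assumptions:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:   # feat: (B, C, H, W)
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)
    return torch.bmm(x, x.transpose(1, 2)) / (c * h * w)

def migration_losses(encode, styled, sample, reference, style_weight: float = 10.0):
    """Style loss: distance between Gram matrices; content loss: distance between features."""
    style_loss = F.mse_loss(gram_matrix(encode(styled)),
                            gram_matrix(encode(reference)))
    content_loss = F.mse_loss(encode(styled), encode(sample))
    return content_loss + style_weight * style_loss     # target loss function
```

Raising `style_weight` pushes the model toward the reference's texture statistics at the cost of content fidelity; the weighting is the usual knob for trading the two losses off.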
8. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an original image and user-defined style migration configuration parameters; the user-defined style migration configuration parameters comprise a to-be-processed area of the original image and a reference image corresponding to the to-be-processed area;
the characteristic transformation matrix calculation module is used for calculating a corresponding characteristic transformation matrix according to the characteristic matrix of the original image and the characteristic matrix of the reference image;
the style migration image acquisition module is used for acquiring a style migration image based on a first mean processing parameter of the original image feature matrix, the feature transformation matrix, and a second mean processing parameter of the reference image feature matrix;
and the image fusion processing module is used for carrying out image fusion processing on the style migration image and the original image so as to obtain a target image that accords with the user-defined style migration configuration parameters.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 7.
10. A terminal device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the image processing method according to any one of claims 1 to 7.
CN202010601380.XA 2020-06-28 2020-06-28 Image processing method and device, computer readable medium and terminal equipment Pending CN111652830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601380.XA CN111652830A (en) 2020-06-28 2020-06-28 Image processing method and device, computer readable medium and terminal equipment

Publications (1)

Publication Number Publication Date
CN111652830A true CN111652830A (en) 2020-09-11

Family

ID=72343145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601380.XA Pending CN111652830A (en) 2020-06-28 2020-06-28 Image processing method and device, computer readable medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN111652830A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426858A (en) * 2017-08-29 2019-03-05 京东方科技集团股份有限公司 Neural network, training method, image processing method and image processing apparatus
CN109766895A (en) * 2019-01-03 2019-05-17 京东方科技集团股份有限公司 The training method and image Style Transfer method of convolutional neural networks for image Style Transfer
CN110852940A (en) * 2019-11-01 2020-02-28 天津大学 Image processing method and related equipment
CN110956654A (en) * 2019-12-02 2020-04-03 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XUETING LI et al.: "Learning Linear Transformations for Fast Image and Video Style Transfer", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215854A (en) * 2020-10-19 2021-01-12 珠海金山网络游戏科技有限公司 Image processing method and device
CN112241744A (en) * 2020-10-20 2021-01-19 北京字跳网络技术有限公司 Image color migration method, device, equipment and computer readable medium
CN114765692A (en) * 2021-01-13 2022-07-19 北京字节跳动网络技术有限公司 Live broadcast data processing method, device, equipment and medium
CN114765692B (en) * 2021-01-13 2024-01-09 北京字节跳动网络技术有限公司 Live broadcast data processing method, device, equipment and medium
CN113435454A (en) * 2021-05-21 2021-09-24 厦门紫光展锐科技有限公司 Data processing method, device and equipment
CN113781292A (en) * 2021-08-23 2021-12-10 北京达佳互联信息技术有限公司 Image processing method and device, electronic device and storage medium
WO2023231918A1 (en) * 2022-06-01 2023-12-07 北京字跳网络技术有限公司 Image processing method and apparatus, and electronic device and storage medium
CN115439307A (en) * 2022-08-08 2022-12-06 荣耀终端有限公司 Style conversion method, style conversion model generation method, and style conversion system
CN115439307B (en) * 2022-08-08 2023-06-27 荣耀终端有限公司 Style conversion method, style conversion model generation method and style conversion system
CN115937020A (en) * 2022-11-08 2023-04-07 北京字跳网络技术有限公司 Image processing method, apparatus, device, medium, and program product
CN115937020B (en) * 2022-11-08 2023-10-31 北京字跳网络技术有限公司 Image processing method, apparatus, device, medium, and program product

Similar Documents

Publication Publication Date Title
CN111652830A (en) Image processing method and device, computer readable medium and terminal equipment
Zeng et al. Learning image-adaptive 3d lookup tables for high performance photo enhancement in real-time
US9639956B2 (en) Image adjustment using texture mask
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN110163237A (en) Model training and image processing method, device, medium, electronic equipment
CN109587560A (en) Method for processing video frequency, device, electronic equipment and storage medium
CN110751649B (en) Video quality evaluation method and device, electronic equipment and storage medium
CN108921942B (en) Method and device for 2D (two-dimensional) conversion of image into 3D (three-dimensional)
WO2023000895A1 (en) Image style conversion method and apparatus, electronic device and storage medium
WO2023226584A1 (en) Image noise reduction method and apparatus, filtering data processing method and apparatus, and computer device
CN113034523A (en) Image processing method, image processing device, storage medium and computer equipment
CN109640151A (en) Method for processing video frequency, device, electronic equipment and storage medium
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
WO2024041235A1 (en) Image processing method and apparatus, device, storage medium and program product
US20240013354A1 (en) Deep SDR-HDR Conversion
CN112200817A (en) Sky region segmentation and special effect processing method, device and equipment based on image
CN110689478B (en) Image stylization processing method and device, electronic equipment and readable medium
EP4345771A1 (en) Information processing method and apparatus, and computer device and storage medium
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
CN113034412B (en) Video processing method and device
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN115567712A (en) Screen content video coding perception code rate control method and device based on just noticeable distortion by human eyes
CN112200816A (en) Method, device and equipment for segmenting region of video image and replacing hair
CN112132923A (en) Two-stage digital image style transformation method and system based on style thumbnail high-definition
CN115496989B (en) Generator, generator training method and method for avoiding image coordinate adhesion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination