CN113658066A - Image processing method and device and electronic equipment

Image processing method and device and electronic equipment

Info

Publication number
CN113658066A
CN113658066A (application CN202110912037.1A)
Authority
CN
China
Prior art keywords
image
filter
color
training
processed
Prior art date
Legal status
Pending
Application number
CN202110912037.1A
Other languages
Chinese (zh)
Inventor
毛芳勤
郭桦
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110912037.1A priority Critical patent/CN113658066A/en
Publication of CN113658066A publication Critical patent/CN113658066A/en
Priority to PCT/CN2022/110522 priority patent/WO2023016365A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device and electronic equipment, and belongs to the technical field of artificial intelligence. The method comprises the following steps: acquiring a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by using a target filter; determining the target filter based on the first filter image and the image to be processed; determining filter weight values respectively corresponding to the pixels in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter; and applying the target filter to each pixel in the image to be processed according to the filter weight value corresponding to each pixel to obtain a second filter image.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an image processing method and device and electronic equipment.
Background
In current photographing and photo-album processing, filters have become an essential image processing function. Essentially, a filter adjusts the colors of a picture in order to change the picture's style.
To allow a user to apply the filter effect of another image to a target image, filter migration technology has been developed: the filter on one picture is extracted and applied to a new picture, so that the corresponding filter effect can be obtained without downloading dedicated software.
However, a filter usually corresponds to a pure-color map of a single color. When the extracted filter is applied to other images, it produces a special color effect but also an effect similar to a mask, so that the color effect on the image looks very stiff and not natural enough.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method and apparatus, and an electronic device, which can solve the prior-art problem that the color effect is stiff and not natural enough when a filter extracted from a filter image is applied to other images.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by using a target filter;
determining the target filter based on the first filter image and the image to be processed;
determining filter weight values respectively corresponding to the pixels in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
and applying the target filter to each pixel in the image to be processed according to the filter weight value corresponding to each pixel to obtain a second filter image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the device comprises a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring a first filter image and an image to be processed, and the first filter image comprises an image which is subjected to filter processing by using a target filter;
a first determining module, configured to determine the target filter based on the first filter image and the image to be processed;
the second determining module is used for determining filter weight values corresponding to the pixels in the image to be processed respectively based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
and the filter module is used for applying the target filter to each pixel in the image to be processed by using the filter weight value corresponding to each pixel respectively to obtain a second filter image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the unprocessed image to be processed. Taking the image to be processed into account avoids the interference from the content colors of the first filter image itself that would arise if the target filter were extracted from the first filter image alone. Filter weight values corresponding to the respective pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to each pixel of the image to be processed in combination with the filter weight value corresponding to that pixel, obtaining the filter-processed second filter image. Because the filter weight values with which the target filter is applied to pixels of different colors are different, the color effect produced by the target filter across the whole second filter image is varied and natural, and no mask-like effect appears.
Drawings
FIG. 1 is a flowchart illustrating steps of an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an actual application of the image processing method provided in the embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a process of processing a picture by a deep learning network model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a training process of a deep learning network model provided in an embodiment of the present application;
fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, such that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like are used in a generic sense and do not limit the number of objects; for example, the first object can be one object or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an image processing method provided in an embodiment of the present application includes:
step 101: and acquiring a first filter image and an image to be processed.
In this step, the first filter image includes an image obtained by performing filter processing using the target filter; that is, the first filter image may be any image that exhibits a color effect after a filter has been applied to it, where the target filter is the applied filter. Applying a filter to an image produces a particular color effect in the image; a filter can be understood as a kind of color data, which may be, for example, a pure-color map of a single color. Thus, each filter may indicate one color. The color effect here may be an effect resulting from adjustment of color, texture, or the like. Of course, for images containing human faces, the color effects may also include effects brought by different makeup. The image to be processed may be any image selected by the user; specifically, the image to be processed is an image selected by the user that has not been processed by a filter. The user wants to apply the target filter to this image to achieve the same color effect as the first filter image.
Step 102: and determining a target filter based on the first filter image and the image to be processed.
In this step, note that a filter image with a color effect is generated after applying a filter to some image, and the applied filter can be extracted from the images before and after the filter was applied. Here, the image to be processed takes the place of the original image corresponding to the first filter image, i.e., the image that was filter-processed to obtain the first filter image. Of course, the filter applied to an image can also be obtained by performing filter extraction directly on the filter image, that is, extracting the filter color from the first filter image to obtain the target filter.
Step 103: and determining filter weight values corresponding to the pixels in the image to be processed respectively based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter.
In this step, the difference between colors can be understood as the difference between the color values of different colors in the same color space. The distance between two colors in the same color space can be used as a measure of the difference between them: the larger the distance, the larger the color difference; similarly, the smaller the distance, the smaller the difference, and if the distance is zero, the two colors are identical and have no difference.
The filter weight value of each pixel is associated with the difference between the color of that pixel and the color indicated by the target filter; therefore, the filter weight values of differently colored pixels differ. It can be understood that the image to be processed is composed of a large number of pixels, whose colors may be the same or different; pixels of the same color have the same filter weight value, and pixels of different colors have different filter weight values.
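A toy illustration of this distance idea is sketched below; it is not the patent's learned weight branch (described later), and the Gaussian fall-off and sigma value are illustrative assumptions:

```python
# Toy sketch: per-pixel weights that grow as the pixel color approaches the
# filter color. The Gaussian shape and sigma are illustrative assumptions.
import numpy as np

def toy_filter_weights(image: np.ndarray, filter_color: np.ndarray, sigma: float = 60.0) -> np.ndarray:
    """image: HxWx3 float array in [0, 255]; filter_color: length-3 array; returns HxW weights in (0, 1]."""
    dist = np.linalg.norm(image - filter_color, axis=-1)  # per-pixel color distance
    return np.exp(-(dist / sigma) ** 2)                   # zero distance -> weight 1.0

weights = toy_filter_weights(np.random.rand(4, 4, 3) * 255, np.array([30.0, 80.0, 200.0]))
```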
Step 104: and applying the target filter to each pixel in the image to be processed according to the filter weight value corresponding to each pixel to obtain a second filter image.
In this step, the target filter is applied to each pixel in the image to be processed using the filter weight value corresponding to that pixel; that is, the image to be processed is filter-processed through the target filter and the filter weight values. Specifically, for a target pixel of the image to be processed, the target filter is applied to the target pixel according to the filter weight value corresponding to the target pixel, where the target pixel ranges over all pixels of the image to be processed; that is, every pixel in the image to be processed is filter-processed. It can be understood that the filter weight value is a specific numerical value: when the target filter is applied to a pixel in the image to be processed with a certain filter weight value, the color value of the target filter is multiplied by the filter weight value to obtain a new color value, and this new color value is then applied to that pixel. For example, if the color value of the target filter is 100 and the filter weight value of a target pixel in the image to be processed is 0.5, applying the target filter to the target pixel includes: multiplying the color value 100 by the filter weight value 0.5 to obtain a new color value of 50, and applying the color value 50 to the target pixel. Because pixels of different colors in the image to be processed have different filter weight values, the color values applied to them differ, so the color effects of differently colored pixels in the second filter image are varied yet harmonious. For example, if the color indicated by the target filter is blue, then when it is applied to pixels of a blue-sky region with larger filter weight values, the sky becomes bluer, while when it is applied to pixels of a building region with smaller filter weight values, the building takes on only a faint, subtle blue, so that the overall color effect of the filter-processed second filter image is varied and natural.
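A minimal sketch of this weighted application step follows, assuming the additive combination (result = source + filter color × weight) used in the formula descriptions later in this document:

```python
# Sketch of applying the target filter with per-pixel weights; the additive
# combination mirrors the "result = ImgSource + ColorPredicted * WeightImg"
# relation used later in this document.
import numpy as np

def apply_filter(image: np.ndarray, filter_color: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """image: HxWx3 in [0, 255]; filter_color: length-3; weights: HxW in [0, 1]."""
    result = image + filter_color * weights[..., None]  # weight scales the filter color per pixel
    return np.clip(result, 0, 255)
```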
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the unprocessed image to be processed. Taking the image to be processed into account avoids the interference from the content colors of the first filter image itself that would arise if the target filter were extracted from the first filter image alone. Filter weight values corresponding to the respective pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to each pixel of the image to be processed in combination with the filter weight value corresponding to that pixel, obtaining the filter-processed second filter image. Because the filter weight values with which the target filter is applied to pixels of different colors are different, the color effect produced by the target filter across the whole second filter image is varied and natural, and no mask-like effect appears.
Optionally, determining a target filter based on the first filter image and the image to be processed includes:
and inputting the first filter image and the image to be processed into a preset target network model.
In this step, the preset target network model is a pre-trained network model. Here, an initial model may be trained based on a deep learning network to obtain a target network model. And the first filter image and the image to be processed are model input of the target network model.
And acquiring a first color characteristic of the first filter image and a second color characteristic of the image to be processed through the target network model.
In this step, the first color feature is a feature of the data associated with color in the first filter image, and the second color feature is a feature of the data associated with color in the image to be processed. Here, the color features may be produced as intermediate outputs of the target network model rather than as its final model output.
And acquiring a characteristic difference value of the first color characteristic and the second color characteristic through the target network model, and determining the characteristic difference value as a target filter.
In this step, the feature difference obtained by subtracting the second color feature from the first color feature may represent the target filter, and the color corresponding to this feature difference may be used as the color indicated by the target filter. It can be understood that the feature difference represents the color difference between the image to be processed and the first filter image; processing the image to be processed with this color difference therefore yields an image with no color difference from the first filter image. Hence, the feature difference can serve as the target filter of the first filter image.
In the embodiment of the application, the target filter is extracted by using the pre-trained target network model, and the first filter image and the image to be processed are input as the model of the target network model, so that the target filter can be quickly and accurately obtained.
Optionally, obtaining, by the target network model, a first color feature of the first filter image and a second color feature of the image to be processed includes:
and acquiring a first image characteristic vector of the first filter image and a second image characteristic vector of the image to be processed through an image characteristic extraction module of the target network model.
In this step, to facilitate image processing, each image may be converted into a mathematical representation, and this representation is used to stand for the first filter image and the image to be processed. The first image feature vector is the mathematical representation of the first filter image and can represent the features of each dimension of the first filter image. The second image feature vector is the mathematical representation of the image to be processed and can represent the features of each dimension of the image to be processed. The dimensions of an image may include a brightness dimension, a color dimension, and the like.
Specifically, the image feature extraction module may be implemented as a series of stacked mathematical operations that produce the image feature vector of an image. For example, the first image feature vector and the second image feature vector may be calculated using a first formula.
The first formula: OutFeature = w_n * (w_{n-1} * ( ... (w_1 * x + b_1) ... ) + b_{n-1}) + b_n, where x is the input and OutFeature is the output: OutFeature is the first image feature vector when x is the first filter image, and the second image feature vector when x is the image to be processed. Here * denotes the convolution operation, w_1 to w_n are n convolution kernels, and b_1 to b_n are n bias values, where n is a positive integer.
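A minimal PyTorch-style sketch of such a stacked-convolution feature extractor is given below; the patent's formula specifies only convolution kernels and bias values, so the layer count, channel widths, kernel size, and ReLU activations are assumptions:

```python
# Sketch of the image feature extraction module as a stack of convolutions.
# All hyperparameters (and the ReLU nonlinearities) are assumptions; the patent
# only lists kernels w_1..w_n and bias values b_1..b_n.
import torch.nn as nn

class ImageFeatureExtractor(nn.Module):
    def __init__(self, in_channels=3, width=32, out_channels=64, n_layers=4):
        super().__init__()
        layers, ch = [], in_channels
        for _ in range(n_layers - 1):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
            ch = width
        layers.append(nn.Conv2d(ch, out_channels, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):        # x: (B, 3, H, W) image tensor
        return self.net(x)       # OutFeature: (B, out_channels, H, W)
```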
And respectively inputting the first image characteristic vector and the second image characteristic vector into a color characteristic extraction module of the target network model to obtain a first color characteristic of the first filter image and a second color characteristic of the image to be processed.
In this step, the color feature extraction module may also perform a series of stacked mathematical operations, so as to obtain the color features of the image feature vector. The first color characteristic and the second color characteristic are calculated, for example, using a second formula.
The second formula: OutColor = cw_n * (cw_{n-1} * ( ... (cw_1 * OutFeature + cb_1) ... ) + cb_{n-1}) + cb_n, where OutFeature is the input and OutColor is the output: OutColor is the first color feature when OutFeature is the first image feature vector, and the second color feature when OutFeature is the second image feature vector. Here * denotes the convolution operation, cw_1 to cw_n are n convolution kernels, and cb_1 to cb_n are n bias values, where n is a positive integer.
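Under the same assumptions, a possible sketch of the color feature extraction module follows; pooling the color map down to one color vector per image is an additional assumption, since the patent does not fix the color feature's shape:

```python
# Sketch of the color feature extraction module; global average pooling to one
# color vector per image is an assumption not stated in the patent.
import torch.nn as nn

class ColorFeatureExtractor(nn.Module):
    def __init__(self, in_channels=64, width=32, color_dim=3, n_layers=3):
        super().__init__()
        layers, ch = [], in_channels
        for _ in range(n_layers - 1):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
            ch = width
        layers.append(nn.Conv2d(ch, color_dim, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, out_feature):                      # (B, in_channels, H, W)
        return self.net(out_feature).mean(dim=(2, 3))    # OutColor: (B, color_dim)

# Target filter estimate, matching the feature difference described above:
# color_predicted = color_net(feat_filter) - color_net(feat_source)
```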
In the embodiment of the application, a staged processing mode is adopted, the image feature vector is obtained firstly, and then color features related to colors are screened from the image feature vector, so that the whole process is convenient, simple and easy to realize.
Optionally, determining, based on a difference between a color of each pixel in the image to be processed and a color indicated by the target filter, a filter weight value corresponding to each pixel in the image to be processed, including:
and inputting the first filter image and the image to be processed into a preset target network model.
In this step, the preset target network model is a pre-trained network model. Here, an initial model may be trained based on a deep learning network to obtain a target network model. And the first filter image and the image to be processed are model input of the target network model.
Acquiring a first image characteristic vector of a first filter image and a second image characteristic vector of an image to be processed through an image characteristic extraction module of a target network model;
in this step, in order to facilitate image processing, the image may be converted into a mathematical expression mode, and the mathematical expression mode is used to represent the first filter image and the image to be processed. The first image feature vector is a mathematical expression mode of the first filter image, and can represent features of each dimension of the first filter image. The second image feature vector is a mathematical expression mode of the image to be processed and can represent features of all dimensions of the image to be processed. Wherein the various dimensions of the image may include a brightness dimension, a color dimension, and the like.
Specifically, the image feature extraction module may be implemented as a series of stacked mathematical operations that produce the image feature vector of an image. For example, the first image feature vector and the second image feature vector are calculated using the first formula in the above embodiment, which is not repeated here.
And acquiring a vector difference value of the first image feature vector and the second image feature vector through the target network model.
In this step, in the case that the first image feature vector and the second image feature vector have been obtained, a vector difference value may be obtained by subtracting the two image feature vectors.
And inputting the absolute value of the vector difference value into a weight branch module of the target network model, and acquiring filter weight values corresponding to pixels in the image to be processed respectively, wherein the closer the color of the target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel comprises any pixel in the image to be processed.
In this step, the weight branch module may be implemented by a series of stacked mathematical operations that produce the filter weight values. For example, a third formula may be used to calculate the filter weight values for different portions of the image to be processed. The third formula may be: WeightImg = ww_n * (ww_{n-1} * ( ... (ww_1 * |OutFeature1 - OutFeature2| + wb_1) ... ) + wb_{n-1}) + wb_n, where |OutFeature1 - OutFeature2| is the absolute value of the vector difference, WeightImg is the output representing the filter weight values corresponding to the different pixels of the image to be processed, * denotes the convolution operation, ww_1 to ww_n are n convolution kernels, and wb_1 to wb_n are n bias values, where n is a positive integer. It is understood that WeightImg can also be regarded as a global weight map of the image to be processed, containing the filter weight value corresponding to each pixel of the image to be processed.
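A sketch of such a weight branch, under the same assumptions as the earlier module sketches; the sigmoid bounding the weights to (0, 1) is an assumption:

```python
# Sketch of the weight branch module: convolutions over the absolute feature
# difference, producing a one-channel global weight map. The sigmoid is an
# assumed choice for keeping weights in (0, 1).
import torch
import torch.nn as nn

class WeightBranch(nn.Module):
    def __init__(self, in_channels=64, width=32, n_layers=3):
        super().__init__()
        layers, ch = [], in_channels
        for _ in range(n_layers - 1):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
            ch = width
        layers.append(nn.Conv2d(ch, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, feat_filter, feat_source):
        diff = (feat_filter - feat_source).abs()   # |OutFeature1 - OutFeature2|
        return torch.sigmoid(self.net(diff))       # WeightImg: (B, 1, H, W)
```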
In the embodiment of the application, the closer the color of the target pixel in the image to be processed is to the color indicated by the target filter, the larger the filter weight value of the target pixel is, and thus the more obvious the color effect is when the target filter acts on the target pixel.
Optionally, before acquiring the first filter image and the image to be processed, the image processing method further includes:
acquiring an initial network model and sample data; wherein the initial network model comprises: an image feature extraction module, a color feature extraction module, a weight branch module, and a content extraction module, and the sample data comprises: an original image, a filter result graph, and a filter color graph, wherein the filter result graph is the image obtained after adding the filter color graph to the original image.
In this step, the initial network model may be regarded as the untrained target network model. The different modules in the initial network model perform different functions: the image feature extraction module extracts image feature vectors from an image, the color feature extraction module extracts color features from the image feature vectors, the weight branch module produces the weight values of the pixels in the original image, and the content extraction module extracts the image content from the image feature vectors through decoupled learning.
Respectively inputting the filter result image and the original image into an image feature extraction module to obtain a first training image feature vector and a second training image feature vector;
in this step, each image may be in an RGB color mode, wherein the RGB color mode is a color standard in the industry, and various colors are obtained by changing three color channels of red (R), green (G), and blue (B) and superimposing the three color channels on each other, and RGB is a color representing three channels of red, green, and blue. Of course, the color model can be a Lab color model, wherein Lab is a device-independent color model and a color model based on physiological characteristics. The Lab color model consists of three elements, one element being luminance (L) and a and b being two color channels. a comprises colors from dark green (low brightness value) to gray (medium brightness value) to bright pink (high brightness value); b is from bright blue (low brightness value) to gray (medium brightness value) to yellow (high brightness value). Here, to facilitate the decoupling of color and brightness, in the case that each image is in RGB color mode, it is converted into Lab color model, that is, we often say, the RGB space of the image is converted into Lab space. In extracting the training image feature vector, the extraction may be performed based on the first formula described above.
Inputting the first training image feature vector and the second training image feature vector into a color feature extraction module respectively to obtain a first training color feature and a second training color feature;
in this step, the color features are extracted from the training image feature vector, and may be extracted based on the second formula.
And determining the difference value obtained by subtracting the second training color feature from the first training color feature as the predicted filter color.
In this step, the predicted filter color obtained by this calculation is the model's estimate of the color indicated by the filter that was applied when the original image was filter-processed to obtain the filter result graph.
And inputting the target absolute value into the weight branch module to obtain the training filter weight values corresponding to the pixels in the original image, wherein the target absolute value is the absolute value of the difference obtained by subtracting the second training image feature vector from the first training image feature vector.
In this step, the training filter weight values corresponding to the pixels in the original image may be extracted based on the third formula.
Inputting the first training image feature vector into a content extraction module to obtain a training content graph related to the image content of the filter result graph;
in this step, the image content of the filter result graph, that is, the training content graph, may be extracted based on a fourth formula: ImageContent ═ iwn*(iwn-1*(...(iw1*outFeature+ib1))+ibn-1)+ibnWherein outFeature is input, represents a first training image feature vector of the filter result graph, ImageContent is output, represents a training content graph, and is convolution operation iw1~iwnFor n convolution kernels, ib1~ibnIs n offset values, where n is a positive integer.
Determining model loss based on the predicted filter color, the training filter weight value, the training content graph and the filter color graph, wherein the model loss comprises image content loss, image color loss and cycle consistency loss;
in this step, the content difference is measured for the loss of image content; specifically, the image content loss is | ImgContent — ImgSource |, where ImgContent represents the training content graph and ImgSource represents the original graph. Measuring the difference of the filter colors by making a difference between a value (predicted filter color) output by the color branch and a mark color (filter color image) obtained in advance, specifically, color loss is | colorgroup dtruth-ColorPredicted ], wherein colorgroup dtruth represents the filter color image; ColorPredicted represents the predicted filter color; the training filter weight values corresponding to different pixels output by the weight branch module are utilized to combine the color branch output value (the predicted filter color) and the original image. Then, the original image value is added with the color to be multiplied by the weight value of the training filter to obtain a result image, and the obtained result image and the input filter result image are used for solving the L1 loss to measure the reconstruction accuracy; specifically, the cycle consistency loss is | ImgTarget-ImgSource ColorPredicted weight img |, wherein ImgTarget represents a filter result graph; ImgSource represents the original image; ColorPredicted represents the predicted filter color; WeightImg represents the training filter weight value.
And updating model parameters, continuing training based on new sample data until model loss is converged, and determining the initial network model after training as the target network model.
In this step, the content loss, color loss, and cycle consistency loss are computed, the partial derivatives of the loss are calculated with respect to the convolution kernels in the above formulas, and the convolution kernels are then updated; that is, each new convolution kernel equals the old convolution kernel adjusted by the partial derivative calculated for that kernel in the latest training pass, as in standard gradient descent. Training proceeds in this way, ends after the model loss converges, and the model parameters, namely the convolution kernels in the above formulas, are saved. Here, model training is performed with different sample data, and the model parameters are updated in each training pass.
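A sketch of that update loop in PyTorch terms; model, dataloader, and forward_losses are hypothetical stand-ins, and the optimizer choice and learning rate are assumptions:

```python
# Hypothetical training loop: forward pass, loss, gradient step, repeat until
# the loss converges. `model`, `dataloader`, and `forward_losses` are stand-ins
# for the trained network, the sample-data iterator, and the loss computation.
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for img_source, img_target, color_gt in dataloader:      # new sample data each step
    loss = forward_losses(model, img_source, img_target, color_gt)
    optimizer.zero_grad()
    loss.backward()    # partial derivatives w.r.t. every convolution kernel
    optimizer.step()   # update the kernels
```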
In the embodiment of the application, model training is performed based on the original image, the filter result graph and the filter color graph in the sample data, model parameters are updated based on the result of each training, and the training is stopped after the model loss is converged, so that a trained target network model is obtained.
Optionally, inputting the first filter image and the image to be processed into a preset target network model, including:
and under the condition that the first filter image and the image to be processed are in an RGB color mode, respectively converting the first filter image and the image to be processed into an Lab color mode.
In the embodiment of the application, the color space of the image is converted, so that the subsequent extraction of color features is facilitated.
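A minimal sketch of this conversion with OpenCV, assuming 8-bit RGB inputs; the input variable names are illustrative:

```python
# Convert the first filter image and the image to be processed from RGB to Lab.
import cv2
import numpy as np

def to_lab(img_rgb: np.ndarray) -> np.ndarray:
    """img_rgb: HxWx3 uint8 RGB image; returns the Lab representation."""
    return cv2.cvtColor(img_rgb, cv2.COLOR_RGB2LAB)

filter_lab = to_lab(filter_img_rgb)   # first filter image (illustrative variable)
source_lab = to_lab(source_img_rgb)   # image to be processed (illustrative variable)
```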
Fig. 2 is a schematic diagram illustrating an actual application of the image processing method according to the embodiment of the present application, where the method includes:
step 201: and acquiring a user graph and a target filter graph of a filter which the user wants to migrate, wherein the user graph is the image to be processed in the application embodiment, and the target filter graph is the first filter graph in the application embodiment.
Step 202: and inputting the acquired picture into a deep learning network model.
Step 203: and obtaining a result graph after filter migration is applied to the user graph, namely the second filter image in the above embodiments.
Fig. 3 is a schematic diagram illustrating how the deep learning network model processes an image. The target filter image and the user image are respectively input into the image feature extraction module to obtain their respective image feature vectors, which are then input into the corresponding color feature extraction modules to obtain their respective color features. The color features obtained by the original-image branch are subtracted from the color features obtained by the filter-image branch to obtain the predicted color. Meanwhile, the absolute value of the difference between the two image feature vectors is input into the weight branch module of the original-image branch to obtain a global weight map. The predicted color, the global weight map, and the user image then yield the result graph after the user graph undergoes filter migration: the result graph is ImgSource + ColorPredicted * WeightImg, where ImgSource represents the user graph, ColorPredicted represents the predicted color, and WeightImg represents the global weight map. The processes of obtaining the image feature vectors, the color features, and the global weight map may refer to the first, second, and third formulas in the embodiments above, and are not repeated here. It is worth noting that in fig. 3, the convolution kernels are shared between the two image feature extraction modules, and likewise between the two color feature extraction modules.
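An end-to-end inference sketch mirroring Fig. 3, built from the assumed module sketches above (not the patent's exact architecture); inputs in [0, 1] are assumed:

```python
# Filter migration at inference time, using the assumed module sketches above.
import torch

feature_net = ImageFeatureExtractor()   # kernels shared between both branches
color_net = ColorFeatureExtractor()     # kernels shared between both branches
weight_net = WeightBranch()

def transfer_filter(filter_img: torch.Tensor, user_img: torch.Tensor) -> torch.Tensor:
    feat_f, feat_u = feature_net(filter_img), feature_net(user_img)
    color_predicted = color_net(feat_f) - color_net(feat_u)   # predicted filter color
    weight_img = weight_net(feat_f, feat_u)                   # global weight map
    result = user_img + color_predicted.view(-1, 3, 1, 1) * weight_img
    return result.clamp(0.0, 1.0)
```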
It can be understood that the process of training the deep learning network model (the target network model) is similar to fig. 3. As shown in fig. 4, the sample data used in the training process includes an original image, a filter result graph, and a filter color graph, where the filter result graph is the image obtained after adding the filter color graph to the original image. The original image and the filter result graph are respectively input into the model to obtain the predicted color and the global weight map, in a process similar to respectively inputting the user image and the target filter image in fig. 3, which is not repeated here. It is worth noting that in the training process, the training content graph corresponding to the filter result graph can be obtained through the image feature extraction module and the content extraction module of the model. In fig. 4, the convolution kernels are shared between the two image feature extraction modules, and likewise between the two color feature extraction modules. After these items of data are obtained, the model loss is calculated from them, and the model parameters are then updated. Here, the model loss includes the content loss, the color loss, and the cycle consistency loss. The content loss measures the difference in image content; specifically, it is |ImgContent - ImgSource|, where ImgContent represents the training content graph and ImgSource represents the original image. The color loss measures the difference in filter color; it is |ColorGroundTruth - ColorPredicted|, where ColorGroundTruth represents the actual color, i.e., the filter color graph, and ColorPredicted represents the predicted color. The cycle consistency loss combines the predicted color and the original image using the training filter weight values output by the weight branch module for the different pixels: the predicted color multiplied by the training filter weight values is added to the original image to obtain a result image, and the L1 loss between this result image and the input filter result graph measures the reconstruction accuracy; specifically, it is |ImgTarget - (ImgSource + ColorPredicted * WeightImg)|, where ImgTarget represents the filter result graph, ImgSource represents the original image, ColorPredicted represents the predicted color, and WeightImg represents the training filter weight values. Through continuous training, the model parameters are continuously updated until the model loss converges, at which point training stops.
According to the method and the device, a more accurate filter color can be estimated, preventing the filter color from being interfered with by the picture content. In addition, a global weight map is additionally output, so filters with different weights are applied at different positions of the image, which makes the result graph more natural and avoids the mask effect.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
As shown in fig. 5, an embodiment of the present application further provides an image processing apparatus, including:
a first obtaining module 51, configured to obtain a first filter image and an image to be processed, where the first filter image includes an image obtained by performing filter processing using a target filter;
a first determining module 52, configured to determine a target filter based on the first filter image and the image to be processed;
a second determining module 53, configured to determine, based on a difference between a color of each pixel in the image to be processed and a color indicated by the target filter, a filter weight value corresponding to each pixel in the image to be processed;
and a filter module 54, configured to apply the target filter to each pixel in the image to be processed according to the filter weight value corresponding to each pixel, so as to obtain a second filter image.
Optionally, the first determining module 52 includes:
the first input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the first model unit is used for acquiring a first color characteristic of the first filter image and a second color characteristic of the image to be processed through the target network model;
and the second model unit is used for acquiring a characteristic difference value of the first color characteristic and the second color characteristic through the target network model and determining the characteristic difference value as the target filter.
Optionally, the first model unit comprises:
the first model subunit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
and the second model subunit is used for respectively inputting the first image feature vector and the second image feature vector into the color feature extraction module of the target network model, and acquiring the first color feature of the first filter image and the second color feature of the image to be processed.
Optionally, the second determining module 53 includes:
the second input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the third model unit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
the fourth model unit is used for acquiring a vector difference value of the first image feature vector and the second image feature vector through the target network model;
and the fifth model unit is used for inputting the absolute value of the vector difference value into the weight branch module of the target network model and acquiring the filter weight value corresponding to each pixel in the image to be processed, wherein the closer the color of the target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel comprises any pixel in the image to be processed.
Optionally, the image processing apparatus further comprises:
the second acquisition module is used for acquiring the initial network model and sample data; wherein the initial network model comprises: an image feature extraction module, a color feature extraction module, a weight branch module and a content extraction module, and the sample data comprises: an original image, a filter result graph and a filter color graph, the filter result graph being the image obtained after adding the filter color graph to the original image;
the first training module is used for inputting the filter result image and the original image into the image feature extraction module respectively to obtain a first training image feature vector and a second training image feature vector;
the second training module is used for inputting the first training image feature vector and the second training image feature vector into the color feature extraction module respectively to obtain a first training color feature and a second training color feature;
the third training module is used for determining the difference value obtained by subtracting the second training color feature from the first training color feature as the predicted filter color;
the fourth training module is used for inputting a target absolute value into the weight branching module to obtain a training filter weight value corresponding to each pixel in the original image, wherein the target absolute value is an absolute value of a difference value obtained by subtracting the second training image feature vector from the first training image feature vector;
the fifth training module is used for inputting the first training image characteristic vector into the content extraction module to obtain a training content graph related to the image content of the filter result graph;
the sixth training module is used for determining model loss based on the predicted filter color, the training filter weight value, the training content graph and the filter color graph, wherein the model loss comprises image content loss, image color loss and cycle consistency loss;
and the seventh training module is used for updating the model parameters, continuing training based on new sample data until the model loss is converged, and determining the initial network model after training as the target network model.
Optionally, the first input unit is specifically configured to convert the first filter image and the image to be processed into Lab color models respectively when the first filter image and the image to be processed are in an RGB color mode.
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the unprocessed image to be processed. Taking the image to be processed into account avoids the interference from the content colors of the first filter image itself that would arise if the target filter were extracted from the first filter image alone. Filter weight values corresponding to the respective pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to each pixel of the image to be processed in combination with the filter weight value corresponding to that pixel, obtaining the filter-processed second filter image. Because the filter weight values with which the target filter is applied to pixels of different colors are different, the color effect produced by the target filter across the whole second filter image is varied and natural, and no mask-like effect appears.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in an embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described in detail here.
A processor 710, configured to obtain a first filter image and an image to be processed, where the first filter image includes an image obtained by performing filter processing using a target filter;
a processor 710 for determining a target filter based on the first filter image and the image to be processed;
the processor 710 is further configured to determine, based on a difference between a color of each pixel in the image to be processed and a color indicated by the target filter, a filter weight value corresponding to each pixel in the image to be processed;
the processor 710 is further configured to apply the target filter to each pixel in the image to be processed according to the filter weight value corresponding to each pixel, so as to obtain a second filter image.
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the unprocessed image to be processed. Taking the image to be processed into account avoids the interference from the content colors of the first filter image itself that would arise if the target filter were extracted from the first filter image alone. Filter weight values corresponding to the respective pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to each pixel of the image to be processed in combination with the filter weight value corresponding to that pixel, obtaining the filter-processed second filter image. Because the filter weight values with which the target filter is applied to pixels of different colors are different, the color effect produced by the target filter across the whole second filter image is varied and natural, and no mask-like effect appears.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and can certainly also be implemented by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image processing method, characterized in that the image processing method comprises:
acquiring a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by using a target filter;
determining the target filter based on the first filter image and the image to be processed;
determining filter weight values respectively corresponding to the pixels in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
and applying the target filter to each pixel in the image to be processed according to the filter weight value corresponding to each pixel to obtain a second filter image.
2. The image processing method according to claim 1, wherein the determining the target filter based on the first filter image and the image to be processed comprises:
inputting the first filter image and the image to be processed into a preset target network model;
acquiring a first color characteristic of the first filter image and a second color characteristic of the image to be processed through the target network model;
and acquiring a characteristic difference value of the first color characteristic and the second color characteristic through the target network model, and determining the characteristic difference value as the target filter.
3. The image processing method according to claim 2, wherein the obtaining, by the target network model, the first color feature of the first filter image and the second color feature of the image to be processed comprises:
acquiring a first image characteristic vector of the first filter image and a second image characteristic vector of the image to be processed through an image characteristic extraction module of the target network model;
and respectively inputting the first image feature vector and the second image feature vector into a color feature extraction module of the target network model, and acquiring a first color feature of the first filter image and a second color feature of the image to be processed.
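As a rough illustration of claims 2 and 3, the sketch below assumes the target network model exposes its image feature extraction module and color feature extraction module as image_features and color_features (hypothetical attribute names); the target filter is then the difference of the two color features:

```python
import torch

@torch.no_grad()
def extract_target_filter(model, first_filter_image, image_to_process):
    # Image feature extraction module (claim 3).
    f1 = model.image_features(first_filter_image)  # first image feature vector
    f2 = model.image_features(image_to_process)    # second image feature vector
    # Color feature extraction module (claim 3).
    c1 = model.color_features(f1)                  # first color feature
    c2 = model.color_features(f2)                  # second color feature
    # The feature difference is determined as the target filter (claim 2).
    return c1 - c2
```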
4. The image processing method according to claim 1, wherein the determining filter weight values respectively corresponding to pixels in the image to be processed based on a difference between a color of each pixel in the image to be processed and a color indicated by the target filter comprises:
inputting the first filter image and the image to be processed into a preset target network model;
acquiring a first image characteristic vector of the first filter image and a second image characteristic vector of the image to be processed through an image characteristic extraction module of the target network model;
acquiring a vector difference value of the first image feature vector and the second image feature vector through the target network model;
and inputting the absolute value of the vector difference value into a weight branch module of the target network model, and obtaining filter weight values corresponding to pixels in the image to be processed, wherein the closer the color of a target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel comprises any pixel in the image to be processed.
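A minimal sketch of the weight branch module of claim 4 follows; the convolutional layer sizes and the sigmoid output are assumptions, since the claim fixes only the branch's input (the absolute vector difference) and its output (a weight per pixel that grows as a pixel's color approaches the filter color):

```python
import torch
import torch.nn as nn

class WeightBranch(nn.Module):
    """Maps the absolute image-feature difference to one filter weight
    per pixel; the internal architecture here is illustrative only."""

    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),  # one weight in (0, 1) for every pixel
        )

    def forward(self, abs_feature_diff: torch.Tensor) -> torch.Tensor:
        # abs_feature_diff: |f1 - f2|, shape (N, C, H, W).
        return self.net(abs_feature_diff)

# Usage with illustrative shapes: for feature maps f1 and f2 of shape
# (N, 64, H, W), the per-pixel filter weights would be
#   weights = WeightBranch(64)(torch.abs(f1 - f2))   # (N, 1, H, W)
```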
5. The image processing method according to claim 4, wherein before said acquiring the first filter image and the image to be processed, the image processing method further comprises:
acquiring an initial network model and sample data; wherein the initial network model comprises: the image feature extraction module, the color feature extraction module, the weight branch module, and a content extraction module, and the sample data comprises: an original graph, a filter result graph, and a filter color map, wherein the filter result graph is the image obtained after the filter color map is applied to the original graph;
inputting the filter result graph and the original graph into the image feature extraction module respectively to obtain a first training image feature vector and a second training image feature vector;
inputting the first training image feature vector and the second training image feature vector into the color feature extraction module respectively to obtain a first training color feature and a second training color feature;
subtracting the second training color feature from the first training color feature to obtain a difference value, and determining the difference value as the predicted filter color;
inputting a target absolute value into the weight branch module to obtain a training filter weight value corresponding to each pixel in the original graph, wherein the target absolute value is the absolute value of the difference obtained by subtracting the second training image feature vector from the first training image feature vector;
inputting the first training image feature vector into the content extraction module to obtain a training content map related to the image content of the filter result graph;
determining a model loss based on the predicted filter color, the training filter weight value, the training content map, and the filter color map, wherein the model loss comprises an image content loss, an image color loss, and a cyclic consistency loss;
updating model parameters, continuing training based on new sample data until the model loss converges, and determining the initial network model after training as the target network model.
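One training step of claim 5 might look like the sketch below; the three loss functions are passed in as callables because the claim names the loss terms (image content loss, image color loss, cyclic consistency loss) without fixing their form, and the module names on model (image_features, color_features, weight_branch, content_head) are the same hypothetical ones used above:

```python
import torch

def train_step(model, optimizer, original, filter_result, filter_color_map,
               content_loss, color_loss, cycle_loss):
    # Image feature vectors of the filter result graph and the original graph.
    f1 = model.image_features(filter_result)  # first training image feature vector
    f2 = model.image_features(original)       # second training image feature vector

    # Predicted filter color: first minus second training color feature.
    predicted_filter_color = model.color_features(f1) - model.color_features(f2)

    # Training filter weights from the absolute feature difference.
    training_weights = model.weight_branch(torch.abs(f1 - f2))

    # Training content map tied to the filter result graph's image content.
    training_content_map = model.content_head(f1)

    # Model loss combining the three named terms; equal weighting is assumed.
    loss = (content_loss(training_content_map, original)
            + color_loss(predicted_filter_color, training_weights, filter_color_map)
            + cycle_loss(model, original, filter_result))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training would repeat this step over new sample data until the model loss converges, after which the trained initial network model serves as the target network model.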
6. An image processing apparatus characterized by comprising:
a first acquisition module, configured to acquire a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by using a target filter;
a first determining module, configured to determine the target filter based on the first filter image and the image to be processed;
the second determining module is used for determining filter weight values corresponding to the pixels in the image to be processed respectively based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
and the filter module is used for applying the target filter to each pixel in the image to be processed by using the filter weight value corresponding to each pixel respectively to obtain a second filter image.
7. The image processing apparatus according to claim 6, wherein the first determination module includes:
the first input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the first model unit is used for acquiring a first color feature of the first filter image and a second color feature of the image to be processed through the target network model;
and the second model unit is used for acquiring a characteristic difference value of the first color characteristic and the second color characteristic through the target network model and determining the characteristic difference value as the target filter.
8. The image processing apparatus according to claim 7, wherein the first model unit includes:
the first model subunit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
and the second model subunit is used for respectively inputting the first image feature vector and the second image feature vector into the color feature extraction module of the target network model, and acquiring the first color feature of the first filter image and the second color feature of the image to be processed.
9. The image processing apparatus according to claim 6, wherein the second determination module includes:
the second input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the third model unit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
the fourth model unit is used for acquiring a vector difference value of the first image feature vector and the second image feature vector through the target network model;
and a fifth model unit, configured to input the absolute value of the vector difference to a weight branching module of the target network model, and obtain filter weight values corresponding to pixels in the image to be processed, where the closer the color of a target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel includes any pixel in the image to be processed.
10. The image processing apparatus according to claim 9, characterized by further comprising:
the second acquisition module is used for acquiring the initial network model and sample data; wherein the initial network model comprises: the image feature extraction module, the color feature extraction module, the weight branch module and the content extraction module, wherein the sample data comprises: the image processing method comprises the following steps of (1) adding an original image, a filter result image and a filter color image, wherein the filter result image is an image obtained after the filter color image is added to the original image;
the first training module is used for inputting the filter result graph and the original graph into the image feature extraction module respectively to obtain a first training image feature vector and a second training image feature vector;
the second training module is used for inputting the first training image feature vector and the second training image feature vector into the color feature extraction module respectively to obtain a first training color feature and a second training color feature;
a third training module, configured to determine the difference value obtained by subtracting the second training color feature from the first training color feature as the predicted filter color;
a fourth training module, configured to input a target absolute value into the weight branch module to obtain a training filter weight value corresponding to each pixel in the original graph, wherein the target absolute value is the absolute value of the difference obtained by subtracting the second training image feature vector from the first training image feature vector;
a fifth training module, configured to input the first training image feature vector to the content extraction module, so as to obtain a training content map related to image content of the filter result map;
a sixth training module, configured to determine a model loss based on the predicted filter color, the training filter weight value, the training content map, and the filter color map, where the model loss includes an image content loss, an image color loss, and a cyclic consistency loss;
and the seventh training module is used for updating the model parameters, continuing training based on new sample data until the model loss converges, and determining the initial network model after training as the target network model.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method according to any one of claims 1 to 5.
CN202110912037.1A 2021-08-09 2021-08-09 Image processing method and device and electronic equipment Pending CN113658066A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110912037.1A CN113658066A (en) 2021-08-09 2021-08-09 Image processing method and device and electronic equipment
PCT/CN2022/110522 WO2023016365A1 (en) 2021-08-09 2022-08-05 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110912037.1A CN113658066A (en) 2021-08-09 2021-08-09 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113658066A true CN113658066A (en) 2021-11-16

Family

ID=78491066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110912037.1A Pending CN113658066A (en) 2021-08-09 2021-08-09 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113658066A (en)
WO (1) WO2023016365A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023016365A1 (en) * 2021-08-09 2023-02-16 维沃移动通信有限公司 Image processing method and apparatus, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105376640A (en) * 2014-08-06 2016-03-02 腾讯科技(北京)有限公司 Filter processing method, filter processing device and electronic equipment
CN108961170A (en) * 2017-05-24 2018-12-07 阿里巴巴集团控股有限公司 Image processing method, device and system
CN109741283A (en) * 2019-01-23 2019-05-10 芜湖明凯医疗器械科技有限公司 A kind of method and apparatus for realizing smart filter
CN112529808A (en) * 2020-12-15 2021-03-19 北京映客芝士网络科技有限公司 Image color adjusting method, device, equipment and medium
CN113111791A (en) * 2021-04-16 2021-07-13 深圳市格灵人工智能与机器人研究院有限公司 Image filter conversion network training method and computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014803A (en) * 2021-02-04 2021-06-22 维沃移动通信有限公司 Filter adding method and device and electronic equipment
CN113658066A (en) * 2021-08-09 2021-11-16 维沃移动通信有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
WO2023016365A1 (en) 2023-02-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination