CN107948529B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN107948529B
CN107948529B (application CN201711455985.7A)
Authority
CN
China
Prior art keywords
image
processing
processed
unit
target
Prior art date
Legal status
Active
Application number
CN201711455985.7A
Other languages
Chinese (zh)
Other versions
CN107948529A (en)
Inventor
涂治国
张轩哲
李涛
Current Assignee
Qilin Hesheng Network Technology Inc
Original Assignee
Qilin Hesheng Network Technology Inc
Priority date
Filing date
Publication date
Application filed by Qilin Hesheng Network Technology Inc filed Critical Qilin Hesheng Network Technology Inc
Priority to CN201711455985.7A
Publication of CN107948529A (application publication)
Application granted
Publication of CN107948529B (granted publication)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

The embodiments of the present application provide an image processing method and device. The method includes: acquiring an image to be processed, inputting it into an image processing model, and calling a coding unit in the image processing model to encode the image to be processed, obtaining a feature map corresponding to it; determining a target style corresponding to the image to be processed; selecting a target processing unit corresponding to the target style from a plurality of processing units contained in the image processing model, and calling the target processing unit to stylize the feature map according to the target style, obtaining a processed feature map, where each processing unit corresponds to one image style; and calling a decoding unit in the image processing model to decode the processed feature map, obtaining a stylized image corresponding to the image to be processed. Through these embodiments, the display effect of an image can be adjusted so that the image meets the user's display requirements.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
At present, image shooting has become a common leisure activity. Users can shoot images with professional cameras, or with mobile terminals such as mobile phones and tablet computers. Because shooting with a mobile terminal is convenient, fast, and requires no professional equipment, more and more users choose it.
Due to the limitations of lighting, location, and technique during shooting, images shot by users often suffer from display defects and fail to achieve the effect the user intends, such as insufficient brightness or overexposure. Therefore, in order to improve image quality, an image processing method is needed to adjust the display effect of an image so that the image meets the user's display requirements.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and an image processing device, which can adjust the display effect of an image to enable the image to meet the display requirement of a user.
To achieve the above purpose, the embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method applied to a mobile terminal, including:
acquiring an image to be processed, inputting the image to be processed into an image processing model, calling a coding unit in the image processing model, and coding the image to be processed to obtain a feature map corresponding to the image to be processed;
determining a target style corresponding to the image to be processed according to the triggering operation of a user; the target style is an image style triggered by a user;
selecting a target processing unit corresponding to the target style in real time from a plurality of processing units included in the image processing model, and calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map; wherein each processing unit corresponds to an image style;
and calling a decoding unit in the image processing model, and decoding the processed feature map to obtain a stylized image corresponding to the image to be processed.
In a second aspect, an embodiment of the present application provides an image processing apparatus, applied to a mobile terminal, including:
the encoding unit calling module is used for acquiring an image to be processed, inputting the image to be processed into an image processing model, calling an encoding unit in the image processing model, and encoding the image to be processed to obtain a feature map corresponding to the image to be processed;
the image style determining module is used for determining a target style corresponding to the image to be processed according to the triggering operation of a user; the target style is an image style triggered by a user;
the processing unit calling module is used for selecting a target processing unit corresponding to the target style in real time from a plurality of processing units contained in the image processing model, and calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map; wherein each processing unit corresponds to an image style;
and the decoding unit calling module is used for calling a decoding unit in the image processing model to decode the processed feature map to obtain a stylized image corresponding to the image to be processed.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method as described in the first aspect above.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the image processing method according to the first aspect.
According to the embodiments of the present application, the image to be processed is stylized according to the target style specified by the user to obtain a corresponding stylized image, so the display effect of the image can be adjusted and the image can meet the user's display requirements. Moreover, the image processing model invoked in the embodiments of the present application includes a coding unit, a plurality of processing units, and a decoding unit, where each processing unit corresponds to one image style; the processing units for the plurality of image styles therefore share the coding unit. After the image to be processed has been stylized once, if the user switches the image style, the image does not need to be encoded again: stylization can be performed anew directly on the feature map obtained from the previous encoding. A repeated encoding pass is thus avoided in the scenario where the user switches image styles, which improves the image processing speed. In addition, compared with a structure in which each image style is configured with its own coding unit and decoding unit, the model in which the plurality of processing units share one coding unit and one decoding unit eliminates the duplicated coding and decoding units, which reduces the volume and data amount of the image processing model and further improves the image processing speed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image processing model according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a target processing unit according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a decoding unit according to an embodiment of the present application;
fig. 5 is a schematic diagram of the module composition of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
In order to adjust the display effect of an image and enable the image to meet the display requirement of a user, the embodiment of the application provides an image processing method and an image processing device.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S102, acquiring an image to be processed, inputting the image to be processed into an image processing model, calling a coding unit in the image processing model, and coding the image to be processed to obtain a feature map corresponding to the image to be processed;
step S104, determining a target style corresponding to the image to be processed according to the triggering operation of the user; the target style is an image style triggered by a user;
step S106, selecting a target processing unit corresponding to a target style in real time from a plurality of processing units contained in the image processing model, and calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map; wherein, each processing unit corresponds to an image style;
and step S108, calling a decoding unit in the image processing model, and decoding the processed feature map to obtain a stylized image corresponding to the image to be processed.
According to the embodiments of the present application, the image to be processed is stylized according to the target style specified by the user to obtain a corresponding stylized image, so the display effect of the image can be adjusted and the image can meet the user's display requirements. Moreover, the image processing model invoked in the embodiments of the present application includes a coding unit, a plurality of processing units, and a decoding unit, where each processing unit corresponds to one image style; the processing units for the plurality of image styles therefore share the coding unit. After the image to be processed has been stylized once, if the user switches the image style, the image does not need to be encoded again: stylization can be performed anew directly on the feature map obtained from the previous encoding. A repeated encoding pass is thus avoided in the scenario where the user switches image styles, which improves the image processing speed. In addition, compared with a structure in which each image style is configured with its own coding unit and decoding unit, the model in which the plurality of processing units share one coding unit and one decoding unit eliminates the duplicated coding and decoding units, which reduces the volume and data amount of the image processing model and further improves the image processing speed.
In the embodiments of the present application, stylization refers to adjusting the image to be processed into a target style. The target style may come from an artistic painting, and the stylized image obtained combines the content of the image to be processed with the target style. For example, if the image to be processed is an animal image and the target style corresponds to Vincent van Gogh's classic painting "The Starry Night", the animal image can be processed into that style, and the stylized image obtained is an animal image in the "Starry Night" style.
Fig. 2 is a schematic structural diagram of an image processing model according to an embodiment of the present application. As shown in fig. 2, the image processing model includes an encoding unit, a plurality of processing units, and a decoding unit; the output of the encoding unit serves as the input of a processing unit, and the output of the processing unit serves as the input of the decoding unit. The processing units in fig. 2 implement the stylization described above, and each processing unit corresponds to one image style. In fig. 2, the plurality of processing units as a whole may be referred to as a residual module.
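As a concrete illustration of this structure, the following is a minimal PyTorch sketch of a model with one shared encoder, one processing unit per style, and one shared decoder. The class names, channel counts, strides, kernel sizes, and layer depths are assumptions chosen for the example, not values taken from the patent:

```python
import torch
import torch.nn as nn

def style_unit(c: int, depth: int = 3) -> nn.Sequential:
    # Alternating first-convolution + real-time (instance) normalization layers.
    layers = []
    for _ in range(depth):
        layers += [nn.Conv2d(c, c, 3, padding=1),
                   nn.InstanceNorm2d(c, affine=True)]
    return nn.Sequential(*layers)

class MultiStyleModel(nn.Module):
    """One shared encoding unit, one processing unit per image style,
    and one shared decoding unit."""
    def __init__(self, num_styles: int, c: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, c, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.style_units = nn.ModuleList(style_unit(c) for _ in range(num_styles))
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),  # nearest-neighbor amplification
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, 3, 9, padding=4),
        )

    def forward(self, image: torch.Tensor, style_id: int) -> torch.Tensor:
        feat = self.encoder(image)                    # encode once
        processed = self.style_units[style_id](feat)  # stylize per selected style
        return self.decoder(processed)                # decode to the stylized image

# Switching styles reuses the cached feature map, so no re-encoding is needed:
model = MultiStyleModel(num_styles=4)
image = torch.randn(1, 3, 256, 256)
feat = model.encoder(image)
out_a = model.decoder(model.style_units[0](feat))
out_b = model.decoder(model.style_units[1](feat))  # same feat, different style
```

Because the encoder output is cached in `feat`, switching styles reruns only the selected processing unit and the shared decoder, which is the saving described above.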
In step S102, the image to be processed may be an image captured by the user with a mobile terminal. After obtaining the image to be processed, the mobile terminal inputs it into the image processing model of fig. 2 and calls the encoding unit in fig. 2 to encode it, obtaining the feature map corresponding to the image to be processed. Specifically, the image to be processed is input into the encoding unit so that the encoding unit encodes it, and the output of the encoding unit is taken as the feature map corresponding to the image to be processed. During encoding, the encoding unit may amplify the size of the image to be processed by a certain factor.
In step S104, the mobile terminal determines the target style corresponding to the image to be processed according to the user's trigger operation, where the target style is an image style triggered by the user. The trigger operation may be a selection operation.
Specifically, after the mobile terminal acquires the image to be processed, a plurality of image styles may be displayed on its screen for the user to choose from. The mobile terminal determines the image style first selected by the user as the target style of the image to be processed; alternatively, after completing an image processing pass for one selected style, the mobile terminal determines the image style subsequently selected by the user as the new target style. In one embodiment, the mobile terminal displays a plurality of candidate images, each corresponding to one image style; the user selects an image style by tapping a candidate image on the screen, and the mobile terminal takes the selected style as the target style.
In step S106, each processing unit in the image processing model corresponds to one image style, so that after the target style is determined, the mobile terminal needs to select a target processing unit corresponding to the target style from a plurality of processing units included in the image processing model, and then call the target processing unit to perform stylization processing on the feature map according to the target style, so as to obtain a processed feature map. The triggering operation of the user occurs in real time, so that the mobile terminal can select the target processing unit corresponding to the target style in real time.
Each processing unit corresponds to one image style, and each processing unit has processing parameters (including an offset parameter and a scale transformation parameter) of the corresponding image style, so that the processing unit can perform corresponding stylization processing on the feature map according to the corresponding processing parameters.
In step S106, the calling target processing unit performs stylization processing on the feature map according to the target style to obtain a processed feature map, which specifically includes:
(1) inputting the feature map into a target processing unit of the image processing model;
(2) taking the processing result of the target processing unit as a processed characteristic diagram;
the target processing unit is used for performing, according to the target style, the first convolution operation on the feature map a first preset number of times and the real-time normalization processing the same number of times.
Fig. 3 is a schematic structural diagram of a target processing unit according to an embodiment of the present application. As shown in fig. 3, the target processing unit includes a plurality of first convolution operation layers and a plurality of real-time normalization processing layers arranged alternately, where the number of first convolution operation layers equals the number of real-time normalization processing layers, both being the first preset number.
In this embodiment, the mobile terminal inputs the encoded feature map into the target processing unit. After receiving the feature map, the target processing unit passes it into the first of the first convolution operation layers, which performs the first convolution operation and feeds the result into the first real-time normalization processing layer. The real-time normalization layer normalizes its input in real time and feeds its output into the next first convolution operation layer. This is repeated until the first convolution operation and the real-time normalization processing have each been performed the first preset number of times. The mobile terminal then takes the output of the last real-time normalization processing layer as the processed feature map.
In a specific embodiment, the encoding unit produces N feature maps after encoding. In the target processing unit, the first of the first convolution operation layers has N input channels, each corresponding to one feature map, and M output channels, each corresponding to one convolved feature map. The first real-time normalization processing layer, connected to that convolution layer, has M input channels, each corresponding to one convolved feature map, and P output channels, each corresponding to one feature map after real-time normalization, where N, M, and P are integers. In this way, the number of input channels of each first convolution operation layer equals the number of feature maps passed to it, and likewise for each real-time normalization processing layer, which ensures that the feature maps are transmitted and processed through every first convolution operation layer and every real-time normalization processing layer.
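As an illustration of this alternating layout, the following PyTorch sketch interleaves convolution layers with instance normalization layers (a common realization of per-image "real-time" normalization). The channel plan and layer count are arbitrary example values; note that instance normalization preserves the channel count, so P equals M in this sketch:

```python
import torch
import torch.nn as nn

def make_processing_unit(channel_plan: list[int], repeats: int) -> nn.Sequential:
    """Interleave `repeats` first-convolution layers with `repeats`
    real-time (instance) normalization layers, as in Fig. 3."""
    layers: list[nn.Module] = []
    c_in = channel_plan[0]
    for i in range(repeats):
        c_out = channel_plan[i + 1]
        # Each conv layer's input channels equal the number of incoming feature maps.
        layers.append(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1))
        # Instance normalization normalizes each feature map on the fly.
        layers.append(nn.InstanceNorm2d(c_out, affine=True))
        c_in = c_out
    return nn.Sequential(*layers)

# Example: N=32 input maps, M=64 after the first conv, P=64 after normalization.
unit = make_processing_unit([32, 64, 64, 64], repeats=3)
feats = torch.randn(1, 32, 128, 128)
print(unit(feats).shape)  # torch.Size([1, 64, 128, 128])
```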
In this embodiment, within each processing unit the first convolution operation layers and the real-time normalization processing layers have the same number of layers, equal to the first preset number; that is, each processing unit performs the first convolution operation on its input the first preset number of times and the real-time normalization processing the same number of times. The first preset number is determined while training the processing unit, and its size determines the processing precision and processing speed of the processing unit. It can be set in the following manner:
(1) acquiring a training sample graph, calling the coding unit in the image processing model, and coding the training sample graph to obtain a sample feature map corresponding to the training sample graph;
(2) sequentially performing the first convolution operation and the real-time normalization processing on the sample feature map to obtain a first processing result;
(3) calculating a loss difference value between the first processing result and a preset processing result according to a preset loss function; the preset loss function is a linear combination of a content loss function, a style loss function and an overall variance loss function;
(4) repeatedly executing the steps of the first convolution operation, the real-time normalization processing and the loss difference value calculation until the size relation between the loss difference value obtained by the current calculation and the loss difference value obtained by the previous calculation meets the preset size requirement;
(5) determining the number of times the steps of the first convolution operation, the real-time normalization processing, and the loss difference calculation were repeated as the first preset number of times.
In action (1), a training sample graph is acquired; it may be obtained by manual collation. The coding unit in the image processing model is then called to encode the training sample graph, obtaining the sample feature map corresponding to the training sample graph.
In action (2), the first convolution operation and the real-time normalization processing are performed in sequence on the sample feature map to obtain a first processing result. That is, the sample feature map is processed using the first convolution operation and the real-time normalization processing of the processing unit, and the first processing result can be regarded as a processing result obtained by simulating the processing performed by the processing unit.
In action (3), the preset processing result may be the result of processing the sample feature map with the VGG16 model. The preset loss function is:

$$L_{perceptual} = \alpha L_{style} + \beta L_{content} + \gamma L_{tv}$$

where $L_{perceptual}$ is the preset loss function, $L_{style}$ is the style loss function (calculated over the VGG16 network), $L_{content}$ is the content loss function, $L_{tv}$ is the overall variance loss function, and $\alpha$, $\beta$, $\gamma$ are the weights of the respective functions; they are parameters controlling the stylization degree of the output image, and their specific values may be set empirically.
In the above formula,

$$L_{content} = \frac{1}{M} \sum_{k} \sum_{i,j} \left( P^{l}_{k,i,j} - F^{l}_{k,i,j} \right)^{2}$$

where P and F denote the first processing result and the preset processing result respectively, l denotes the number of the current operation, k denotes the feature map index, the subtraction is element-wise matrix subtraction, i and j are matrix coordinates, and M is determined empirically and may be taken as the sum of the Euclidean distances between all feature maps of P and F in the first operation.
In the above formula,

$$L_{style} = \sum_{l} \frac{w_{l}}{N} \sum_{i,j} \left( G(P^{l})_{i,j} - G(F^{l})_{i,j} \right)^{2}, \qquad G(X^{l})_{i,j} = \sum_{k} X^{l}_{k,i} \, X^{l}_{k,j}$$

where P and F denote the first processing result and the preset processing result respectively, l denotes the number of the current operation, $w_{l}$ is the weight parameter of the feature loss in the l-th operation (generally equal for every operation), k denotes the feature map index, the multiplication is element-wise matrix multiplication, the generated Gram matrix G is an uncentered covariance matrix, i and j are matrix coordinates, and N is determined empirically.
In the above formula,

$$L_{tv} = \sum_{i,j} \left( \left( x_{i,j+1} - x_{i,j} \right)^{2} + \left( x_{i+1,j} - x_{i,j} \right)^{2} \right)^{\beta / 2}$$

where $(x_{i,j+1}-x_{i,j})^2$ and $(x_{i+1,j}-x_{i,j})^2$ represent the gradients of the feature image in the horizontal and vertical directions respectively, i and j are matrix coordinates, $x_{i,j}$ is the value at coordinate (i, j), and $\beta$ is a coefficient, usually taken as 1.
The overall variance loss function is added in the preset loss function, so that the spatial smoothness of the stylized picture can be improved, and the spatial smoothness of the image output by the processing unit is improved.
Through the above action (3), the loss difference between the first processing result and the preset processing result can be calculated according to the preset loss function; the loss difference is the sum of the results of the content loss function, the style loss function, and the overall variance loss function, each multiplied by its respective weight.
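The three loss terms and their linear combination can be sketched as follows. This is an illustrative implementation under stated assumptions: the feature tensors P and F are taken as already extracted (e.g., by a VGG16 network), and the normalization constants M and N, the per-operation weight w_l, and the weights alpha, beta, gamma are placeholder values:

```python
import torch

def gram(feats: torch.Tensor) -> torch.Tensor:
    # feats: (B, C, H, W) -> uncentered covariance ("Gram") matrix of shape (B, C, C)
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2)

def content_loss(P: torch.Tensor, F: torch.Tensor, M: float = 1.0) -> torch.Tensor:
    # Element-wise squared difference between feature maps, scaled by 1/M.
    return ((P - F) ** 2).sum() / M

def style_loss(P: torch.Tensor, F: torch.Tensor, w_l: float = 1.0, N: float = 1.0) -> torch.Tensor:
    # Squared difference between Gram matrices, weighted per operation.
    return w_l * ((gram(P) - gram(F)) ** 2).sum() / N

def tv_loss(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # Horizontal and vertical squared gradients encourage spatial smoothness.
    dh = (x[..., :, 1:] - x[..., :, :-1]) ** 2
    dv = (x[..., 1:, :] - x[..., :-1, :]) ** 2
    return (dh[..., :-1, :] + dv[..., :, :-1]).pow(beta / 2).sum()

def perceptual_loss(P, F, x, alpha=1e5, beta=1.0, gamma=1e-6):
    # L_perceptual = alpha * L_style + beta * L_content + gamma * L_tv
    return alpha * style_loss(P, F) + beta * content_loss(P, F) + gamma * tv_loss(x)

# Example usage with random stand-in feature maps and an output image x:
P, F = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
x = torch.rand(1, 3, 128, 128)
print(perceptual_loss(P, F, x).item())
```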
In the above action (4), the steps of the first convolution operation, the real-time normalization processing and the loss difference calculation are repeatedly executed until the size relationship between the loss difference obtained by the current calculation and the loss difference obtained by the previous calculation meets the preset size requirement.
Specifically, after the loss difference has been calculated once, the first convolution operation and the real-time normalization processing are performed in sequence on the first processing result to obtain a second processing result, and the loss difference between the second processing result and the preset processing result is calculated. The first convolution operation and the real-time normalization processing are then applied in sequence to the second processing result, and so on, until the size relationship between the loss difference obtained by the current calculation and that obtained by the previous calculation meets the preset size requirement: for example, the two loss differences are equal, or the difference between them falls within a preset range.
When the size relationship between the loss difference obtained by the current calculation and that obtained by the previous calculation meets the preset size requirement, the loss difference between the processing result and the preset processing result has tended to be stable after the first convolution operation and the real-time normalization processing have been repeated on the sample feature map multiple times, and the training of the processing unit is complete.
In action (5), the number of times the steps of the first convolution operation, the real-time normalization processing, and the loss difference calculation were repeated is determined as the first preset number. For example, if after performing these steps 5 times the size relationship between the loss difference obtained by the current calculation and that obtained by the previous calculation meets the preset size requirement, the first preset number is set to 5. In this way, the loss difference between the processing result produced by the processing unit after processing the feature map multiple times and the preset processing result tends to be stable, so an accurate processing result is obtained.
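The stopping rule described in actions (4) and (5) can be sketched as a loop that repeats one convolution-plus-normalization round and stops once consecutive loss differences stabilize. The tolerance, iteration cap, block definition, and the use of a simple MSE loss in the example are all assumptions; the patent's loss is the perceptual combination above:

```python
import torch
import torch.nn as nn

def find_first_preset_number(block: nn.Module,
                             sample_feats: torch.Tensor,
                             preset_result: torch.Tensor,
                             loss_fn,
                             tol: float = 1e-4,
                             max_steps: int = 50) -> int:
    """Repeat one convolution + real-time normalization round until the loss
    difference between consecutive rounds stabilizes; return the round count."""
    result = sample_feats
    prev_loss = None
    for step in range(1, max_steps + 1):
        result = block(result)                        # one conv + normalization round
        loss = loss_fn(result, preset_result).item()  # loss difference vs preset result
        if prev_loss is not None and abs(loss - prev_loss) <= tol:
            return step                               # the loss has stabilized
        prev_loss = loss
    return max_steps

# Example with an assumed single conv + instance-norm round and an MSE loss:
block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.InstanceNorm2d(64))
feats = torch.randn(1, 64, 32, 32)
target = torch.randn(1, 64, 32, 32)
print(find_first_preset_number(block, feats, target, nn.MSELoss()))
```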
In step S108, a decoding unit in the image processing model is called to decode the processed feature map, so as to obtain a stylized image corresponding to the image to be processed, specifically:
(1) inputting the processed feature map into a decoding unit;
(2) determining the processing result of the decoding unit as a stylized image corresponding to the image to be processed;
the decoding unit is used for amplifying the processed feature map based on nearest-neighbor sampling, and performing a second convolution operation on the amplified processed feature map to generate an intermediate image.
Specifically, the processed feature map is input to a decoding unit, and the processing result of the decoding unit is determined as a stylized image corresponding to the image to be processed, wherein the stylized image combines the content and the target style of the image to be processed.
After receiving the processed feature map, the decoding unit first amplifies it based on nearest-neighbor sampling and performs a second convolution operation on the amplified feature map. Fig. 4 is a schematic structural diagram of a decoding unit according to an embodiment of the present application; as shown in fig. 4, the decoding unit includes two amplification processing layers and two second convolution operation layers, arranged alternately.
In fig. 4, after receiving the processed feature map, the decoding unit amplifies it with the first amplification processing layer, performs the second convolution operation on the amplified result with the first second-convolution operation layer, amplifies the convolution result with the second amplification processing layer, and performs the second convolution operation on that result with the second second-convolution operation layer, obtaining an intermediate image.
In other embodiments, the decoding unit may be provided with other numbers of amplification processing layers and second convolution operation layers; the specific number may be determined by the scene requirements, as long as the amplification layers and the second convolution layers alternate and have the same number of layers.
In this embodiment, the decoding unit replaces ordinary deconvolution with a combination of nearest-neighbor upsampling and convolution. This prevents the output stylized image from exhibiting a checkerboard effect (a situation in which the boundary of some region of the image is not smooth and fails to connect smoothly with other regions), while ensuring that the output stylized image is the same size as the image to be processed.
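A decoder built along these lines, alternating nearest-neighbor upsampling with ordinary convolution rather than using transposed convolution, might look like the following sketch; the channel counts and kernel sizes are assumed example values:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Two amplification (nearest-neighbor upsampling) layers alternating with
    two second-convolution layers, as in Fig. 4; avoiding transposed convolution
    prevents checkerboard artifacts in the output."""
    def __init__(self, c: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),     # amplification layer 1
            nn.Conv2d(c, c // 2, kernel_size=3, padding=1),  # second convolution layer 1
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="nearest"),     # amplification layer 2
            nn.Conv2d(c // 2, 3, kernel_size=9, padding=4),  # second convolution layer 2
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

decoder = Decoder()
feat = torch.randn(1, 64, 64, 64)
print(decoder(feat).shape)  # torch.Size([1, 3, 256, 256])
```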
In this embodiment, after the decoding unit generates the intermediate image, the decoding unit further determines an average value and a variance of pixel values of each pixel block of the intermediate image, and performs normalization adjustment on the pixel values corresponding to each pixel block of the intermediate image according to the average value and the variance.
The specific adjustment is:

$$A = \frac{B - P}{Q}$$

where A is the adjusted pixel value of a pixel block, B is the pixel value of the pixel block before adjustment, P is the average value, and Q is the variance. In this embodiment, the brightness value and the contrast value of the intermediate image may be normalized in the same way.
In this embodiment, the decoding unit performs pixel value normalization adjustment on the intermediate image, which avoids black blocks appearing in the output stylized image.
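The per-block adjustment can be sketched as follows; the block size and the small epsilon added to the variance (to avoid division by zero) are assumptions introduced for the example:

```python
import torch

def normalize_blocks(image: torch.Tensor, block: int = 16, eps: float = 1e-5) -> torch.Tensor:
    """Normalize each (block x block) pixel block of the intermediate image
    to A = (B - P) / Q, with P the block mean and Q the block variance."""
    b, c, h, w = image.shape
    out = image.clone()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = image[:, :, y:y + block, x:x + block]
            p = patch.mean()           # P: average pixel value of the block
            q = patch.var() + eps      # Q: variance of the block (eps avoids /0)
            out[:, :, y:y + block, x:x + block] = (patch - p) / q
    return out

intermediate = torch.rand(1, 3, 64, 64)
adjusted = normalize_blocks(intermediate)
```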
In one embodiment, the decoding unit outputs the intermediate image as the processing result, and the stylized image is the intermediate image.
In another embodiment, the decoding unit outputs the intermediate image after the pixel value normalization adjustment as a processing result, and the stylized image is the intermediate image after the pixel value normalization adjustment.
In another embodiment, the decoding unit outputs the intermediate image with the pixel value, the brightness value and the contrast value all normalized and adjusted as the processing result, and the stylized image is the intermediate image with the pixel value, the brightness value and the contrast value all normalized and adjusted.
In this embodiment, the first convolution operation used by the target processing unit and the second convolution operation used by the decoding unit both consist of a depthwise separable convolution operation (Depthwise Convolution) and a pointwise convolution operation (Pointwise Convolution). Using depthwise and pointwise convolution in place of conventional convolution greatly increases the convolution speed and greatly reduces the size of the image processing model while obtaining the same feature expression; generally, when the convolution kernel size is 9, the model computation and model size can be reduced to one ninth of the original.
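A depthwise separable convolution factors a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 pointwise convolution that mixes channels. A minimal sketch, with an assumed 64-to-64 layer for the parameter comparison:

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise convolution (one filter per input channel, groups=c_in)
    followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Parameter comparison for an assumed 64-to-64 layer with a 3x3 kernel:
full = nn.Conv2d(64, 64, 3, padding=1)
sep = SeparableConv2d(64, 64, 3)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(full), count(sep))  # 36928 vs 4800 parameters: roughly an 8x reduction
```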
With the image processing method of the embodiments of the present application, different regions of the image to be processed can also be given different stylization treatments. Specifically, in step S104, determining the target style corresponding to the image to be processed is: dividing the image to be processed into a plurality of image units, and determining the target style corresponding to each image unit respectively.
Specifically, the user may divide the image to be processed on the mobile terminal into a plurality of image units. After receiving the user's image segmentation information, the mobile terminal divides the image to be processed into the plurality of image units according to that information, and determines the target style the user selected for each image unit as the target style corresponding to that unit.
Correspondingly, in step S106, selecting a target processing unit corresponding to the target style from a plurality of processing units included in the image processing model, and invoking the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map, where the method includes:
(1) selecting a target processing unit corresponding to a target style corresponding to each image unit from a plurality of processing units contained in the image processing model;
(2) extracting a unit feature map corresponding to each image unit from the feature maps;
(3) and calling each selected target processing unit, and performing stylization processing on the corresponding unit feature maps respectively according to the respective corresponding image styles to obtain the processed feature maps.
Specifically, after dividing the image to be processed into a plurality of image units, the mobile terminal selects, from the plurality of processing units contained in the image processing model, the target processing unit corresponding to the target style of each image unit; extracts the unit feature map corresponding to each image unit from the feature map of the image to be processed; and finally calls each selected target processing unit to stylize its corresponding unit feature map according to its corresponding image style, obtaining the processed feature map. The process by which a target processing unit stylizes its unit feature map is the same as explained above for step S106 and is not repeated here.
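Applying different styles to different image units of one shared feature map could be sketched as follows; the region boundaries, style ids, and unit definitions are illustrative assumptions:

```python
import torch
import torch.nn as nn

def stylize_regions(feat: torch.Tensor,
                    style_units: nn.ModuleList,
                    region_styles: list[tuple[slice, slice, int]]) -> torch.Tensor:
    """Run each spatial region of the shared feature map through the
    target processing unit selected for that region's style."""
    out = feat.clone()
    for rows, cols, style_id in region_styles:
        unit_feat = feat[:, :, rows, cols]           # extract the unit feature map
        out[:, :, rows, cols] = style_units[style_id](unit_feat)
    return out

# Example: left half in style 0, right half in style 2 (assumed units).
units = nn.ModuleList(
    nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.InstanceNorm2d(64))
    for _ in range(3)
)
feat = torch.randn(1, 64, 128, 128)
regions = [(slice(None), slice(0, 64), 0), (slice(None), slice(64, 128), 2)]
styled = stylize_regions(feat, units, regions)
```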
Therefore, because the processing units in the image processing model share the coding unit, the method of the embodiments of the present application can apply different stylization to different regions of the same image to be processed, changing different parts of the image into different styles and enriching the diversity of image processing.
In summary, the embodiment of the present application has at least the following beneficial effects:
(1) the encoding process and the stylization process in the image processing model are independent of each other, and the plurality of processing units share the coding unit and the decoding unit, which greatly reduces the model volume and improves the image processing speed, making the method well suited to mobile terminals;
(2) the image processing model comprises a plurality of processing units, each processing unit corresponds to one image style, and for the image style specified by a user, only the corresponding processing unit needs to be called, so that the processing of multiple image styles is realized through the same image processing model;
(3) when the user switches the image style, because the plurality of processing units share one coding unit, the image does not need to be encoded again; the feature map obtained from the previous encoding is simply stylized again, which greatly reduces the amount of image computation;
(4) in the image processing model, a plurality of processing units share one coding unit, so that different stylization processing can be carried out on different areas of the same image, and the diversity of the stylization processing is improved.
Corresponding to the above method, an embodiment of the present application further provides an image processing apparatus, and fig. 5 is a schematic diagram of module composition of the image processing apparatus according to the embodiment of the present application, as shown in fig. 5, the apparatus includes:
the encoding unit calling module 51 is configured to acquire an image to be processed, input the image to be processed into an image processing model, call an encoding unit in the image processing model, and encode the image to be processed to obtain a feature map corresponding to the image to be processed;
the image style determining module 52 is configured to determine a target style corresponding to the image to be processed according to a trigger operation of a user; the target style is an image style triggered by a user;
the processing unit calling module 53 is configured to select, in real time, a target processing unit corresponding to the target style from a plurality of processing units included in the image processing model, and call the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map; wherein each processing unit corresponds to an image style;
and a decoding unit calling module 54, configured to call a decoding unit in the image processing model, and decode the processed feature map to obtain a stylized image corresponding to the image to be processed.
Optionally, the processing unit invoking module 53 is specifically configured to:
inputting the feature map to the target processing unit in the image processing model;
taking the processing result of the target processing unit as a processed feature map;
and the target processing unit is used for respectively carrying out first convolution operation and real-time normalization processing on the feature map for a first preset number of times according to the target style.
Optionally, the apparatus further comprises a training module for:
acquiring a training sample graph, calling a coding unit in the image processing model, and coding the training sample graph respectively to obtain a sample characteristic graph corresponding to the training sample graph;
sequentially performing the first convolution operation and the real-time normalization processing on the sample characteristic diagram to obtain a first processing result;
calculating a loss difference value between the first processing result and a preset processing result according to a preset loss function; the preset loss function is a linear combination of a content loss function, a style loss function and an overall variance loss function;
repeating the steps of the first convolution operation, the real-time normalization processing, and the loss difference calculation until the size relationship between the loss difference obtained by the current calculation and that obtained by the previous calculation meets the preset size requirement;
and determining the times corresponding to the steps of repeating the first convolution operation, the real-time normalization processing and the loss difference calculation as the first preset times.
Optionally, the decoding unit invoking module 54 is specifically configured to:
inputting the processed feature map to the decoding unit;
determining the processing result of the decoding unit as a stylized image corresponding to the image to be processed;
the decoding unit is used for amplifying the processed feature map based on nearest-neighbor sampling, and performing a second convolution operation on the amplified processed feature map to generate an intermediate image.
Optionally, the decoding unit calling module 54 is further configured to:
after the intermediate image is generated, determine the average value and the variance of the pixel values of each pixel block of the intermediate image;
and normalize the pixel values corresponding to each pixel block of the intermediate image according to the average value and the variance.
Optionally,
the image style determination module 52 is specifically configured to:
dividing the image to be processed into a plurality of image units, and respectively determining the target style corresponding to each image unit;
the processing unit invoking module 53 is specifically configured to:
selecting a target processing unit corresponding to a target style corresponding to each image unit from a plurality of processing units contained in the image processing model;
extracting a unit feature map corresponding to each image unit from the feature maps;
and calling each selected target processing unit, and performing stylization processing on the corresponding unit feature maps respectively according to the respective corresponding image styles to obtain the processed feature maps.
Optionally, the first convolution operation includes a depthwise separable convolution operation and a pointwise (pixel-by-pixel) convolution operation.
According to the embodiments of the present application, the image to be processed is stylized according to the target style specified by the user to obtain a corresponding stylized image, so the display effect of the image can be adjusted and the image can meet the user's display requirements. Moreover, the image processing model invoked in the embodiments of the present application includes a coding unit, a plurality of processing units, and a decoding unit, where each processing unit corresponds to one image style; the processing units for the plurality of image styles therefore share the coding unit. After the image to be processed has been stylized once, if the user switches the image style, the image does not need to be encoded again: stylization can be performed anew directly on the feature map obtained from the previous encoding. A repeated encoding pass is thus avoided in the scenario where the user switches image styles, which improves the image processing speed. In addition, compared with a structure in which each image style is configured with its own coding unit and decoding unit, the model in which the plurality of processing units share one coding unit and one decoding unit eliminates the duplicated coding and decoding units, which reduces the volume and data amount of the image processing model and further improves the image processing speed.
Further, based on the foregoing method, an embodiment of the present application further provides an image processing apparatus, and fig. 6 is a schematic structural diagram of the image processing apparatus provided in an embodiment of the present application.
As shown in fig. 6, the image processing apparatus may vary considerably in configuration or performance, and may include one or more processors 701 and a memory 702, where the memory 702 may store one or more applications or data. The memory 702 may be transient or persistent storage. The application program stored in the memory 702 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the image processing apparatus. Further, the processor 701 may be configured to communicate with the memory 702 and execute, on the image processing apparatus, the series of computer-executable instructions in the memory 702. The image processing apparatus may also include one or more power supplies 703, one or more wired or wireless network interfaces 704, one or more input/output interfaces 705, one or more keyboards 706, and the like.
In a specific embodiment, the image processing apparatus includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the processes of the above-mentioned embodiment of the image processing method, and specifically includes the following steps:
acquiring an image to be processed, inputting the image to be processed into an image processing model, calling a coding unit in the image processing model, and coding the image to be processed to obtain a feature map corresponding to the image to be processed;
determining a target style corresponding to the image to be processed according to the triggering operation of a user; the target style is an image style triggered by a user;
selecting a target processing unit corresponding to the target style in real time from a plurality of processing units included in the image processing model, and calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map; wherein each processing unit corresponds to an image style;
and calling a decoding unit in the image processing model, and decoding the processed feature map to obtain a stylized image corresponding to the image to be processed.
Optionally, when the computer-executable instructions are executed, calling the target processing unit to stylize the feature map according to the target style to obtain a processed feature map includes:
inputting the feature map to the target processing unit in the image processing model;
taking the processing result of the target processing unit as a processed feature map;
and the target processing unit is used for respectively carrying out first convolution operation and real-time normalization processing on the feature map for a first preset number of times according to the target style.
Optionally, when the computer-executable instructions are executed, the method further includes:
acquiring a training sample graph, calling a coding unit in the image processing model, and coding the training sample graph respectively to obtain a sample characteristic graph corresponding to the training sample graph;
sequentially performing the first convolution operation and the real-time normalization processing on the sample characteristic diagram to obtain a first processing result;
calculating a loss difference value between the first processing result and a preset processing result according to a preset loss function; the preset loss function is a linear combination of a content loss function, a style loss function and an overall variance loss function;
repeating the steps of the first convolution operation, the real-time normalization processing, and the loss difference calculation until the size relationship between the loss difference obtained by the current calculation and that obtained by the previous calculation meets the preset size requirement;
and determining the times corresponding to the steps of repeating the first convolution operation, the real-time normalization processing and the loss difference calculation as the first preset times.
Optionally, when the computer-executable instructions are executed, calling a decoding unit in the image processing model to decode the processed feature map to obtain a stylized image corresponding to the image to be processed includes:
inputting the processed feature map to the decoding unit;
determining the processing result of the decoding unit as a stylized image corresponding to the image to be processed;
the decoding unit is used for amplifying the processed feature map based on nearest-neighbor sampling, and performing a second convolution operation on the amplified processed feature map to generate an intermediate image.
Optionally, when the computer-executable instructions are executed, the method further includes, after the decoding unit generates the intermediate image:
determining a mean and a variance of pixel values of respective pixel blocks of the intermediate image;
and respectively carrying out normalization adjustment on pixel values corresponding to all pixel blocks of the intermediate image according to the average value and the variance.
Optionally, when the computer-executable instructions are executed,
determining a target style corresponding to the image to be processed, including:
dividing the image to be processed into a plurality of image units, and respectively determining the target style corresponding to each image unit;
calling the target processing unit to perform stylization processing on the feature graph according to the target style, wherein the stylization processing comprises the following steps:
selecting a target processing unit corresponding to a target style corresponding to each image unit from a plurality of processing units contained in the image processing model;
extracting a unit feature map corresponding to each image unit from the feature maps;
and calling each selected target processing unit, and performing stylization processing on the corresponding unit feature maps respectively according to the respective corresponding image styles to obtain the processed feature maps.
Optionally, when the computer-executable instructions are executed, the first convolution operation includes a depthwise separable convolution operation and a pointwise (pixel-by-pixel) convolution operation.
According to the embodiments of the present application, the image to be processed is stylized according to the target style specified by the user to obtain a corresponding stylized image, so the display effect of the image can be adjusted and the image can meet the user's display requirements. Moreover, the image processing model invoked in the embodiments of the present application includes a coding unit, a plurality of processing units, and a decoding unit, where each processing unit corresponds to one image style; the processing units for the plurality of image styles therefore share the coding unit. After the image to be processed has been stylized once, if the user switches the image style, the image does not need to be encoded again: stylization can be performed anew directly on the feature map obtained from the previous encoding. A repeated encoding pass is thus avoided in the scenario where the user switches image styles, which improves the image processing speed. In addition, compared with a structure in which each image style is configured with its own coding unit and decoding unit, the model in which the plurality of processing units share one coding unit and one decoding unit eliminates the duplicated coding and decoding units, which reduces the volume and data amount of the image processing model and further improves the image processing speed.
Further, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (11)

1. An image processing method, applied to a mobile terminal, the method comprising:
acquiring an image to be processed, inputting the image to be processed into an image processing model, calling a coding unit in the image processing model, and coding the image to be processed to obtain a feature map corresponding to the image to be processed;
determining a target style corresponding to the image to be processed according to a triggering operation of a user, wherein the target style is the image style triggered by the user;
selecting, in real time, a target processing unit corresponding to the target style from a plurality of processing units included in the image processing model, and calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map, wherein each processing unit corresponds to one image style;
calling a decoding unit in the image processing model to decode the processed feature map to obtain a stylized image corresponding to the image to be processed, wherein the stylized image comprises the content of the image to be processed and the target style;
wherein calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain the processed feature map comprises:
inputting the feature map to the target processing unit in the image processing model;
taking the processing result of the target processing unit as a processed feature map;
wherein the target processing unit comprises a plurality of first convolution operation layers and a plurality of real-time normalization processing layers arranged alternately, and is configured to perform the first convolution operation and the real-time normalization processing on the feature map a first preset number of times, respectively, according to the target style.
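A hypothetical sketch of such a target processing unit follows: a stack that alternates the first convolution operation with a normalization layer for a first preset number of rounds. It assumes the real-time normalization can be modeled as instance normalization (which normalizes each sample's feature map on the fly at inference time) and uses plain 3×3 convolutions; both choices, like the channel count and repeat count, are assumptions of the sketch.

```python
import torch.nn as nn

class StyleProcessingUnit(nn.Module):
    """Alternating first-convolution and normalization layers."""
    def __init__(self, channels=64, num_repeats=5):
        super().__init__()
        layers = []
        for _ in range(num_repeats):  # the 'first preset number of times'
            layers.append(nn.Conv2d(channels, channels, 3, padding=1))
            layers.append(nn.InstanceNorm2d(channels, affine=True))
            layers.append(nn.ReLU())
        self.body = nn.Sequential(*layers)

    def forward(self, feature_map):
        return self.body(feature_map)  # the processed feature map
```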
2. The method of claim 1, further comprising:
acquiring a training sample image, calling the coding unit in the image processing model, and coding the training sample image to obtain a sample feature map corresponding to the training sample image;
sequentially performing the first convolution operation and the real-time normalization processing on the sample feature map to obtain a first processing result;
calculating a loss difference value between the first processing result and a preset processing result according to a preset loss function, wherein the preset loss function is a linear combination of a content loss function, a style loss function and a total variation loss function;
repeating the steps of the first convolution operation, the real-time normalization processing and the loss difference calculation until the loss difference obtained by the current calculation and the loss difference obtained by the previous calculation satisfy a preset magnitude requirement;
and determining the number of times the steps of the first convolution operation, the real-time normalization processing and the loss difference calculation are repeated as the first preset number of times.
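The sketch below illustrates this procedure for determining the first preset number of times: it combines the three named losses linearly and repeats a step function until consecutive loss values differ by less than a threshold. The loss weights, the threshold eps, and step_fn (assumed to run one round of convolution, normalization and loss calculation and return the loss) are all hypothetical stand-ins.

```python
def total_variation_loss(img):
    """Total variation of an (N, C, H, W) torch tensor: penalizes
    differences between neighboring pixels, favoring smooth output."""
    return ((img[..., :, 1:] - img[..., :, :-1]).abs().mean()
            + (img[..., 1:, :] - img[..., :-1, :]).abs().mean())

def preset_loss(content_loss, style_loss, tv_loss,
                w_content=1.0, w_style=10.0, w_tv=1e-4):
    # Linear combination of content, style and total variation losses;
    # the weights are illustrative assumptions.
    return w_content * content_loss + w_style * style_loss + w_tv * tv_loss

def first_preset_number(step_fn, eps=1e-3, max_steps=10000):
    """Repeat step_fn until two consecutive loss values differ by less
    than eps (the 'preset magnitude requirement'); the returned count
    plays the role of the first preset number of times."""
    prev = float('inf')
    for step in range(1, max_steps + 1):
        loss = float(step_fn())
        if abs(prev - loss) < eps:
            return step
        prev = loss
    return max_steps
```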
3. The method according to claim 1, wherein calling a decoding unit in the image processing model to decode the processed feature map to obtain a stylized image corresponding to the image to be processed comprises:
inputting the processed feature map to the decoding unit;
determining the processing result of the decoding unit as a stylized image corresponding to the image to be processed;
wherein the decoding unit is configured to enlarge the processed feature map based on nearest-neighbor sampling, and to perform a second convolution operation on the enlarged processed feature map to generate an intermediate image.
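A minimal sketch of such a decoding unit, assuming a 2× enlargement factor and a single 3×3 convolution as the second convolution operation (both assumptions of the sketch):

```python
import torch.nn as nn

class DecodingUnit(nn.Module):
    """Nearest-neighbor enlargement followed by the second convolution."""
    def __init__(self, in_channels=64, out_channels=3):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
        self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)

    def forward(self, processed_feature_map):
        enlarged = self.upsample(processed_feature_map)  # nearest-neighbor
        return self.conv(enlarged)                       # intermediate image
```

Upsampling by nearest-neighbor interpolation before convolving is a common alternative to transposed convolution; it avoids the checkerboard artifacts that transposed convolutions can introduce.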
4. The method of claim 3, wherein after the decoding unit generates the intermediate image, the method further comprises:
determining a mean and a variance of the pixel values of each pixel block of the intermediate image;
and performing normalization adjustment on the pixel values of each pixel block of the intermediate image respectively according to the corresponding mean and variance.
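The per-block adjustment can be sketched as follows: split the intermediate image into fixed-size pixel blocks, compute each block's mean and variance, and normalize each block with its own statistics. The block size of 8 and the assumption that the image height and width divide evenly by it are choices of this sketch.

```python
import torch

def normalize_pixel_blocks(image, block=8, eps=1e-5):
    """Normalize each (block x block) tile of an (N, C, H, W) image by
    that tile's own mean and variance. Assumes H and W are multiples
    of block."""
    n, c, h, w = image.shape
    # Fold H and W into a grid of (block x block) tiles:
    # shape becomes (N, C, H//block, W//block, block, block).
    tiles = image.unfold(2, block, block).unfold(3, block, block)
    mean = tiles.mean(dim=(-1, -2), keepdim=True)
    var = tiles.var(dim=(-1, -2), keepdim=True)
    tiles = (tiles - mean) / (var + eps).sqrt()
    # Stitch the tiles back into an (N, C, H, W) image.
    tiles = tiles.permute(0, 1, 2, 4, 3, 5).contiguous()
    return tiles.view(n, c, h, w)

# Example: normalize a random 64x64 RGB intermediate image.
out = normalize_pixel_blocks(torch.rand(1, 3, 64, 64))
```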
5. The method of claim 1, wherein determining the target style corresponding to the image to be processed comprises:
dividing the image to be processed into a plurality of image units, and respectively determining the target style corresponding to each image unit;
and calling the target processing unit to perform stylization processing on the feature map according to the target style comprises:
selecting, from the plurality of processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit;
extracting a unit feature map corresponding to each image unit from the feature map;
and calling each selected target processing unit to perform stylization processing on its corresponding unit feature map according to its corresponding image style, so as to obtain the processed feature maps.
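As a sketch of this per-unit stylization, the function below maps rectangular regions of the feature map to style names, runs each region through the processing unit of that style, and decodes the recombined result. The region boxes, style names, and the units/decoder modules (for example, those from the MultiStyleModel sketch above) are hypothetical.

```python
import torch

def stylize_image_units(feature_map, unit_boxes, units, decoder):
    """unit_boxes: {(top, left, bottom, right): style_name} regions of
    the feature map, one entry per image unit."""
    with torch.no_grad():  # inference-only sketch
        processed = feature_map.clone()
        for (top, left, bottom, right), style in unit_boxes.items():
            unit_features = feature_map[:, :, top:bottom, left:right]
            # Stylize this unit's features with its own style's unit.
            processed[:, :, top:bottom, left:right] = units[style](unit_features)
        return decoder(processed)  # decode the recombined feature map
```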
6. The method of claim 2, wherein the first convolution operation comprises a depthwise separable convolution operation and a pointwise convolution operation.
7. An image processing apparatus, applied to a mobile terminal, comprising:
the encoding unit calling module is used for acquiring an image to be processed, inputting the image to be processed into an image processing model, calling an encoding unit in the image processing model, and encoding the image to be processed to obtain a feature map corresponding to the image to be processed;
the image style determining module is used for determining a target style corresponding to the image to be processed according to a triggering operation of a user, wherein the target style is the image style triggered by the user;
the processing unit calling module is used for selecting, in real time, a target processing unit corresponding to the target style from a plurality of processing units included in the image processing model, and calling the target processing unit to perform stylization processing on the feature map according to the target style to obtain a processed feature map, wherein each processing unit corresponds to one image style;
the decoding unit calling module is used for calling a decoding unit in the image processing model to decode the processed feature map to obtain a stylized image corresponding to the image to be processed, wherein the stylized image comprises the content of the image to be processed and the target style;
the processing unit calling module is specifically configured to:
inputting the feature map to the target processing unit in the image processing model;
taking the processing result of the target processing unit as a processed feature map;
wherein the target processing unit comprises a plurality of first convolution operation layers and a plurality of real-time normalization processing layers arranged alternately, and is configured to perform the first convolution operation and the real-time normalization processing on the feature map a first preset number of times, respectively, according to the target style.
8. The apparatus of claim 7, further comprising a training module to:
acquiring a training sample image, calling the coding unit in the image processing model, and coding the training sample image to obtain a sample feature map corresponding to the training sample image;
sequentially performing the first convolution operation and the real-time normalization processing on the sample feature map to obtain a first processing result;
calculating a loss difference value between the first processing result and a preset processing result according to a preset loss function, wherein the preset loss function is a linear combination of a content loss function, a style loss function and a total variation loss function;
repeating the steps of the first convolution operation, the real-time normalization processing and the loss difference calculation until the loss difference obtained by the current calculation and the loss difference obtained by the previous calculation satisfy a preset magnitude requirement;
and determining the number of times the steps of the first convolution operation, the real-time normalization processing and the loss difference calculation are repeated as the first preset number of times.
9. The apparatus of claim 7, wherein the decoding unit calling module is specifically configured to:
inputting the processed feature map to the decoding unit;
determining the processing result of the decoding unit as a stylized image corresponding to the image to be processed;
wherein the decoding unit is configured to enlarge the processed feature map based on nearest-neighbor sampling, and to perform a second convolution operation on the enlarged processed feature map to generate an intermediate image.
10. The apparatus of claim 9, wherein the decoding unit calling module is further configured to:
after generating the intermediate image, determining a mean and a variance of the pixel values of each pixel block of the intermediate image;
and performing normalization adjustment on the pixel values of each pixel block of the intermediate image respectively according to the corresponding mean and variance.
11. The apparatus of claim 7,
wherein the image style determining module is specifically configured to:
dividing the image to be processed into a plurality of image units, and respectively determining the target style corresponding to each image unit;
the processing unit calling module is specifically configured to:
selecting, from the plurality of processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit;
extracting a unit feature map corresponding to each image unit from the feature map;
and calling each selected target processing unit to perform stylization processing on its corresponding unit feature map according to its corresponding image style, so as to obtain the processed feature maps.
CN201711455985.7A 2017-12-28 2017-12-28 Image processing method and device Active CN107948529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711455985.7A CN107948529B (en) 2017-12-28 2017-12-28 Image processing method and device


Publications (2)

Publication Number Publication Date
CN107948529A CN107948529A (en) 2018-04-20
CN107948529B true CN107948529B (en) 2020-11-06

Family

ID=61940671


Country Status (1)

Country Link
CN (1) CN107948529B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898556A (en) * 2018-05-24 2018-11-27 麒麟合盛网络技术股份有限公司 A kind of image processing method and device of three-dimensional face
CN108985317B (en) * 2018-05-25 2022-03-01 西安电子科技大学 Image classification method based on separable convolution and attention mechanism
CN108846835B (en) * 2018-05-31 2020-04-14 西安电子科技大学 Image change detection method based on depth separable convolutional network
CN108776959B (en) * 2018-07-10 2021-08-06 Oppo(重庆)智能科技有限公司 Image processing method and device and terminal equipment
CN109064428B (en) * 2018-08-01 2021-04-13 Oppo广东移动通信有限公司 Image denoising processing method, terminal device and computer readable storage medium
CN111091593B (en) * 2018-10-24 2024-03-22 深圳云天励飞技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111124398A (en) * 2018-10-31 2020-05-08 中国移动通信集团重庆有限公司 User interface generation method, device, equipment and storage medium
CN109510943A (en) * 2018-12-17 2019-03-22 三星电子(中国)研发中心 Method and apparatus for shooting image
CN111383289A (en) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 Image processing method, image processing device, terminal equipment and computer readable storage medium
CN111325252B (en) * 2020-02-12 2022-08-26 腾讯科技(深圳)有限公司 Image processing method, apparatus, device, and medium
CN111784565B (en) * 2020-07-01 2021-10-29 北京字节跳动网络技术有限公司 Image processing method, migration model training method, device, medium and equipment
CN112241941B (en) * 2020-10-20 2024-03-22 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for acquiring image
CN113052757A (en) * 2021-03-08 2021-06-29 Oppo广东移动通信有限公司 Image processing method, device, terminal and storage medium
CN114422682B (en) * 2022-01-28 2024-02-02 安谋科技(中国)有限公司 Shooting method, electronic device and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651766A (en) * 2016-12-30 2017-05-10 深圳市唯特视科技有限公司 Image style migration method based on deep convolutional neural network
CN106778928A (en) * 2016-12-21 2017-05-31 广州华多网络科技有限公司 Image processing method and device
CN106847294A (en) * 2017-01-17 2017-06-13 百度在线网络技术(北京)有限公司 Audio-frequency processing method and device based on artificial intelligence
CN106886975A (en) * 2016-11-29 2017-06-23 华南理工大学 It is a kind of can real time execution image stylizing method
CN107240085A (en) * 2017-05-08 2017-10-10 广州智慧城市发展研究院 A kind of image interfusion method and system based on convolutional neural networks model
CN107277615A (en) * 2017-06-30 2017-10-20 北京奇虎科技有限公司 Live stylized processing method, device, computing device and storage medium
CN107369189A (en) * 2017-07-21 2017-11-21 成都信息工程大学 The medical image super resolution ratio reconstruction method of feature based loss
CN107464210A (en) * 2017-07-06 2017-12-12 浙江工业大学 A kind of image Style Transfer method based on production confrontation network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100924689B1 (en) * 2007-12-17 2009-11-03 한국전자통신연구원 Apparatus and method for transforming an image in a mobile device



Similar Documents

Publication Publication Date Title
CN107948529B (en) Image processing method and device
CN110062272B (en) Video data processing method and related device
CN111275653B (en) Image denoising method and device
CN109817170B (en) Pixel compensation method and device and terminal equipment
CN111882627A (en) Image processing method, video processing method, device, equipment and storage medium
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
CN103745430A (en) Rapid beautifying method of digital image
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN115294055A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113421312A (en) Method and device for coloring black and white video, storage medium and terminal
CN115529834A (en) Image processing method and image processing apparatus
CN111292234B (en) Panoramic image generation method and device
CN112991497B (en) Method, device, storage medium and terminal for coloring black-and-white cartoon video
JP6155349B2 (en) Method, apparatus and computer program product for reducing chromatic aberration in deconvolved images
CN112788236B (en) Video frame processing method and device, electronic equipment and readable storage medium
CN114663570A (en) Map generation method and device, electronic device and readable storage medium
CN116645302A (en) Image enhancement method, device, intelligent terminal and computer readable storage medium
CN110060210B (en) Image processing method and related device
CN112446848A (en) Image processing method and device and electronic equipment
CN114331927A (en) Image processing method, storage medium and terminal equipment
CN111179158A (en) Image processing method, image processing apparatus, electronic device, and medium
CN116485979B (en) Mapping relation calculation method, color calibration method and electronic equipment
CN113489901B (en) Shooting method and device thereof
CN111583104B (en) Light spot blurring method and device, storage medium and computer equipment
US11962917B2 (en) Color adjustment method, color adjustment device, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 207A, 2nd floor, No. 2 Information Road, Haidian District, Beijing 100085 (1-8th floor, Building D, 2-2, Beijing Shichuang High-Tech Development Corporation)
Applicant after: QILIN HESHENG NETWORK TECHNOLOGY Inc.
Address before: Room 207A, 2nd floor, No. 2 Information Road, Haidian District, Beijing 100085 (1-8th floor, Building D, 2-2, Beijing Shichuang High-Tech Development Corporation)
Applicant before: QILIN HESHENG NETWORK TECHNOLOGY Inc.
GR01 Patent grant