CN107948529A - Image processing method and device - Google Patents


Info

Publication number: CN107948529A
Granted publication: CN107948529B
Application number: CN201711455985.7A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, processing, feature map, style, unit
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 涂治国, 张轩哲, 李涛
Original and current assignee: Beijing Kylin Hesheng Network Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Beijing Kylin Hesheng Network Technology Co Ltd, with priority to CN201711455985.7A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

An embodiment of the present application provides an image processing method and device. The method includes: obtaining an image to be processed, inputting it into an image processing model, and calling the coding unit in the model to encode the image and obtain its corresponding feature map; determining the target style corresponding to the image; selecting, among the multiple processing units included in the model, the target processing unit corresponding to the target style, and calling the target processing unit to stylize the feature map according to the target style, obtaining a processed feature map, where each processing unit corresponds to one image style; and calling the decoding unit in the model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed. Through this embodiment, the display effect of an image can be adjusted so that the image meets the user's display requirements.

Description

Image processing method and device
Technical field
This application relates to the field of image processing, and in particular to an image processing method and device.
Background technology
At present, image taking has become a kind of common leisure way, user can use the shooting camera of specialty into Row shooting, can also carry out image taking using mobile terminal such as mobile phone, tablet computer, wherein.Figure is carried out using mobile terminal As shooting is based on its is convenient, fast, need not carry the advantage of professional equipment, have become the selection of more and more users.
It is subject to shoot light during image taking, shoots the limitation such as occasion, technique for taking, the image that user shoots is difficult to keep away Exempt from there are display defect, do not reach the preferable effect of user, for example brightness of image is too low, over-exposed etc..Therefore, scheme to improve Image quality amount, it is necessary to a kind of image processing method is provided, is adjusted with the display effect to image, makes image meet user's Display requires.
Summary of the invention
The purpose of the embodiments of the present application is to provide an image processing method and device that can adjust the display effect of an image so that the image meets the user's display requirements.
To achieve the above purpose, the embodiments of the present application are realized as follows.
In a first aspect, an embodiment of the present application provides an image processing method, applied to a mobile terminal, including:
obtaining an image to be processed, inputting the image into an image processing model, and calling the coding unit in the image processing model to encode the image and obtain the feature map corresponding to the image;
determining the target style corresponding to the image according to a trigger operation of the user, wherein the target style is the image style triggered by the user;
selecting in real time, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style, and calling the target processing unit to stylize the feature map according to the target style, obtaining a processed feature map, wherein each processing unit corresponds to one image style;
calling the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed.
In a second aspect, an embodiment of the present application provides an image processing device, applied to a mobile terminal, including:
a coding-unit calling module, configured to obtain the image to be processed, input it into the image processing model, and call the coding unit in the model to encode the image and obtain its corresponding feature map;
an image-style determining module, configured to determine the target style corresponding to the image according to the user's trigger operation, wherein the target style is the image style triggered by the user;
a processing-unit calling module, configured to select in real time, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style, and call it to stylize the feature map according to the target style, obtaining the processed feature map, wherein each processing unit corresponds to one image style;
a decoding-unit calling module, configured to call the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed.
In a third aspect, an embodiment of the present application provides image processing equipment, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method described in the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method described in the first aspect above.
Through the embodiments of the present application, an image to be processed can be stylized according to the target style specified by the user, yielding the corresponding stylized image, thereby adjusting the display effect of the image so that it meets the user's display requirements. Moreover, the image processing model called in these embodiments includes a coding unit, multiple processing units and a decoding unit, where each processing unit corresponds to one image style, so the processing units for the various image styles share the coding unit. After the image has been stylized once, if the user switches the image style, the image does not need to be re-encoded: the previously encoded feature map can be stylized again directly. Thus, in the scenario where the user switches image styles, the repeated encoding process is eliminated and the image processing speed is improved. Furthermore, because the model merges multiple processing units that share one coding unit and one decoding unit, compared with a structure in which each image style is given its own coding unit and decoding unit, the duplicate coding and decoding units are eliminated, reducing the size and data volume of the model and further improving processing speed.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments described in this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the image processing method provided by an embodiment of this application;
Fig. 2 is a structural diagram of the image processing model provided by an embodiment of this application;
Fig. 3 is a structural diagram of the target processing unit provided by an embodiment of this application;
Fig. 4 is a structural diagram of the decoding unit provided by an embodiment of this application;
Fig. 5 is a module composition diagram of the image processing device provided by an embodiment of this application;
Fig. 6 is a structural diagram of the image processing equipment provided by an embodiment of this application.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the technical solutions in this application, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the scope protected by this application.
In order to adjust the display effect of an image so that it meets the user's display requirements, the embodiments of the present application provide an image processing method and an image processing device. The method can be applied to and performed by a mobile terminal. The mobile terminals involved in this application include, but are not limited to, smart terminals such as mobile phones, tablet computers, computers and wearable devices.
Fig. 1 is a flow diagram of the image processing method provided by an embodiment of this application. As shown in Fig. 1, the method includes the following steps:
Step S102: obtain the image to be processed, input it into the image processing model, and call the coding unit in the model to encode the image and obtain its corresponding feature map;
Step S104: determine the target style corresponding to the image according to the user's trigger operation, where the target style is the image style triggered by the user;
Step S106: among the multiple processing units included in the image processing model, select in real time the target processing unit corresponding to the target style, and call it to stylize the above feature map according to the target style, obtaining the processed feature map; each processing unit corresponds to one image style;
Step S108: call the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed.
In the embodiments of this application, stylization refers to adjusting the image to be processed toward the target style. The target style may come from an artistic picture, and the resulting stylized image combines the content of the image to be processed with the target style. For example, if the image to be processed is an animal picture and the target style is the style of Van Gogh's classical famous painting "The Starry Night", then through the stylization in this embodiment the animal picture is processed into the style of "The Starry Night"; the resulting stylized image is an animal picture with the style of "The Starry Night".
Fig. 2 is a structural diagram of the image processing model provided by an embodiment of this application. As shown in Fig. 2, the image processing model includes one coding unit, multiple processing units and one decoding unit, where the output of the coding unit serves as the input of a processing unit, and the output of the processing unit serves as the input of the decoding unit. The processing units in Fig. 2 implement the stylization described above, and each processing unit corresponds to one image style. In Fig. 2, the whole formed by the multiple processing units may be called the residual module.
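The structure of Fig. 2 and the feature-map reuse it enables can be sketched as follows. This is a minimal illustrative model, not the patent's actual networks: the encoder and decoder are identity stand-ins, the per-style units are plain callables, and the caching scheme is an assumption about how "no repeated encoding when the user switches styles" could be realized.

```python
import numpy as np

class StyleTransferModel:
    """Minimal sketch of Fig. 2: one shared encoder, one processing
    unit per image style, one shared decoder."""

    def __init__(self, style_units):
        # style_units: dict mapping style name -> callable(feature_map)
        self.style_units = style_units
        self._cached_input = None
        self._cached_features = None

    def encode(self, image):
        # Stand-in "coding unit": identity; a real model would apply
        # convolution layers to produce the feature map.
        return image.astype(np.float32)

    def decode(self, features):
        # Stand-in "decoding unit": identity; a real model would
        # upsample and convolve back to an image.
        return features

    def stylize(self, image, style):
        # Re-use the cached feature map when only the style changed,
        # so switching styles skips the encoding step (the benefit
        # the application claims for the shared coding unit).
        if self._cached_input is None or not np.array_equal(image, self._cached_input):
            self._cached_input = image
            self._cached_features = self.encode(image)
        processed = self.style_units[style](self._cached_features)
        return self.decode(processed)
```

Calling `stylize` twice on the same image with different styles runs the encoder only once, which is exactly the repeated-encoding saving described above.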
In step S102 above, the image to be processed may be an image shot by the user with the mobile terminal. After obtaining the image, the mobile terminal inputs it into the image processing model in Fig. 2 and calls the coding unit in Fig. 2 to encode it and obtain its corresponding feature map. Specifically, the image is input into the coding unit so that the coding unit encodes it, and the output of the coding unit is taken as the feature map of the image. During encoding, the coding unit may scale the size of the image by a certain factor.
In step S104 above, the mobile terminal determines the target style corresponding to the image according to the user's trigger operation, where the target style is the image style triggered by the user. The trigger operation may be a selection operation.
Specifically, after obtaining the image to be processed, the mobile terminal may display multiple image styles on its screen for the user to choose from. According to the user's selection operation, the mobile terminal determines the image style the user selects first as the target style of the image; alternatively, after completing the image processing with an image style the user has already selected, the mobile terminal determines the image style the user selects next as the target style. In one embodiment, the mobile terminal displays multiple candidate images, each corresponding to one image style; the user selects an image style by tapping a candidate image on the screen, and the mobile terminal determines the selected image style as the target style.
In step S106 above, since each processing unit in the image processing model corresponds to one image style, after determining the target style the mobile terminal needs to select, among the multiple processing units included in the model, the target processing unit corresponding to the target style, and then call it to stylize the above feature map according to the target style, obtaining the processed feature map. Since the user's trigger operation occurs in real time, the mobile terminal can select the target processing unit in real time.
Each processing unit corresponds to one image style and holds the processing parameters of that style (including an offset parameter and a scale-transformation parameter), so that the processing unit can apply the corresponding stylization to the feature map according to its parameters.
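One way a per-style offset and scale-transformation parameter pair could act on a feature map is sketched below: normalize each channel, then apply the style's scale and offset. The normalization step, parameter names and shapes are assumptions for illustration; the patent only states that each unit holds an offset parameter and a scale-transformation parameter.

```python
import numpy as np

def stylize_feature_map(fmap, scale, offset, eps=1e-5):
    """Hypothetical per-style transform: channel-wise normalization
    followed by the style's scale-transformation and offset.
    fmap: (channels, height, width); scale, offset: (channels,)."""
    mean = fmap.mean(axis=(1, 2), keepdims=True)
    var = fmap.var(axis=(1, 2), keepdims=True)
    normalized = (fmap - mean) / np.sqrt(var + eps)
    # Different (scale, offset) pairs yield different stylizations
    # from the same shared code path.
    return scale[:, None, None] * normalized + offset[:, None, None]
```

Under this reading, switching styles only swaps the small parameter vectors, which is consistent with the model-size saving the application describes.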
In step S106 above, calling the target processing unit to stylize the feature map according to the target style and obtain the processed feature map specifically includes:
(1) inputting the feature map into the target processing unit of the image processing model;
(2) taking the processing result of the target processing unit as the processed feature map;
wherein the target processing unit is configured to apply to the feature map, according to the target style, a first preset number of first convolution operations and a first preset number of real-time normalizations.
Fig. 3 is a structural diagram of the target processing unit provided by an embodiment of this application. As shown in Fig. 3, the target processing unit includes alternately arranged first convolution operation layers and real-time normalization layers, where the two kinds of layers have the same number of layers, both equal to the first preset number.
In this embodiment, the mobile terminal inputs the encoded feature map into the target processing unit. After receiving the feature map, the target processing unit inputs it into the first first-convolution layer, which applies the first convolution operation and passes the result to the first real-time normalization layer; that layer normalizes the input data in real time and passes its output to the next first-convolution layer. This repeats until the first preset number of first convolution operations and the first preset number of real-time normalizations have been applied. The mobile terminal takes the output of the last real-time normalization layer as the processed feature map.
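The alternation in Fig. 3 can be sketched as a simple loop over paired layers. The layer callables here are hypothetical stand-ins; only the structure (equal counts, strict alternation, last normalization output returned) comes from the description above.

```python
import numpy as np

def run_processing_unit(fmap, conv_layers, norm_layers):
    """Sketch of Fig. 3's alternation: the i-th first-convolution layer
    feeds the i-th real-time normalization layer, repeated the first
    preset number of times."""
    # The two kinds of layers must have the same number of layers.
    assert len(conv_layers) == len(norm_layers)
    x = fmap
    for conv, norm in zip(conv_layers, norm_layers):
        x = norm(conv(x))  # convolution, then real-time normalization
    return x  # output of the last real-time normalization layer
```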
In a specific embodiment, the coding unit outputs N feature maps. In the target processing unit, the first first-convolution layer has N input channels, each corresponding to one feature map, and M output channels, each corresponding to one convolved feature map. The first real-time normalization layer connected to it has M input channels, each corresponding to one convolved feature map, and P output channels, each corresponding to one real-time-normalized feature map. N, M and P are integers. In this way, the number of input channels of each first-convolution layer equals the number of feature maps passed to it, and likewise for each real-time normalization layer, ensuring that the feature maps are transmitted through every first-convolution layer and every real-time normalization layer.
In this embodiment, in each processing unit the first convolution operation layers and the real-time normalization layers have the same number of layers, the first preset number; that is, each processing unit applies to its input the first preset number of first convolution operations and the first preset number of real-time normalizations. The first preset number is a value determined while training the processing unit, and its size determines the processing accuracy and processing speed of the unit. The first preset number may be set in the following way:
(1) obtain a training sample image, and call the coding unit in the image processing model to encode it, obtaining the sample feature map corresponding to the training sample image;
(2) apply the first convolution operation and the real-time normalization to the sample feature map in turn, obtaining a first processing result;
(3) according to a preset loss function, calculate the loss difference between the first processing result and a preset processing result, where the preset loss function is a linear combination of a content loss function, a style loss function and a total-variance loss function;
(4) repeat the above steps of first convolution operation, real-time normalization and loss-difference calculation until the loss difference calculated this time and the loss difference calculated the previous time satisfy a preset size requirement;
(5) determine the number of repetitions of the first convolution operation, real-time normalization and loss-difference calculation steps as the first preset number.
In action (1), the training sample image is obtained; it may be obtained by manual collection. Then the coding unit in the image processing model is called to encode the training sample image, obtaining its corresponding sample feature map.
In action (2), the first convolution operation and the real-time normalization are applied to the sample feature map in turn, obtaining the first processing result; that is, the sample feature map is processed using the first convolution operation and real-time normalization of the processing unit, and the result can be regarded as the processing result obtained by simulating the processing unit's processing.
In action (3), the preset processing result may be the result of processing the sample feature map with a VGG16 model. The preset loss function is:
L_perceptual = α·L_style + β·L_content + γ·L_tv
where L_perceptual is the preset loss function, L_style is the style loss function (computed with the VGG16 network), L_content is the content loss function, L_tv is the total-variance loss function, and α, β, γ are the weights of the respective functions, parameters controlling the degree of stylization of the output image; their values can be set empirically.
In the above formula, the content loss, written out consistently with the description that follows, is
L_content = (1/M) Σ_k Σ_{i,j} (P^l_{k,i,j} - F^l_{k,i,j})²
where P and F denote the first processing result and the preset processing result respectively, l denotes the number of the current operation, k is the feature-map index, the subtraction is element-wise, i and j are matrix coordinates, and M is determined empirically; the value is the sum of the Euclidean distances between all feature maps of P and F at the l-th operation.
In the above formula, the style loss, written out consistently with the description that follows, is
L_style = Σ_l w_l (1/N) Σ_{i,j} (G^l_P(i, j) - G^l_F(i, j))²
where P and F denote the first processing result and the preset processing result respectively, l denotes the number of the current operation, and w_l is the weight parameter of the feature loss at the l-th operation (generally equal for every operation). G^l is the Gram matrix formed by summing element-wise products of the feature maps over the feature-map index k; the generated Gram matrices are a kind of non-centered covariance matrix. i and j are matrix coordinates, and N is determined empirically.
In the above formula, the total-variance loss is
L_tv = Σ_{i,j} ((x_{i,j+1} - x_{i,j})² + (x_{i+1,j} - x_{i,j})²)^(β/2)
where (x_{i,j+1} - x_{i,j})² and (x_{i+1,j} - x_{i,j})² represent the horizontal and vertical gradients of the feature image respectively, i and j are matrix coordinates, x_{i,j} is the value at coordinate (i, j), and β is a coefficient, usually taken as 1.
Adding the total-variance loss function to the preset loss function improves the spatial smoothness of the stylized picture, i.e. of the image output by the processing unit.
Through action (3) above, the loss difference between the first processing result and the preset processing result can be calculated according to the preset loss function; this loss difference is the sum of the results of the content loss function, style loss function and total-variance loss function, each multiplied by its respective weight.
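A minimal NumPy sketch of this weighted combination, treating the processing result and the preset result as single stacks of feature maps. The normalizers (array sizes standing in for the empirically chosen M and N), the Gram construction, and the single-layer treatment are simplifying assumptions, not the patent's exact definitions.

```python
import numpy as np

def gram(feats):
    # feats: (k, h, w) feature maps -> (k, k) non-centered Gram matrix,
    # summing element-wise products over spatial positions
    k = feats.shape[0]
    flat = feats.reshape(k, -1)
    return flat @ flat.T

def perceptual_loss(p, f, alpha=1.0, beta=1.0, gamma=1.0, tv_beta=1.0):
    """Sketch of L_perceptual = alpha*L_style + beta*L_content + gamma*L_tv
    for p (first processing result) and f (preset processing result),
    both shaped (k, h, w)."""
    content = np.sum((p - f) ** 2) / p.size          # content term
    g_p, g_f = gram(p), gram(f)
    style = np.sum((g_p - g_f) ** 2) / g_p.size      # style (Gram) term
    dh = p[:, :, 1:] - p[:, :, :-1]                  # horizontal gradients
    dv = p[:, 1:, :] - p[:, :-1, :]                  # vertical gradients
    tv = np.sum((dh[:, :-1, :] ** 2 + dv[:, :, :-1] ** 2) ** (tv_beta / 2.0))
    return alpha * style + beta * content + gamma * tv
```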
In action (4) above, the steps of first convolution operation, real-time normalization and loss-difference calculation are repeated until the loss difference calculated this time and the loss difference calculated the previous time satisfy the preset size requirement.
Specifically, after a loss difference has been calculated, the first convolution operation and the real-time normalization are applied in turn to the first processing result again, obtaining a second processing result, and the loss difference between the second processing result and the preset processing result is calculated; then the first convolution operation and the real-time normalization are applied in turn to the second processing result, and so on, until the loss difference calculated this time and the one calculated the previous time satisfy the preset size requirement: for example, the two are equal in size, or their difference lies within a preset range.
When the loss difference calculated this time and the one calculated the previous time satisfy the preset size requirement, it shows that after repeated first convolution operations and real-time normalizations of the sample feature map, the loss difference between the processing result and the preset processing result has stabilized, i.e. the training of the processing unit is complete.
In action (5) above, the number of times the first convolution operation, real-time normalization and loss-difference calculation steps were repeated, i.e. the number of times those steps were performed, is determined as the first preset number. For example, if after performing those steps 5 times the loss difference calculated this time and the one calculated the previous time satisfy the preset size requirement, the first preset number is set to 5. Through this embodiment, it can be ensured that when the processing unit processes a feature map, the loss difference of the result obtained after the multiple calculation passes has stabilized relative to the preset processing result, yielding an accurate processing result.
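Actions (2) through (5) can be sketched as a stopping loop: keep applying one convolution-plus-normalization pass, measure the loss, and stop once consecutive loss differences are close. The `step` and `loss_fn` callables, the tolerance and the iteration cap are hypothetical stand-ins for the patent's layers, loss difference and "preset size requirement".

```python
import numpy as np

def determine_first_preset_number(features, step, loss_fn, tol=1e-3, max_iters=100):
    """Return the number of conv + real-time-normalization passes after
    which the loss has stabilized: this count is the first preset number."""
    prev_loss = None
    x = features
    for n in range(1, max_iters + 1):
        x = step(x)        # one first-convolution + real-time-normalization pass
        loss = loss_fn(x)  # loss difference vs. the preset processing result
        if prev_loss is not None and abs(loss - prev_loss) <= tol:
            return n       # consecutive losses agree: training is complete
        prev_loss = loss
    return max_iters
```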
In step S108 above, calling the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed specifically includes:
(1) inputting the processed feature map into the decoding unit;
(2) determining the processing result of the decoding unit as the stylized image corresponding to the image to be processed;
wherein the decoding unit is configured to amplify the processed feature map by nearest-neighbor sampling and to apply a second convolution operation to the amplified feature map, generating an intermediate image.
Specifically, the processed feature map is input into the decoding unit, and the decoding unit's processing result is determined as the stylized image corresponding to the image to be processed; the stylized image combines the content of the image to be processed with the target style.
Decoding unit is primarily based on the mode for facing sampling recently to the spy after processing after the characteristic pattern after receiving processing Sign figure is amplified processing, and carries out the second convolution algorithm to amplified characteristic pattern.Fig. 4 provides for one embodiment of the application Decoding unit structure diagram, as shown in figure 4, decoding unit includes two layers of enhanced processing layer, two layers of second convolution algorithms Layer, enhanced processing layer and the second convolution operation layer are arranged alternately.
In Fig. 4, decoding unit is after the characteristic pattern after receiving processing, using first layer enhanced processing layer, after processing Characteristic pattern be amplified processing, the second convolution is carried out to amplification result using first layer the second convolution operation layer, it is then sharp again Convolution results are amplified with second layer enhanced processing layer, then recycle the second layer the second convolution operation layer to amplifying result The second convolution is carried out, so far, obtains intermediate image.
In other embodiments, the enhanced processing layer and the second convolution algorithm of other numbers of plies can be set in decoding unit Layer, particular number can be determined according to scene demand, ensure that enhanced processing layer and the second convolution operation layer are arranged alternately identical layer Number.
In the present embodiment, decoding unit substitutes common warp by the way of arest neighbors amplifier and convolution are combined Product, while ensureing that the size of stylized image of output is identical with the size of pending image, can avoid the wind of output Image of formatting has gridiron pattern effect, and gridiron pattern effect refers to that the border in image there are some region is unsmooth, with other Region could not smoothly connect.
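The alternating amplification and convolution stages described above can be illustrated with a small NumPy sketch; the feature-map size, two-stage structure and smoothing kernel below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def nearest_neighbor_upsample(feat, scale=2):
    """Enlarge a feature map (H, W) by repeating each value, i.e. nearest-neighbor sampling."""
    return np.repeat(np.repeat(feat, scale, axis=0), scale, axis=1)

def conv2d_same(feat, kernel):
    """Naive 'same'-padded 2-D convolution of a single-channel map (edge padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feat, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(feat, dtype=float)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Two upsample+convolution stages, mirroring the alternating layers of Fig. 4.
feat = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.full((3, 3), 1.0 / 9.0)  # arbitrary smoothing kernel for illustration
stage1 = conv2d_same(nearest_neighbor_upsample(feat), kernel)
stage2 = conv2d_same(nearest_neighbor_upsample(stage1), kernel)
print(stage2.shape)  # (16, 16): each stage doubles the spatial size
```

Because every output pixel is convolved from an evenly enlarged neighborhood, adjacent pixels share overlapping support, which is why this upsample-then-convolve scheme tends to avoid the checkerboard artifacts associated with strided deconvolution.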
In the present embodiment, after the decoding unit generates the intermediate image, the average value and variance of the pixel values of each pixel block of the intermediate image are also determined, and the pixel values of each pixel block are normalized according to that average value and variance.
The specific adjustment is A = (B − P) / Q, where A is the adjusted pixel value of a pixel block, B is the pixel value of that block before adjustment, P is the above average value, and Q is the above variance. In the present embodiment, the same method may also be used to normalize the brightness and contrast values of the intermediate image.
In the present embodiment, the pixel-value normalization performed by the decoding unit on the intermediate image avoids black blocks appearing in the output stylized image.
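A minimal sketch of this per-block adjustment, taking the formula A = (B − P) / Q literally (dividing by the variance Q rather than the standard deviation) and adding a small epsilon as an assumption to guard against division by zero; the block size is arbitrary:

```python
import numpy as np

def normalize_blocks(image, block=8, eps=1e-5):
    """Normalize each pixel block by its own mean P and variance Q: A = (B - P) / Q."""
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i + block, j:j + block]
            p, q = tile.mean(), tile.var()
            out[i:i + block, j:j + block] = (tile - p) / (q + eps)
    return out

img = np.random.default_rng(0).uniform(0, 255, size=(16, 16))
normed = normalize_blocks(img)
print(abs(normed[:8, :8].mean()) < 1e-6)  # each block is now centred on zero: True
```

The same routine could be applied to brightness or contrast channels, as the passage suggests; only the array passed in would change.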
In one embodiment, the decoding unit outputs the intermediate image as the processing result, so the stylized image is the intermediate image.
In another embodiment, the decoding unit outputs the intermediate image after pixel-value normalization as the processing result, so the stylized image is the pixel-value-normalized intermediate image.
In yet another embodiment, the decoding unit outputs the intermediate image after normalization of its pixel values, brightness values and contrast values as the processing result, so the stylized image is the intermediate image after those normalization adjustments.
In the present embodiment, the first convolution operation used by the above target processing unit and the second convolution operation used by the decoding unit may each include depthwise separable convolutions (Depth Wise Separable Convolutions) and pointwise separable convolutions (Point Wise Separable Convolutions). Replacing conventional convolutions with depthwise separable and pointwise separable convolutions greatly improves convolution speed and greatly reduces the size of the image processing model while yielding the same feature representation; in general, with a convolution kernel of size 9, the computation amount and size of the model can be reduced to about one ninth of the original.
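The claimed size reduction can be checked by counting parameters; the kernel size and channel counts below are arbitrary illustrative values, not ones taken from the patent:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Depthwise k x k convolution per input channel, then a 1x1 pointwise convolution."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

std = conv_params(3, 128, 128)       # 9 * 128 * 128 = 147456 parameters
sep = separable_params(3, 128, 128)  # 1152 + 16384  = 17536 parameters
ratio = std / sep                    # about 8.4
```

With a 3x3 kernel (9 weights) the reduction approaches 1/9 as the channel count grows, consistent with the "one ninth" figure for a kernel of size 9 quoted in the passage.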
With the image processing method in the embodiments of the application, different regions of the image to be processed can also be stylized differently. Specifically, in the above step S104, determining the target style corresponding to the image to be processed is specifically: dividing the image to be processed into multiple image units, and determining the target style corresponding to each image unit separately.
Specifically, the user may segment the image to be processed on the mobile terminal into multiple image units. After receiving the user's image segmentation information, the mobile terminal divides the image to be processed into multiple image units according to that segmentation information, and determines the target style selected by the user for each image unit as the target style corresponding to that image unit.
Correspondingly, in the above step S106, selecting, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style, and calling the target processing unit to stylize the feature map according to the target style to obtain the processed feature map, includes:
(1) selecting, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit;
(2) extracting, from the above feature map, the unit feature map corresponding to each image unit;
(3) calling each selected target processing unit to stylize the corresponding unit feature map according to the corresponding image style, to obtain the processed feature map.
Specifically, after dividing the image to be processed into multiple image units, the mobile terminal selects, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit; it also extracts, from the feature map of the image to be processed, the unit feature map corresponding to each image unit; finally, it calls each selected target processing unit to stylize the corresponding unit feature map according to the corresponding image style, obtaining the processed feature map. The process by which a target processing unit stylizes its unit feature map is the same as described above for step S106 and is not repeated here.
It can be seen that, since the processing units in the image processing model share the coding unit, the method in the embodiments of the application can apply different stylizations to different regions of the same image to be processed, so that different parts of the image exhibit different style effects, enriching the diversity of image processing.
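The per-region dispatch just described — one shared feature map, a different processing unit applied to each image unit — can be sketched as follows; the dictionary of lambdas stands in for the learned processing units and is purely illustrative:

```python
import numpy as np

# Hypothetical stand-ins for the per-style processing units (real units are learned convolutions).
STYLE_UNITS = {
    "sketch": lambda f: -f,
    "oil":    lambda f: f * 2.0,
}

def stylize_regions(feature_map, region_styles):
    """Apply a different style unit to each rectangular region of one shared feature map.

    region_styles: list of ((r0, r1, c0, c1), style_name) pairs.
    """
    out = feature_map.astype(float).copy()
    for (r0, r1, c0, c1), style in region_styles:
        unit = STYLE_UNITS[style]                    # pick the target processing unit
        out[r0:r1, c0:c1] = unit(out[r0:r1, c0:c1])  # stylize that region only
    return out

feat = np.ones((4, 4))  # the feature map produced once by the shared coding unit
result = stylize_regions(feat, [((0, 4, 0, 2), "sketch"), ((0, 4, 2, 4), "oil")])
print(result[0, 0], result[0, 3])  # -1.0 2.0
```

The key property is that `feat` is computed once and sliced, never re-encoded per region, which mirrors the shared-encoder design.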
In summary, the embodiments of the application have at least the following beneficial effects:
(1) The coding process and the stylization process are decoupled in the image processing model, and the multiple processing units share the coding unit and the decoding unit, which greatly reduces the model volume and improves image processing speed, making the model well suited for use on mobile terminals.
(2) The image processing model includes multiple processing units, each corresponding to one image style. For the image style specified by the user, only the corresponding processing unit needs to be called, so a single image processing model can handle multiple image styles.
(3) When the user switches image styles, since the multiple processing units share one coding unit, the image does not need to be re-encoded; the feature map obtained from the previous encoding can be stylized directly, greatly saving computation.
(4) Since the multiple processing units in the image processing model share one coding unit, different regions of the same image can be stylized differently, improving the diversity of stylization processing.
Corresponding to the above method, an embodiment of the application further provides an image processing apparatus. Fig. 5 is a schematic diagram of the modules of the image processing apparatus provided by an embodiment of the application. As shown in Fig. 5, the apparatus includes:
a coding unit calling module 51, configured to obtain an image to be processed, input the image to be processed to an image processing model, and call the coding unit in the image processing model to encode the image to be processed and obtain the feature map corresponding to the image to be processed;
an image style determining module 52, configured to determine the target style corresponding to the image to be processed according to a trigger operation of the user, wherein the target style is the image style triggered by the user;
a processing unit calling module 53, configured to select in real time, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style, and call the target processing unit to stylize the feature map according to the target style to obtain a processed feature map, wherein each processing unit corresponds to one image style; and
a decoding unit calling module 54, configured to call the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed.
Optionally, the processing unit calling module 53 is specifically configured to:
input the feature map to the target processing unit in the image processing model; and
take the processing result of the target processing unit as the processed feature map;
wherein the target processing unit is configured to perform, on the feature map, a first preset number of first convolution operations and real-time normalization operations according to the target style.
Optionally, the apparatus further includes a training module configured to:
obtain a training sample image, call the coding unit in the image processing model to encode the training sample image, and obtain the sample feature map corresponding to the training sample image;
perform the first convolution operation and the real-time normalization on the sample feature map in sequence to obtain a first processing result;
calculate, according to a preset loss function, the loss difference between the first processing result and a preset processing result, wherein the preset loss function is a linear combination of a content loss function, a style loss function and a total variation loss function;
repeat the steps of the first convolution operation, the real-time normalization and the loss-difference calculation until the loss difference calculated this time and the loss difference calculated the previous time meet a preset magnitude requirement; and
determine the number of repetitions of the steps of the first convolution operation, the real-time normalization and the loss-difference calculation as the first preset number of times.
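The stopping rule above — iterate the convolution/normalization/loss steps until two successive loss values are close, then record the iteration count as the first preset number — can be sketched with a toy scalar loss standing in for the content/style/total-variation combination; every function and constant here is an illustrative assumption:

```python
def combined_loss(w):
    """Toy stand-in for the linear combination of content, style and total-variation losses."""
    content = (w - 3.0) ** 2
    style = 0.5 * (w - 3.0) ** 2
    total_variation = 0.1 * abs(w)
    return content + style + total_variation

# Repeat until successive losses differ by less than a preset threshold,
# then keep the iteration count as the "first preset number of times".
w, lr, prev_loss, preset_times = 0.0, 0.05, None, 0
for step in range(1, 1000):
    grad = (combined_loss(w + 1e-4) - combined_loss(w - 1e-4)) / 2e-4  # numeric gradient
    w -= lr * grad
    loss = combined_loss(w)
    if prev_loss is not None and abs(prev_loss - loss) < 1e-6:
        preset_times = step
        break
    prev_loss = loss
print(preset_times > 0)  # True: the loop converged and recorded a count
```

At inference time only `preset_times` repetitions of the convolution/normalization steps would be run, which is how the training procedure fixes the depth of each processing unit.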
Optionally, the decoding unit calling module 54 is specifically configured to:
input the processed feature map to the decoding unit; and
determine the processing result of the decoding unit as the stylized image corresponding to the image to be processed;
wherein the decoding unit is configured to enlarge the processed feature map by nearest-neighbor sampling, perform a second convolution operation on the enlarged feature map, and generate an intermediate image.
Optionally, the decoding unit calling module 54 is further specifically configured to:
after the intermediate image is generated, determine the average value and variance of the pixel values of each pixel block of the intermediate image; and
normalize the pixel values of each pixel block of the intermediate image according to the average value and the variance.
Optionally,
the image style determining module 52 is specifically configured to:
divide the image to be processed into multiple image units, and determine the target style corresponding to each image unit separately;
and the processing unit calling module 53 is specifically configured to:
select, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit;
extract, from the feature map, the unit feature map corresponding to each image unit; and
call each selected target processing unit to stylize the corresponding unit feature map according to the corresponding image style, to obtain the processed feature map.
Optionally, the first convolution operation includes a depthwise separable convolution operation and a pointwise separable convolution operation.
With the embodiments of the application, the image to be processed can be stylized according to the target style specified by the user to obtain the corresponding stylized image, thereby adjusting the display effect of the image so that it meets the user's display requirements. Moreover, the image processing model called in the embodiments of the application includes a coding unit, multiple processing units and a decoding unit, each processing unit corresponding to one image style, so the processing units for the various image styles share the coding unit. After the image to be processed has been stylized once, if the user switches image styles, the image does not need to be re-encoded; the previously encoded feature map can be stylized again directly, so that in the style-switching scenario the repeated encoding process is eliminated and image processing speed is improved. Furthermore, because the multiple processing units share one coding unit and one decoding unit, compared with a structure in which each image style is given its own coding unit and decoding unit, the repeated coding and decoding units are eliminated, reducing the volume and data amount of the image processing model and further improving image processing speed.
Further, based on the above method, an embodiment of the application also provides an image processing device. Fig. 6 is a structural diagram of the image processing device provided by an embodiment of the application.
As shown in Fig. 6, the image processing device may vary considerably in configuration or performance and may include one or more processors 701 and a memory 702, in which one or more application programs or data may be stored. The memory 702 may provide transient or persistent storage. An application program stored in the memory 702 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the image processing device. Further, the processor 701 may be configured to communicate with the memory 702 and execute, on the image processing device, the series of computer-executable instructions in the memory 702. The image processing device may also include one or more power supplies 703, one or more wired or wireless network interfaces 704, one or more input/output interfaces 705, one or more keyboards 706, and the like.
In a specific embodiment, the image processing device includes a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements each process of the above image processing method embodiments, specifically including the following steps:
obtaining an image to be processed, inputting the image to be processed to an image processing model, and calling the coding unit in the image processing model to encode the image to be processed and obtain the feature map corresponding to the image to be processed;
determining the target style corresponding to the image to be processed according to a trigger operation of the user, wherein the target style is the image style triggered by the user;
selecting in real time, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style, and calling the target processing unit to stylize the feature map according to the target style to obtain a processed feature map, wherein each processing unit corresponds to one image style; and
calling the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed.
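The encode-once, stylize-many behaviour these steps describe can be sketched as follows; the arithmetic stand-ins for the coding unit, processing units and decoding unit are purely illustrative:

```python
import numpy as np

def encode(image):
    """Stand-in for the shared coding unit: run once per image."""
    return image * 0.5

# Hypothetical per-style processing units.
STYLE_UNITS = {"sketch": lambda f: -f, "oil": lambda f: f + 1.0}

def decode(feature_map):
    """Stand-in for the shared decoding unit."""
    return feature_map * 2.0

image = np.ones((2, 2))
feature = encode(image)  # encoded exactly once ...

first = decode(STYLE_UNITS["sketch"](feature))
second = decode(STYLE_UNITS["oil"](feature))  # ... and reused when the user switches style
print(first[0, 0], second[0, 0])  # -1.0 3.0
```

Note that switching from "sketch" to "oil" reuses `feature` without calling `encode` again, which is exactly the saving the shared-encoder architecture claims.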
Optionally, when the computer-executable instructions are executed, calling the target processing unit to stylize the feature map according to the target style to obtain the processed feature map includes:
inputting the feature map to the target processing unit in the image processing model; and
taking the processing result of the target processing unit as the processed feature map;
wherein the target processing unit is configured to perform, on the feature map, a first preset number of first convolution operations and real-time normalization operations according to the target style.
Optionally, when the computer-executable instructions are executed, the steps further include:
obtaining a training sample image, calling the coding unit in the image processing model to encode the training sample image, and obtaining the sample feature map corresponding to the training sample image;
performing the first convolution operation and the real-time normalization on the sample feature map in sequence to obtain a first processing result;
calculating, according to a preset loss function, the loss difference between the first processing result and a preset processing result, wherein the preset loss function is a linear combination of a content loss function, a style loss function and a total variation loss function;
repeating the steps of the first convolution operation, the real-time normalization and the loss-difference calculation until the loss difference calculated this time and the loss difference calculated the previous time meet a preset magnitude requirement; and
determining the number of repetitions of those steps as the first preset number of times.
Optionally, when the computer-executable instructions are executed, calling the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed includes:
inputting the processed feature map to the decoding unit; and
determining the processing result of the decoding unit as the stylized image corresponding to the image to be processed;
wherein the decoding unit is configured to enlarge the processed feature map by nearest-neighbor sampling, perform a second convolution operation on the enlarged feature map, and generate an intermediate image.
Optionally, when the computer-executable instructions are executed, after the decoding unit generates the intermediate image, the steps further include:
determining the average value and variance of the pixel values of each pixel block of the intermediate image; and
normalizing the pixel values of each pixel block of the intermediate image according to the average value and the variance.
Optionally, when the computer-executable instructions are executed,
determining the target style corresponding to the image to be processed includes:
dividing the image to be processed into multiple image units, and determining the target style corresponding to each image unit separately;
and calling the target processing unit to stylize the feature map according to the target style includes:
selecting, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit;
extracting, from the feature map, the unit feature map corresponding to each image unit; and
calling each selected target processing unit to stylize the corresponding unit feature map according to the corresponding image style, to obtain the processed feature map.
Optionally, when the computer-executable instructions are executed, the first convolution operation includes a depthwise separable convolution operation and a pointwise separable convolution operation.
With the embodiments of the application, the image to be processed can be stylized according to the target style specified by the user to obtain the corresponding stylized image, thereby adjusting the display effect of the image so that it meets the user's display requirements. Moreover, the image processing model called in the embodiments of the application includes a coding unit, multiple processing units and a decoding unit, each processing unit corresponding to one image style, so the processing units for the various image styles share the coding unit. After the image to be processed has been stylized once, if the user switches image styles, the image does not need to be re-encoded; the previously encoded feature map can be stylized again directly, so that in the style-switching scenario the repeated encoding process is eliminated and image processing speed is improved. Furthermore, because the multiple processing units share one coding unit and one decoding unit, compared with a structure in which each image style is given its own coding unit and decoding unit, the repeated coding and decoding units are eliminated, reducing the volume and data amount of the image processing model and further improving image processing speed.
Further, an embodiment of the application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the above image processing method embodiments and achieves the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple, and relevant parts may refer to the description of the method embodiments.
The above are merely embodiments of the application and are not intended to limit the application. Various modifications and variations of the application will occur to those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the application shall fall within the scope of the claims of the application.

Claims (13)

  1. An image processing method, applied to a mobile terminal, comprising:
    obtaining an image to be processed, inputting the image to be processed to an image processing model, and calling a coding unit in the image processing model to encode the image to be processed and obtain a feature map corresponding to the image to be processed;
    determining a target style corresponding to the image to be processed according to a trigger operation of a user, wherein the target style is an image style triggered by the user;
    selecting in real time, among multiple processing units included in the image processing model, a target processing unit corresponding to the target style, and calling the target processing unit to stylize the feature map according to the target style to obtain a processed feature map, wherein each processing unit corresponds to one image style; and
    calling a decoding unit in the image processing model to decode the processed feature map and obtain a stylized image corresponding to the image to be processed.
  2. The method according to claim 1, wherein calling the target processing unit to stylize the feature map according to the target style to obtain the processed feature map comprises:
    inputting the feature map to the target processing unit in the image processing model; and
    taking a processing result of the target processing unit as the processed feature map;
    wherein the target processing unit is configured to perform, on the feature map, a first preset number of first convolution operations and real-time normalization operations according to the target style.
  3. The method according to claim 2, further comprising:
    obtaining a training sample image, calling the coding unit in the image processing model to encode the training sample image, and obtaining a sample feature map corresponding to the training sample image;
    performing the first convolution operation and the real-time normalization on the sample feature map in sequence to obtain a first processing result;
    calculating, according to a preset loss function, a loss difference between the first processing result and a preset processing result, wherein the preset loss function is a linear combination of a content loss function, a style loss function and a total variation loss function;
    repeating the steps of the first convolution operation, the real-time normalization and the loss-difference calculation until the loss difference calculated this time and the loss difference calculated the previous time meet a preset magnitude requirement; and
    determining the number of repetitions of those steps as the first preset number.
  4. The method according to claim 1, wherein calling the decoding unit in the image processing model to decode the processed feature map and obtain the stylized image corresponding to the image to be processed comprises:
    inputting the processed feature map to the decoding unit; and
    determining a processing result of the decoding unit as the stylized image corresponding to the image to be processed;
    wherein the decoding unit is configured to enlarge the processed feature map by nearest-neighbor sampling, perform a second convolution operation on the enlarged feature map, and generate an intermediate image.
  5. The method according to claim 4, wherein, after the intermediate image is generated, the decoding unit is further configured to:
    determine an average value and a variance of pixel values of each pixel block of the intermediate image; and
    normalize the pixel values of each pixel block of the intermediate image according to the average value and the variance.
  6. The method according to claim 1, wherein determining the target style corresponding to the image to be processed comprises:
    dividing the image to be processed into multiple image units, and determining the target style corresponding to each image unit separately;
    and calling the target processing unit to stylize the feature map according to the target style comprises:
    selecting, among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit;
    extracting, from the feature map, a unit feature map corresponding to each image unit; and
    calling each selected target processing unit to stylize the corresponding unit feature map according to the corresponding image style, to obtain the processed feature map.
  7. The method according to claim 2 or 3, wherein the first convolution operation comprises a depthwise separable convolution operation and a pointwise separable convolution operation.
  8. A kind of 8. image processing apparatus, it is characterised in that applied to mobile terminal, including:
    Coding unit calling module, for obtaining pending image, the pending image is inputted to image processing model, is adjusted The coding unit in model is handled with described image, the pending image is encoded, obtains the pending image pair The characteristic pattern answered;
    Image style determining module, for determining the corresponding target style of the pending image according to the trigger action of user; Wherein, the target style is the image style of user's triggering;
    Processing unit calling module, described in multiple processing units that described image processing model includes, choosing in real time The corresponding object processing unit of target style, and call the object processing unit according to the target style to the characteristic pattern Carry out stylized processing, the characteristic pattern after being handled;Wherein, each processing unit corresponds to a kind of image style;
    Decoding unit calling module, for calling the decoding unit in described image processing model, to the feature after the processing Figure is decoded, and obtains the corresponding stylized image of the pending image.
  9. 9. device according to claim 8, it is characterised in that the processing unit calling module is specifically used for:
    The characteristic pattern is inputted to described image to the object processing unit handled in model;
    By the handling result of the object processing unit, as the characteristic pattern after processing;
    Wherein, the object processing unit is used for, and according to the target style, the characteristic pattern is carried out respectively first default time The first several convolution algorithms and real-time normalized.
  10. The device according to claim 9, characterized in that the device further comprises a training module configured to:
    obtain a training sample image, and call the coding unit in the image processing model to encode the training sample image, obtaining a sample feature map corresponding to the training sample image;
    perform the first convolution operation and the real-time normalization on the sample feature map in sequence, obtaining a first processing result;
    calculate, according to a preset loss function, the loss difference between the first processing result and a preset processing result; wherein the preset loss function is a linear combination of a content loss function, a style loss function and a total variation loss function;
    repeat the steps of the first convolution operation, the real-time normalization and the loss difference calculation until the loss difference calculated in the current iteration and the loss difference calculated in the previous iteration satisfy a preset magnitude requirement; and
    determine the number of repetitions of the steps of the first convolution operation, the real-time normalization and the loss difference calculation as the first preset number of times.
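Claim 10 states only that the preset loss function is a linear combination of a content loss, a style loss and a total variation loss. The NumPy sketch below shows one common way such a combination is computed; the Gram-matrix style term, the mean-squared content term and the weight values are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, pixels) feature array, used for style loss."""
    return features @ features.T / features.shape[1]

def total_variation(img):
    """Total-variation term: penalizes abrupt neighbouring-pixel differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def combined_loss(out_feat, content_feat, style_feat, out_img,
                  w_content=1.0, w_style=10.0, w_tv=0.01):
    """Linear combination of content, style and total-variation losses."""
    content = np.mean((out_feat - content_feat) ** 2)
    style = np.mean((gram(out_feat) - gram(style_feat)) ** 2)
    tv = total_variation(out_img)
    return w_content * content + w_style * style + w_tv * tv
```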
  11. The device according to claim 8, characterized in that the decoding unit calling module is specifically configured to:
    input the processed feature map into the decoding unit; and
    determine the processing result of the decoding unit as the stylized image corresponding to the image to be processed;
    wherein the decoding unit is configured to upscale the processed feature map by nearest-neighbor sampling, and to perform a second convolution operation on the upscaled feature map to generate an intermediate image.
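Claim 11's decoding unit upscales by nearest-neighbor sampling and then applies a second convolution. The NumPy sketch below shows that data flow only: a fixed 3x3 mean filter stands in for the learned second convolution, and all names are illustrative rather than the patent's.

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbour upsampling: repeat each pixel along both axes."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def box_filter(x):
    """Stand-in for the 'second convolution': 3x3 mean filter, edge-padded."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def decode(feature_map, factor=2):
    """Decoder sketch: nearest-neighbour upsample, then convolve to an image."""
    return box_filter(nearest_upsample(feature_map, factor))
```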
  12. The device according to claim 11, characterized in that the decoding unit calling module is further specifically configured to:
    after the intermediate image is generated, determine the mean and the variance of the pixel values of each pixel block of the intermediate image; and
    perform, according to the mean and the variance, normalization adjustment on the pixel values corresponding to each pixel block of the intermediate image.
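Claim 12 adjusts each pixel block of the intermediate image using that block's own mean and variance. A minimal sketch, assuming square non-overlapping blocks on a single-channel image; the block size and epsilon are hypothetical choices not specified in the claim.

```python
import numpy as np

def normalize_blocks(img, block=8, eps=1e-5):
    """Normalize each pixel block of the image by its own mean and variance."""
    out = img.astype(float)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = out[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = (
                (patch - patch.mean()) / np.sqrt(patch.var() + eps))
    return out
```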
  13. The device according to claim 8, characterized in that
    the image style determining module is specifically configured to:
    divide the image to be processed into multiple image units, and determine the target style corresponding to each image unit respectively;
    and the processing unit calling module is specifically configured to:
    select, from among the multiple processing units included in the image processing model, the target processing unit corresponding to the target style of each image unit; and
    extract from the feature map the unit feature map corresponding to each image unit, and call each selected target processing unit to perform stylization processing on the corresponding unit feature map according to the corresponding image style, obtaining the processed feature map.
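Claim 13 routes the feature-map region of each image unit to the processing unit selected for that unit's target style. The sketch below models processing units as plain callables keyed by a style name, a deliberate simplification of the patent's per-style network units; the region encoding and style names are hypothetical.

```python
import numpy as np

def stylize_by_unit(feature_map, unit_styles, processing_units):
    """Apply, to each image unit (a rectangular region of the feature map),
    the processing unit registered for that unit's target style.

    unit_styles: list of ((row0, row1, col0, col1), style_name) pairs.
    processing_units: dict mapping style_name -> callable on an array region.
    """
    out = feature_map.astype(float)
    for (r0, r1, c0, c1), style in unit_styles:
        region = out[r0:r1, c0:c1]               # unit feature map
        out[r0:r1, c0:c1] = processing_units[style](region)
    return out
```

A usage example: with units `{"sketch": lambda x: -x, "oil": lambda x: x * 2.0}`, the top half of a feature map can be stylized one way and the bottom half another.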
CN201711455985.7A 2017-12-28 2017-12-28 Image processing method and device Active CN107948529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711455985.7A CN107948529B (en) 2017-12-28 2017-12-28 Image processing method and device


Publications (2)

Publication Number Publication Date
CN107948529A (en) 2018-04-20
CN107948529B (en) 2020-11-06

Family

ID=61940671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711455985.7A Active CN107948529B (en) 2017-12-28 2017-12-28 Image processing method and device

Country Status (1)

Country Link
CN (1) CN107948529B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090154841A1 * 2007-12-17 2009-06-18 Electronics And Telecommunications Research Institute Apparatus and method for transforming image in mobile device
CN106651766A * 2016-12-30 2017-05-10 深圳市唯特视科技有限公司 Image style transfer method based on deep convolutional neural network
CN106778928A * 2016-12-21 2017-05-31 广州华多网络科技有限公司 Image processing method and device
CN106847294A * 2017-01-17 2017-06-13 百度在线网络技术(北京)有限公司 Artificial-intelligence-based audio processing method and device
CN106886975A * 2016-11-29 2017-06-23 华南理工大学 Image stylization method capable of running in real time
CN107240085A * 2017-05-08 2017-10-10 广州智慧城市发展研究院 Image fusion method and system based on convolutional neural network model
CN107277615A * 2017-06-30 2017-10-20 北京奇虎科技有限公司 Live-streaming stylization processing method, device, computing device and storage medium
CN107369189A * 2017-07-21 2017-11-21 成都信息工程大学 Feature-loss-based medical image super-resolution reconstruction method
CN107464210A * 2017-07-06 2017-12-12 浙江工业大学 Image style transfer method based on generative adversarial network

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898556A * 2018-05-24 2018-11-27 麒麟合盛网络技术股份有限公司 Three-dimensional face image processing method and device
CN108985317A * 2018-05-25 2018-12-11 西安电子科技大学 Image classification method based on separable convolution and attention mechanism
CN108985317B * 2018-05-25 2022-03-01 西安电子科技大学 Image classification method based on separable convolution and attention mechanism
CN108846835A * 2018-05-31 2018-11-20 西安电子科技大学 Image change detection method based on depth separable convolutional network
CN108846835B * 2018-05-31 2020-04-14 西安电子科技大学 Image change detection method based on depth separable convolutional network
CN108776959B * 2018-07-10 2021-08-06 Oppo(重庆)智能科技有限公司 Image processing method and device and terminal equipment
CN108776959A * 2018-07-10 2018-11-09 Oppo(重庆)智能科技有限公司 Image processing method and device, and terminal device
CN109064428A * 2018-08-01 2018-12-21 Oppo广东移动通信有限公司 Image denoising processing method, terminal device and computer-readable storage medium
CN111091593A (en) * 2018-10-24 2020-05-01 深圳云天励飞技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111091593B (en) * 2018-10-24 2024-03-22 深圳云天励飞技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111124398A (en) * 2018-10-31 2020-05-08 中国移动通信集团重庆有限公司 User interface generation method, device, equipment and storage medium
CN109510943A (en) * 2018-12-17 2019-03-22 三星电子(中国)研发中心 Method and apparatus for shooting image
CN111383289A (en) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 Image processing method, image processing device, terminal equipment and computer readable storage medium
CN111325252A (en) * 2020-02-12 2020-06-23 腾讯科技(深圳)有限公司 Image processing method, apparatus, device, and medium
CN111784565B (en) * 2020-07-01 2021-10-29 北京字节跳动网络技术有限公司 Image processing method, migration model training method, device, medium and equipment
CN111784565A (en) * 2020-07-01 2020-10-16 北京字节跳动网络技术有限公司 Image processing method, migration model training method, device, medium and equipment
CN112241941A (en) * 2020-10-20 2021-01-19 北京字跳网络技术有限公司 Method, device, equipment and computer readable medium for acquiring image
CN112241941B (en) * 2020-10-20 2024-03-22 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for acquiring image
CN113052757A (en) * 2021-03-08 2021-06-29 Oppo广东移动通信有限公司 Image processing method, device, terminal and storage medium
CN114422682A (en) * 2022-01-28 2022-04-29 安谋科技(中国)有限公司 Photographing method, electronic device, and readable storage medium
CN114422682B (en) * 2022-01-28 2024-02-02 安谋科技(中国)有限公司 Shooting method, electronic device and readable storage medium

Also Published As

Publication number Publication date
CN107948529B (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN107948529A (en) Image processing method and device
He et al. Conditional sequential modulation for efficient global image retouching
CN109191558B (en) Image polishing method and device
CN109102483B (en) Image enhancement model training method and device, electronic equipment and readable storage medium
CN109584179A Convolutional neural network model generation method and image quality optimization method
US20190294931A1 (en) Systems and Methods for Generative Ensemble Networks
CN106778928A (en) Image processing method and device
CN110598781A (en) Image processing method, image processing device, electronic equipment and storage medium
CN107145902B Image processing method and device based on convolutional neural network, and mobile terminal
CN109886891B (en) Image restoration method and device, electronic equipment and storage medium
CN110751649B (en) Video quality evaluation method and device, electronic equipment and storage medium
CN111835983B (en) Multi-exposure-image high-dynamic-range imaging method and system based on generation countermeasure network
CN109978764A Image processing method and computing device
Kim et al. Multiple level feature-based universal blind image quality assessment model
US20220172322A1 (en) High resolution real-time artistic style transfer pipeline
CN109919874A (en) Image processing method, device, computer equipment and storage medium
CN107424184A Image processing method and device based on convolutional neural network, and mobile terminal
Jiang et al. Lightweight super-resolution using deep neural learning
CN109685750A Image enhancement method and computing device
CN113822830A (en) Multi-exposure image fusion method based on depth perception enhancement
CN110991627A (en) Information processing apparatus, information processing method, and computer program
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN106778550B (en) Face detection method and device
CN107766803A Scene-segmentation-based video character dressing method, apparatus and computing device
JP2002358515A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 207A, 2nd floor, No. 2 Information Road, Haidian District, Beijing 100085 (1-8th floor, Building D, 2-2, Beijing Shichuang High-Tech Development Corporation)

Applicant after: QILIN HESHENG NETWORK TECHNOLOGY Inc.

Address before: Room 207A, 2nd floor, No. 2 Information Road, Haidian District, Beijing 100085 (1-8th floor, Building D, 2-2, Beijing Shichuang High-Tech Development Corporation)

Applicant before: QILIN HESHENG NETWORK TECHNOLOGY Inc.

GR01 Patent grant