CN109801224A - Image processing method, device, server, and storage medium - Google Patents

Image processing method, device, server, and storage medium Download PDF

Info

Publication number
CN109801224A
CN109801224A (application CN201811474346.XA)
Authority
CN
China
Prior art keywords
picture
processed
feature
content type
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811474346.XA
Other languages
Chinese (zh)
Inventor
蒋紫东
钟韬
冯巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201811474346.XA priority Critical patent/CN109801224A/en
Publication of CN109801224A publication Critical patent/CN109801224A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an image processing method, device, server, and storage medium. The method comprises: detecting whether a picture to be processed is a high-definition, vividly colored picture; detecting whether the picture to be processed has already undergone enhancement processing; when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already been enhanced, performing content classification on the picture to be processed to obtain its content category; and, according to the content category, performing color enhancement processing on the picture to be processed with a color enhancement model corresponding to the content category, to obtain a target picture. The present invention realizes automatic enhancement processing of pictures, improves the efficiency of enhancement processing, and reduces labor cost.

Description

Image processing method, device, server, and storage medium
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, device, server, and storage medium.
Background technique
With the rapid development of Internet technology, users can not only browse videos on the Internet but also shoot and upload videos of their own to share; that is, UGC (User Generated Content) has emerged.
A large number of user-uploaded videos appear on UGC Internet platforms every day, and each video has a corresponding cover picture. However, because users lack professional shooting skills and equipment, cover pictures may suffer from problems such as being too bright or too dark, having dull colors, containing heavy noise introduced by shooting in low light, or exhibiting compression blocking artifacts caused by improper compression. Such low-quality cover pictures in information feeds make users unwilling to click on the videos, so the cover pictures need to be enhanced.
In the prior art, cover pictures are usually enhanced manually, which consumes considerable manpower and time.
Summary of the invention
In view of the above problems, embodiments of the present invention are proposed to provide an image processing method, device, server, and storage medium that overcome, or at least partially solve, the above problems.
According to a first aspect of the present invention, an image processing method is provided, comprising:
detecting whether a picture to be processed is a high-definition, vividly colored picture;
detecting whether the picture to be processed is a picture that has already undergone enhancement processing;
when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already undergone enhancement processing, performing content classification on the picture to be processed to obtain its content category;
according to the content category, performing color enhancement processing on the picture to be processed with a color enhancement model corresponding to the content category, to obtain a target picture.
Optionally, performing content classification on the picture to be processed to obtain its content category comprises:
inputting the picture to be processed into a pre-generated picture classification model for feature extraction and content classification, to obtain the content category of the picture to be processed, the picture classification model being a model based on a convolutional neural network.
Optionally, inputting the picture to be processed into the pre-generated picture classification model for feature extraction and content classification, to obtain the content category of the picture to be processed, comprises:
performing feature extraction on the picture to be processed with the conventional part of the MobileNet in the picture classification model, to obtain feature extraction data;
successively applying global average pooling and fully connected processing to the feature extraction data, to obtain category data whose count equals the number of preset content categories;
normalizing the category data with a softmax (normalized exponential) function, to obtain the probability that the picture to be processed belongs to each preset content category;
determining the content category of the picture to be processed according to the probabilities that it belongs to the preset content categories.
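The classification head just described — global average pooling, a fully connected layer, softmax normalization, and an argmax over the preset content categories — can be sketched in plain Python. This is a minimal illustration, not the patent's implementation; the category names and input scores below are hypothetical.

```python
import math

def softmax(scores):
    # Subtract the max before exponentiating for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, categories):
    # Pick the preset content category with the highest softmax probability.
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return categories[best], probs

# Hypothetical fully-connected outputs for four preset categories.
categories = ["person", "cartoon", "landscape", "other"]
label, probs = classify([2.0, 0.5, 1.0, 0.1], categories)
```

The softmax outputs sum to 1, so each value can be read directly as the probability that the picture belongs to the corresponding preset category.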
Optionally, before the color enhancement processing is performed on the picture to be processed with the color enhancement model corresponding to the content category, the method further comprises: according to the content category, denoising the picture to be processed with a denoising model corresponding to the content category, to obtain a denoised picture;
and performing color enhancement processing on the picture to be processed with the color enhancement model corresponding to the content category, to obtain the target picture, comprises: according to the content category, performing color enhancement processing on the denoised picture with the color enhancement model corresponding to the content category, to obtain the target picture.
Optionally, according to the content category, denoising the picture to be processed with the corresponding denoising model, to obtain a denoised picture, comprises:
according to the content category, applying six convolutional layers of feature extraction and activation to the picture to be processed with the corresponding denoising model, to obtain 64 feature maps, where the six convolutional layers each use 64 3 × 3 convolution kernels and the activation function is the rectified linear unit (ReLU);
performing feature extraction on the 64 feature maps with three 3 × 3 convolution kernels, to obtain the denoised picture.
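The layer sizes quoted above pin down the denoising model's parameter count. The sketch below computes it under common assumptions (3 × 3 convolutions with one bias per filter, an RGB input); the patent does not state the padding or bias choices, so the exact figure is illustrative.

```python
def conv_params(in_ch, out_ch, k=3, bias=True):
    # Weights: k*k*in_ch per output filter, plus one bias per filter.
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

# Six 64-filter 3x3 layers on a 3-channel input, then a 3-filter 3x3 output layer.
channels = [3] + [64] * 6 + [3]
total = sum(conv_params(i, o) for i, o in zip(channels, channels[1:]))
```

Under these assumptions the model has 188,163 parameters, which is small enough that one denoising model per preset category remains practical.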
Optionally, before the picture to be processed is denoised with the corresponding denoising model according to the content category, the method further comprises:
for each preset category, training a convolutional neural network with a preset number of picture samples, to obtain the denoising model corresponding to the preset category.
Optionally, after the color enhancement processing is performed on the picture to be processed with the color enhancement model corresponding to the content category to obtain the target picture, the method further comprises:
according to the content category, denoising the target picture with the denoising model corresponding to the content category, to obtain a denoised target picture.
Optionally, before the color enhancement processing is performed on the picture to be processed with the color enhancement model corresponding to the content category, the method further comprises:
for each preset category, applying degradation operations to a preset number of good picture samples, to obtain a preset number of poor picture samples;
training a convolutional neural network based on the Unet structure with the good and poor picture samples corresponding to the preset category, to obtain the color enhancement model corresponding to the preset category.
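Training pairs for the color enhancement model are produced by degrading good samples. The patent does not specify which degradation operations are used; the sketch below shows one hypothetical choice — compressing contrast around mid-gray and darkening 8-bit pixel values — purely to illustrate how a good sample yields a paired poor sample.

```python
def degrade(pixels, contrast=0.6, brightness=-20):
    # Compress contrast around mid-gray (128), darken, then clamp to [0, 255].
    out = []
    for p in pixels:
        v = (p - 128) * contrast + 128 + brightness
        out.append(max(0, min(255, round(v))))
    return out

# A good sample's pixel values and its synthesized poor counterpart.
good = [0, 64, 128, 200, 255]
poor = degrade(good)
```

The network is then trained with the poor sample as input and the good sample as target, so that at inference time it maps degraded cover pictures toward their enhanced versions.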
Optionally, according to the content category, performing color enhancement processing on the picture to be processed with the color enhancement model corresponding to the content category, to obtain the target picture, comprises:
according to the content category, applying the convolution operations of 5 convolutional layers to the picture to be processed with the corresponding color enhancement model, to obtain a first feature map with 128 channels, where the length of the first feature map is 1/16 of the length of the picture to be processed, the width of the first feature map is 1/16 of the width of the picture to be processed, the length and width of each convolutional layer's output feature map are 1/2 of those of the previous layer's output, and the picture to be processed contains data for the three channels R, G, and B;
after applying two further convolution operations to the 128-channel first feature map, obtaining a second feature map with 128 channels, whose length is 1/64 of the length of the picture to be processed and whose width is 1/64 of the width of the picture to be processed;
applying a fully connected operation to the 128-channel second feature map, to obtain 128 data values;
globally splicing (tiling) the 128 data values, to obtain a third feature map with 128 channels, whose length is 1/16 of the length of the picture to be processed and whose width is 1/16 of the width of the picture to be processed;
locally splicing (concatenating) the 128-channel third feature map with the 128-channel first feature map, to obtain a fourth feature map with 256 channels, whose length is 1/16 of the length of the picture to be processed and whose width is 1/16 of the width of the picture to be processed;
applying a convolution operation to the 256-channel fourth feature map, to obtain a fifth feature map with 128 channels;
successively applying the convolution operations of 4 convolutional layers and local splicing to the 128-channel fifth feature map, to obtain a sixth feature map with 48 channels;
applying a convolution operation to the 48-channel sixth feature map, to obtain a seventh feature map with 16 channels;
applying a convolution operation to the 16-channel seventh feature map, to obtain an eighth feature map with 3 channels;
performing a residual operation on the 3-channel eighth feature map and the picture to be processed, to obtain the target picture.
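The chain of feature-map sizes above can be checked with simple arithmetic. The sketch below assumes an input whose height and width divide evenly by 64, as the 1/16 and 1/64 ratios require; the concrete 512 × 768 input is only an example, not a size taken from the patent.

```python
def down(size, factor):
    # The patent's 1/16 and 1/64 sizes assume the input divides evenly.
    assert size % factor == 0, "input dimension must be divisible by the factor"
    return size // factor

h, w = 512, 768                       # hypothetical picture-to-process size
f1 = (down(h, 16), down(w, 16), 128)  # first feature map: 1/16 size, 128 channels
f2 = (down(h, 64), down(w, 64), 128)  # second feature map: 1/64 size, 128 channels
f4 = (down(h, 16), down(w, 16), 256)  # after local splicing of the third and first maps
```

The global splice broadcasts the 128 fully connected values back to the 1/16-size grid, which is why the third feature map matches the first in spatial size and can be concatenated with it channel-wise.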
Optionally, detecting whether the picture to be processed is a high-definition, vividly colored picture comprises:
applying five convolutional layers of feature extraction and activation to the picture to be processed, to obtain 64 feature maps, where the convolutional layers use 64 3 × 3 convolution kernels and the activation function is the rectified linear unit (ReLU);
applying global average pooling to the 64 feature maps, to obtain 64 data values;
applying fully connected processing to the 64 data values, so that the output is two data values;
classifying the two data values with a Sigmoid function, to obtain a yes-or-no output result.
According to a second aspect of the present invention, a picture processing device is provided, comprising:
a high-definition detection module, configured to detect whether a picture to be processed is a high-definition, vividly colored picture;
a processed-picture detection module, configured to detect whether the picture to be processed is a picture that has already undergone enhancement processing;
a classification module, configured to, when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already undergone enhancement processing, perform content classification on the picture to be processed to obtain its content category;
a color enhancement module, configured to, according to the content category, perform color enhancement processing on the picture to be processed with a color enhancement model corresponding to the content category, to obtain a target picture.
Optionally, the classification module comprises:
a classification unit, configured to input the picture to be processed into a pre-generated picture classification model for feature extraction and content classification, to obtain the content category of the picture to be processed, the picture classification model being a model based on a convolutional neural network.
Optionally, the classification unit is specifically configured to:
perform feature extraction on the picture to be processed with the conventional part of the MobileNet in the picture classification model, to obtain feature extraction data;
successively apply global average pooling and fully connected processing to the feature extraction data, to obtain category data whose count equals the number of preset content categories;
normalize the category data with a softmax (normalized exponential) function, to obtain the probability that the picture to be processed belongs to each preset content category;
determine the content category of the picture to be processed according to the probabilities that it belongs to the preset content categories.
Optionally, the device further comprises:
a first denoising module, configured to, before the color enhancement model corresponding to the content category performs color enhancement processing on the picture to be processed, denoise the picture to be processed with a denoising model corresponding to the content category according to the content category, to obtain a denoised picture;
the color enhancement module is specifically configured to:
according to the content category, perform color enhancement processing on the denoised picture with the color enhancement model corresponding to the content category, to obtain the target picture.
Optionally, the first denoising module is specifically configured to:
according to the content category, apply six convolutional layers of feature extraction and activation to the picture to be processed with the corresponding denoising model, to obtain 64 feature maps, where the six convolutional layers each use 64 3 × 3 convolution kernels and the activation function is the rectified linear unit (ReLU);
perform feature extraction on the 64 feature maps with three 3 × 3 convolution kernels, to obtain the denoised picture.
Optionally, the device further comprises:
a denoising model training module, configured to, for each preset category, train a convolutional neural network with a preset number of picture samples, to obtain the denoising model corresponding to the preset category.
Optionally, the device further comprises:
a second denoising module, configured to, after the color enhancement model corresponding to the content category performs color enhancement processing on the picture to be processed to obtain the target picture, denoise the target picture with the denoising model corresponding to the content category according to the content category, to obtain a denoised target picture.
Optionally, the device further comprises:
a degradation module, configured to, for each preset category, apply degradation operations to a preset number of good picture samples, to obtain a preset number of poor picture samples;
a color enhancement model training module, configured to train a convolutional neural network based on the Unet structure with the good and poor picture samples corresponding to the preset category, to obtain the color enhancement model corresponding to the preset category.
Optionally, the color enhancement module is specifically configured to:
according to the content category, apply the convolution operations of 5 convolutional layers to the picture to be processed with the color enhancement model corresponding to the content category, to obtain a first feature map with 128 channels, where the length of the first feature map is 1/16 of the length of the picture to be processed, the width of the first feature map is 1/16 of the width of the picture to be processed, the length and width of each convolutional layer's output feature map are 1/2 of those of the previous layer's output, and the picture to be processed contains data for the three channels R, G, and B;
after applying two further convolution operations to the 128-channel first feature map, obtain a second feature map with 128 channels, whose length is 1/64 of the length of the picture to be processed and whose width is 1/64 of the width of the picture to be processed;
apply a fully connected operation to the 128-channel second feature map, to obtain 128 data values;
globally splice (tile) the 128 data values, to obtain a third feature map with 128 channels, whose length is 1/16 of the length of the picture to be processed and whose width is 1/16 of the width of the picture to be processed;
locally splice (concatenate) the 128-channel third feature map with the 128-channel first feature map, to obtain a fourth feature map with 256 channels, whose length is 1/16 of the length of the picture to be processed and whose width is 1/16 of the width of the picture to be processed;
apply a convolution operation to the 256-channel fourth feature map, to obtain a fifth feature map with 128 channels;
successively apply the convolution operations of 4 convolutional layers and local splicing to the 128-channel fifth feature map, to obtain a sixth feature map with 48 channels;
apply a convolution operation to the 48-channel sixth feature map, to obtain a seventh feature map with 16 channels;
apply a convolution operation to the 16-channel seventh feature map, to obtain an eighth feature map with 3 channels;
perform a residual operation on the 3-channel eighth feature map and the picture to be processed, to obtain the target picture.
Optionally, the high-definition detection module is specifically configured to:
apply five convolutional layers of feature extraction and activation to the picture to be processed, to obtain 64 feature maps, where the convolutional layers use 64 3 × 3 convolution kernels and the activation function is the rectified linear unit (ReLU);
apply global average pooling to the 64 feature maps, to obtain 64 data values;
apply fully connected processing to the 64 data values, so that the output is two data values;
classify the two data values with a Sigmoid function, to obtain a yes-or-no output result.
According to a third aspect of the present invention, a server is provided, comprising: a processor, a memory, and a computer program stored on the memory and runnable on the processor, where the computer program, when executed by the processor, implements the image processing method described in the first aspect.
According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method described in the first aspect.
Compared with the prior art, the present invention has the following advantages:
The image processing method, device, server, and storage medium provided by the present invention detect whether a picture to be processed is a high-definition, vividly colored picture and whether it has already undergone enhancement processing; when the picture to be processed is neither, they perform content classification on the picture to be processed to obtain its content category and, according to that category, perform color enhancement processing on it with the corresponding color enhancement model to obtain a target picture. This realizes automatic enhancement processing of pictures, improves the efficiency of enhancement processing, and reduces labor cost.
The above is only an overview of the technical solution of the present invention. To make the technical means of the present invention better understood and implementable in accordance with the contents of the specification, and to make the above and other objects, features, and advantages of the present invention clearer and more comprehensible, specific embodiments of the present invention are set forth below.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention.
Fig. 1 is a step flow chart of an image processing method provided by an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of the convolutional neural network used by the high-definition picture detector in an embodiment of the present invention;
Fig. 3 is a flow chart of detecting whether a picture to be processed is a high-definition, vividly colored picture in an embodiment of the present invention;
Fig. 4 is a step flow chart of another image processing method provided by an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of the convolutional neural network used by the picture classification model in an embodiment of the present invention;
Fig. 6 is a flow chart of classifying a picture to be processed with the picture classification model in an embodiment of the present invention;
Fig. 7 is a step flow chart of another image processing method provided by an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of the denoising model in an embodiment of the present invention;
Fig. 9 is a step flow chart of another image processing method provided by an embodiment of the present invention;
Fig. 10 is a step flow chart of another image processing method provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of the Unet structure in an embodiment of the present invention;
Fig. 12 is a flow chart of performing color enhancement processing on a picture to be processed with the color enhancement model corresponding to its content category in an embodiment of the present invention;
Fig. 13 is a step flow chart of another image processing method provided by an embodiment of the present invention;
Fig. 14 is a block diagram of the picture processing system used by the image processing method provided by an embodiment of the present invention;
Fig. 15 is a structural block diagram of a picture processing device provided by an embodiment of the present invention;
Fig. 16 is a structural block diagram of a server provided by an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the present invention will be thoroughly understood and its scope fully conveyed to those skilled in the art.
Fig. 1 is a step flow chart of an image processing method provided by an embodiment of the present invention. The method can be applied to a server. As shown in Fig. 1, the method may comprise:
Step 101: detect whether a picture to be processed is a high-definition, vividly colored picture.
Whether a picture to be processed is a high-definition, vividly colored picture can be detected by a pre-generated high-definition picture detector. If the picture to be processed is already high-definition and vividly colored, there is no need to enhance it further, which avoids the over-saturated colors that further enhancement would cause. The high-definition picture detector can use any of a variety of deep convolutional neural network models to detect whether the picture to be processed is both high-definition and vividly colored. A "high-definition, vividly colored picture" means a picture that is both high-definition and vividly colored.
Illustratively, Fig. 2 is a structural schematic diagram of the convolutional neural network used by the high-definition picture detector in an embodiment of the present invention. As shown in Fig. 2, the high-definition picture detector can use a 5-layer convolutional neural network in which each of the 5 convolutional layers uses 64 3 × 3 convolution kernels to extract 64 features. After the input picture passes through a convolutional layer and 64 features are extracted, it is activated by a ReLU (Rectified Linear Unit) and enters the next convolutional layer. After feature extraction by the convolutional layers, the data enters a pooling layer, which applies global average pooling, and then a fully connected layer, so that the output is reduced to two values; these are classified by a Sigmoid function to obtain a final yes-or-no output. When the convolutional neural network is trained to generate the high-definition picture detector, 100,000 high-definition, vividly colored pictures and 100,000 low-definition, dull-colored pictures can be used as training samples.
Fig. 3 is a flow chart of detecting whether a picture to be processed is a high-definition, vividly colored picture in an embodiment of the present invention, using the high-definition picture detector shown in Fig. 2. As shown in Fig. 3, the detection comprises the following steps:
Step 1011: apply five convolutional layers of feature extraction and activation to the picture to be processed, to obtain 64 feature maps; the convolutional layers use 64 3 × 3 convolution kernels, and the activation function is the rectified linear unit.
Step 1012: apply global average pooling to the 64 feature maps, to obtain 64 data values.
Step 1013: apply fully connected processing to the 64 data values, so that the output is two data values.
Step 1014: classify the two data values with a Sigmoid function, to obtain a yes-or-no output result.
Here the Sigmoid function serves as the threshold function of the neural network, mapping a variable to a value between 0 and 1. The value output by the Sigmoid function is compared with a threshold to determine the output result: when the Sigmoid output is greater than or equal to the threshold, the output result is determined to be yes; when the Sigmoid output is less than the threshold, the output result is determined to be no.
Step 102: detect whether the picture to be processed is a picture that has already undergone enhancement processing.
Whether the picture to be processed has already undergone enhancement processing can be detected by a pre-generated enhanced-picture detector. If the picture to be processed has already been enhanced, there is no need to enhance it again, which avoids duplicate processing and saves processing time. The enhanced-picture detector can use any of a variety of deep convolutional neural network models.
Illustratively, the enhanced-picture detector can also use the 5-layer convolutional neural network structure shown in Fig. 2: each of the 5 convolutional layers uses 64 3 × 3 convolution kernels to extract 64 features; after the input picture passes through a convolutional layer and 64 features are extracted, it is activated by a ReLU and enters the next convolutional layer; after feature extraction by the convolutional layers, the data enters a pooling layer, which applies global average pooling, and then a fully connected layer, so that the output is reduced to two values, which are classified by a Sigmoid function to obtain a final yes-or-no output. When the convolutional neural network is trained to generate the enhanced-picture detector, 100,000 pictures that have not undergone enhancement processing and 100,000 pictures that have undergone enhancement processing can be used as training samples.
Optionally, detecting whether the picture to be processed is a picture that has already undergone enhancement processing comprises: applying five convolutional layers of feature extraction and activation to the picture to be processed, to obtain 64 feature maps, where the convolutional layers use 64 3 × 3 convolution kernels and the activation function is the rectified linear unit; applying global average pooling to the 64 feature maps, to obtain 64 data values; applying fully connected processing to the 64 data values, so that the output is two data values; classifying the two data values with a Sigmoid function, to obtain a yes-or-no output result.
It should be noted that the order of steps 101 and 102 is not limited: step 101 may be performed before step 102, step 102 may be performed before step 101, or the two steps may be performed simultaneously. As long as the picture to be processed is either a high-definition picture with vivid colors or a picture that has undergone enhancement processing, subsequent color enhancement processing can be skipped.
Step 103: when the picture to be processed is neither a high-definition picture with vivid colors nor a picture that has undergone enhancement processing, perform content classification on the picture to be processed to obtain its content category.
The content category is one of a set of preset content categories, i.e. predefined categories of picture content. For example, the preset categories may include person, cartoon, landscape, and other; the content category is then one of person, cartoon, landscape, or other. The picture to be processed is a picture on which enhancement processing is to be performed, for example a cover image in user-generated content (UGC).
In one feasible embodiment, features corresponding to each content category are stored in advance; features are extracted from the picture to be processed and matched against the stored category features to determine the content category of the picture. Alternatively, in another feasible embodiment, a classification model is trained in advance, and the picture to be processed is input into the trained classification model to obtain its category.
Step 104: according to the content category, perform color enhancement processing on the picture to be processed through a color enhancement model corresponding to the content category to obtain a target picture.
The color enhancement model may be an artificial-intelligence model, such as a model based on a convolutional neural network. Color enhancement models correspond one-to-one with the preset content categories, i.e. each preset content category has its own color enhancement model.
By inputting the picture to be processed into the color enhancement model corresponding to its content category, the model performs color enhancement processing on the picture to obtain the target picture, which has more vivid colors than the picture to be processed.
In the image processing method provided by this embodiment, it is detected whether the picture to be processed is a high-definition picture with vivid colors and whether it has undergone enhancement processing; when it is neither, content classification is performed on the picture to obtain its content category, and color enhancement is performed through the color enhancement model corresponding to that category to obtain the target picture. This realizes automatic enhancement processing of pictures, improves the efficiency of enhancement processing, and reduces labor costs.
Fig. 4 is a flowchart of another image processing method provided by an embodiment of the present invention. On the basis of the above embodiment, performing content classification on the picture to be processed to obtain its content category further comprises: inputting the picture to be processed into a pre-generated picture classification model, which performs feature extraction and content classification to obtain the content category of the picture to be processed, where the picture classification model is a model based on a convolutional neural network. As shown in Fig. 4, the method may comprise:
Step 401: detect whether the picture to be processed is a high-definition picture with vivid colors.
This step is identical to step 101 in the above embodiment and is not repeated here.
Step 402: detect whether the picture to be processed has undergone enhancement processing.
This step is identical to step 102 in the above embodiment and is not repeated here.
Step 403: when the picture to be processed is neither a high-definition picture with vivid colors nor a picture that has undergone enhancement processing, input it into the pre-generated picture classification model, which performs feature extraction and content classification to obtain the content category of the picture to be processed; the picture classification model is a model based on a convolutional neural network.
A convolutional neural network is a feedforward neural network whose artificial neurons respond to surrounding units, suitable for large-scale image processing. It may comprise convolutional layers, pooling layers, and fully connected layers. First, the number of layers of the picture classification model and the neurons of each layer are determined as needed; then a large number of samples of known categories are input into the picture classification model to train it and obtain the weight of each neuron, yielding the trained picture classification model.
When the picture to be processed needs to be classified, it is input into the trained picture classification model, and its content category is determined from the output.
Fig. 5 is a schematic diagram of the convolutional neural network used by the picture classification model in the embodiment of the present invention. As shown in Fig. 5, the picture classification model may comprise: the convolutional part of MobileNet, a global average pooling part, a fully connected layer, and a normalization part. MobileNet is a lightweight deep neural network proposed by Google for embedded devices such as mobile phones; it is relatively mature and fast to compute. The input picture passes through the MobileNet convolutions for feature extraction, the pooling layer applies global average pooling, the fully connected layer outputs as many values as there are preset content categories, and the normalization part applies a softmax function with the same number of classes as the preset content categories, for example a 4-class softmax, to obtain the specific content category. In practice, training samples can be collected according to the preset content categories; for example, 200,000 sample pictures can be used for training: 50,000 person pictures, 50,000 cartoon pictures, 50,000 landscape pictures, and 50,000 other pictures.
Fig. 6 is a flowchart of classifying the picture to be processed with the picture classification model in the embodiment of the present invention; the picture classification model uses the network structure shown in Fig. 5. As shown in Fig. 6, inputting the picture to be processed into the pre-generated picture classification model for feature extraction and content classification to obtain its content category comprises the following steps:
Step 2031: perform feature extraction on the picture to be processed through the MobileNet convolutional part of the picture classification model to obtain feature extraction data;
Step 2032: successively apply global average pooling and a fully connected layer to the feature extraction data to obtain as many category values as there are preset content categories;
Step 2033: normalize the category values with a softmax function to obtain the probability that the picture to be processed belongs to each preset content category;
Step 2034: determine the content category of the picture to be processed according to the probability that it belongs to each preset content category.
After the probability that the picture to be processed belongs to each preset content category is obtained, the preset content category with the highest probability is taken as the content category of the picture to be processed.
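Steps 2033 and 2034 can be condensed into a short sketch. The MobileNet backbone and fully connected layer are replaced by placeholder score values; only the 4-class softmax and the highest-probability selection follow the text:

```python
import numpy as np

CATEGORIES = ["person", "cartoon", "landscape", "other"]  # preset content categories

def classify(category_scores):
    """Softmax-normalize the fully connected outputs (step 2033) and
    pick the category with the highest probability (step 2034)."""
    scores = np.asarray(category_scores, dtype=float)
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    probs = exp / exp.sum()
    return CATEGORIES[int(np.argmax(probs))], probs

# Placeholder fully connected outputs for illustration:
category, probs = classify([2.0, 0.5, 1.0, 0.1])
print(category)  # -> person
```

The probabilities always sum to 1, so the highest-scoring class is also the highest-probability class.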
Step 404: according to the content category, perform color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category to obtain the target picture.
This step is identical to step 104 in the above embodiment and is not repeated here.
In the image processing method provided by this embodiment, the picture to be processed is input into the pre-generated picture classification model for feature extraction and content classification, yielding a more accurate content category, which benefits the subsequent color enhancement processing and produces a better enhancement result.
Fig. 7 is a flowchart of another image processing method provided by an embodiment of the present invention. As shown in Fig. 7, the method may comprise:
Step 701: detect whether the picture to be processed is a high-definition picture with vivid colors.
This step is identical to step 101 in the above embodiment and is not repeated here.
Step 702: detect whether the picture to be processed has undergone enhancement processing.
This step is identical to step 102 in the above embodiment and is not repeated here.
Step 703: when the picture to be processed is neither a high-definition picture with vivid colors nor a picture that has undergone enhancement processing, perform content classification on the picture to be processed to obtain its content category.
This step is identical to step 103 in the above embodiment and is not repeated here.
Step 704: for a preset number of picture samples of a preset category, train a convolutional neural network to obtain the denoising model corresponding to the preset category.
A preset category is a predefined content category of pictures, one of the preset content categories, and may include person, cartoon, landscape, or other. The picture samples comprise high-definition picture samples and corresponding noise samples.
For each preset category, a preset number of high-definition picture samples are chosen, and Gaussian noise and JPEG (Joint Photographic Experts Group) noise are added to them to obtain the preset number of noise samples. Adding JPEG noise means applying JPEG compression to the high-definition picture samples.
The features to be extracted are determined according to the preset category, the corresponding convolution kernels are determined, and a denoising model based on a convolutional neural network is established. The preset number of noise samples are used as the input of the convolutional neural network and the preset number of high-definition picture samples as its output; the network is trained to obtain the weight of each convolution kernel, forming the denoising model corresponding to the preset category.
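The construction of training pairs can be sketched as follows. The noise standard deviation is an assumed illustrative value, and the JPEG step is only indicated in a comment, since the text does not specify a compression quality:

```python
import numpy as np

def make_noise_sample(clean, sigma=10.0, seed=0):
    """Degrade a high-definition sample into a noise sample by adding
    Gaussian noise. Applying JPEG compression afterwards (e.g. saving
    and reloading the image with a lossy quality setting) would add
    the JPEG noise described in the text. sigma is illustrative.
    """
    rng = np.random.default_rng(seed)
    noisy = clean.astype(float) + rng.normal(0.0, sigma, clean.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

clean = np.full((8, 8, 3), 128, dtype=np.uint8)  # placeholder "high-definition" patch
noisy = make_noise_sample(clean)
print(noisy.shape, noisy.dtype)
```

The (noisy, clean) pair then serves as one input/output training sample for the denoising network.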
The denoising model may use a fully convolutional structure. Fig. 8 is a schematic diagram of the denoising model in the embodiment of the present invention. As shown in Fig. 8, the denoising model may be a 7-layer fully convolutional structure, with a ReLU excitation layer after each convolutional layer. Each of the first six convolutional layers extracts 64 features using 64 3×3 convolution kernels; the last layer uses 3 3×3 convolution kernels. The excitation layer applies a nonlinear mapping to the output of the convolutional layer; ReLU (Rectified Linear Unit) is an excitation function for convolutional neural networks with fast convergence and simple gradient computation.
Step 705: according to the content category, perform denoising on the picture to be processed through the denoising model corresponding to the content category to obtain a denoised picture.
The denoising model may be a model based on a convolutional neural network; preset content categories and denoising models correspond one-to-one, i.e. each preset content category has its own denoising model.
After the content category of the picture to be processed is determined, the picture is input into the denoising model corresponding to that category, which denoises it to obtain the denoised picture. Denoising removes background noise introduced by shooting in low light as well as other noise.
Optionally, performing denoising on the picture to be processed through the corresponding denoising model according to the content category to obtain the denoised picture comprises:
according to the content category, performing feature extraction and activation through six convolutional layers of the corresponding denoising model on the picture to be processed to obtain 64 feature maps, where the six convolutional layers use 64 3×3 convolution kernels and the activation function is the rectified linear unit; and
performing feature extraction on the 64 feature maps with 3 3×3 convolution kernels to obtain the denoised picture.
When denoising the picture to be processed with the denoising model shown in Fig. 8, the denoising model corresponding to the determined content category is first selected, and the picture to be processed is input into it. After the model performs feature extraction and activation on the picture, 64 feature maps are obtained; a final convolutional layer then extracts 3-channel features, i.e. the features of the R, G, and B channels, yielding the denoised picture.
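Under the stated architecture (six 64-kernel 3×3 layers plus a final 3-kernel 3×3 layer), the channel counts propagate as sketched below. The use of padding 1 and stride 1 is an assumption on my part: a fully convolutional denoiser must keep the output the same spatial size as the input, and 3×3 kernels with padding 1 achieve that:

```python
# Channel progression of the 7-layer fully convolutional denoiser of Fig. 8.
# Assumes 3x3 convolutions with padding 1 and stride 1, so H and W are preserved.
LAYERS = [64] * 6 + [3]  # six 64-kernel layers + final 3-kernel (R, G, B) layer

def propagate(shape):
    """Track (channels, H, W) through the denoising network."""
    channels, h, w = shape
    trace = [(channels, h, w)]
    for out_channels in LAYERS:
        channels = out_channels          # conv changes only the channel count
        trace.append((channels, h, w))   # ReLU after each of the first six layers
    return trace

trace = propagate((3, 512, 512))
print(trace[0])   # input: 3-channel picture -> (3, 512, 512)
print(trace[-1])  # output: denoised 3-channel picture, same H and W -> (3, 512, 512)
```

The input and output shapes match, so the denoised picture can directly replace the original in the subsequent color enhancement step.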
Step 706: according to the content category, perform color enhancement processing on the denoised picture through the color enhancement model corresponding to the content category to obtain the target picture.
In the image processing method provided by this embodiment, the picture to be processed is denoised by the denoising model corresponding to its content category, and color enhancement processing is then applied to the denoised picture according to the content category, further improving the presentation of the picture.
Fig. 9 is a flowchart of another image processing method provided by an embodiment of the present invention. As shown in Fig. 9, the method may comprise:
Step 901: detect whether the picture to be processed is a high-definition picture with vivid colors.
This step is identical to step 101 in the above embodiment and is not repeated here.
Step 902: detect whether the picture to be processed has undergone enhancement processing.
This step is identical to step 102 in the above embodiment and is not repeated here.
Step 903: when the picture to be processed is neither a high-definition picture with vivid colors nor a picture that has undergone enhancement processing, perform content classification on the picture to be processed to obtain its content category.
This step is identical to step 103 in the above embodiment and is not repeated here.
Step 904: according to the content category, perform color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category to obtain the target picture.
This step is identical to step 104 in the above embodiment and is not repeated here.
Step 905: for a preset number of picture samples of a preset category, train a convolutional neural network to obtain the denoising model.
This step is identical to step 704 in the above embodiment and is not repeated here.
Step 906: according to the content category, perform denoising on the target picture through the denoising model corresponding to the content category to obtain a denoised target picture.
Denoising models and content categories correspond one-to-one; different content categories correspond to different denoising models.
The target picture obtained after color enhancement may still contain background noise introduced by shooting in low light or other noise. In that case, the target picture can be input, according to its content category, into the corresponding denoising model for denoising, yielding the denoised target picture.
In the image processing method provided by this embodiment, denoising the target picture obtained after color enhancement of the picture to be processed further improves the presentation of the picture.
Fig. 10 is a flowchart of another image processing method provided by an embodiment of the present invention. On the basis of the above embodiments, before detecting whether the picture to be processed is a high-definition picture with vivid colors, the method optionally further comprises: for a preset category, performing degradation operations on a preset number of good picture samples to obtain a preset number of bad picture samples; and training a convolutional neural network based on the Unet structure with the preset number of good and bad picture samples to obtain the color enhancement model. As shown in Fig. 10, the method may comprise:
Step 1001: detect whether the picture to be processed is a high-definition picture with vivid colors.
This step is identical to step 101 in the above embodiment and is not repeated here.
Step 1002: detect whether the picture to be processed has undergone enhancement processing.
This step is identical to step 102 in the above embodiment and is not repeated here.
Step 1003: when the picture to be processed is neither a high-definition picture with vivid colors nor a picture that has undergone enhancement processing, perform content classification on the picture to be processed to obtain its content category.
This step is identical to step 103 in the above embodiment and is not repeated here.
Step 1004: perform degradation operations on a preset number of good picture samples of a preset category to obtain a preset number of bad picture samples.
A preset category is a predefined picture category and may include person, cartoon, landscape, or other. Good picture samples are vivid pictures; bad picture samples are obtained by applying degradation operations to the good picture samples, so bad and good samples come in pairs.
For each preset category, a number of good picture samples are chosen, and degradation operations are applied to them, such as at least one of changing the color curve, increasing contrast, and lowering brightness, to obtain bad picture samples.
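The degradation operations named above can be sketched as simple pixel transforms. The contrast and brightness factors below are assumed illustrative values, not parameters from the text:

```python
import numpy as np

def degrade(good, contrast=1.3, brightness=-30.0):
    """Turn a good (vivid) sample into a bad sample by increasing
    contrast and lowering brightness; factors are illustrative.
    A color-curve change could be added as a per-channel mapping."""
    x = good.astype(float)
    x = (x - 127.5) * contrast + 127.5   # increase contrast around mid-gray
    x = x + brightness                   # lower brightness
    return np.clip(x, 0, 255).astype(np.uint8)

good = np.full((4, 4, 3), 200, dtype=np.uint8)  # placeholder vivid patch
bad = degrade(good)
print(int(bad[0, 0, 0]))  # darker than the original 200
```

Each (bad, good) pair then serves as one input/output training sample for the Unet-based color enhancement model of step 1005.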
Step 1005: train a convolutional neural network based on the Unet structure with the preset number of good and bad picture samples corresponding to the preset category to obtain the color enhancement model corresponding to that category.
For each preset category, the corresponding preset number of bad picture samples are used as the input of the convolutional neural network and the preset number of good picture samples as its output; the Unet-based convolutional neural network is trained to obtain the color enhancement model corresponding to the preset category.
Step 1006: according to the content category, perform color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category to obtain the target picture.
Fig. 11 is a schematic diagram of the Unet structure in the embodiment of the present invention. Fig. 12 is a flowchart of performing color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category in the embodiment of the present invention; the color enhancement model uses the structure shown in Fig. 11. As shown in Fig. 12, performing color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category according to the content category to obtain the target picture comprises:
Step 1201: according to the content category, perform the convolution operations of 5 convolutional layers on the picture to be processed through the color enhancement model corresponding to the content category to obtain a first feature map with 128 channels.
The length of the first feature map is 1/16 of the length of the picture to be processed, and its width is 1/16 of the width of the picture to be processed; the length of the feature map output by each convolutional layer is 1/2 of the length of the feature map output by the previous convolutional layer, and likewise for the width. The picture to be processed contains the data of the R, G, and B channels.
As shown in Fig. 11, H, W = 512 means the length and width of the picture are both 512, i.e. the resolution is 512 × 512. The input picture 1101 has resolution 512 × 512 and 3 channels (R, G, B). After the 1st convolutional layer performs convolution and SELU (Scaled Exponential Linear Units) activation, the number of channels becomes 16 while the length and width remain H and W. Through the next three convolutional layers, each with convolution and SELU activation, the length and width of the feature map are halved while the number of channels doubles: after the 2nd convolutional layer, the feature map is H/2 × W/2 with 32 channels; after the 3rd, H/4 × W/4 with 64 channels; after the 4th, H/8 × W/8 with 128 channels. After the 5th convolutional layer performs convolution and SELU activation, the first feature map 1102 is H/16 × W/16, still with 128 channels.
Step 1202: perform two further convolution operations on the 128-channel first feature map to obtain a 128-channel second feature map.
The length of the second feature map is 1/64 of the length of the picture to be processed, and its width is 1/64 of the width of the picture to be processed.
As shown in Fig. 11, after the first feature map 1102 passes through the 6th convolutional layer with convolution and SELU activation, the feature map is reduced to 16 × 16 with 128 channels; after the 7th convolutional layer with convolution and SELU activation, the second feature map 1103 is reduced to 8 × 8, still with 128 channels.
Step 1203: apply a full connection operation to the 128-channel second feature map to obtain 128 values.
As shown in Fig. 11, the full connection operation reduces the second feature map 1103 to 1 × 1 with 128 channels, i.e. 128 values.
Step 1204: perform a global concatenation on the 128 values to obtain a 128-channel third feature map.
The length of the third feature map is 1/16 of the length of the picture to be processed, and its width is 1/16 of the width of the picture to be processed.
As shown in Fig. 11, the 128 values are globally concatenated (global concat) into the third feature map 1104 of size H/16 × W/16, still with 128 channels.
Step 1205: perform a local concatenation of the 128-channel third feature map and the 128-channel first feature map to obtain a 256-channel fourth feature map.
The length of the fourth feature map is 1/16 of the length of the picture to be processed, and its width is 1/16 of the width of the picture to be processed.
As shown in Fig. 11, the third feature map 1104 is locally concatenated with the result of the 5th convolutional layer; the resulting fourth feature map 1105 is H/16 × W/16 with 256 channels.
Step 1206: perform a convolution operation on the 256-channel fourth feature map to obtain a 128-channel fifth feature map.
As shown in Fig. 11, the fourth feature map 1105 passes through the 8th convolutional layer; the resulting fifth feature map 1106 is H/16 × W/16 with 128 channels.
Step 1207: successively perform the convolution operations of 4 convolutional layers and local concatenations on the 128-channel fifth feature map to obtain a 48-channel sixth feature map.
As shown in Fig. 11, the shaded rectangles represent the results of the convolution operations after the fifth feature map 1106, and each blank rectangle immediately following a shaded rectangle represents the result of locally concatenating (local concat) it with the earlier feature map of the same length and width. Specifically: the 9th convolutional layer produces a feature map of H/8 × W/8 with 128 channels, which is locally concatenated with the earlier H/8 × W/8 feature map to give 256 channels; the 10th convolutional layer produces H/4 × W/4 with 128 channels, concatenated with the earlier H/4 × W/4 feature map to give 192 channels; the 11th convolutional layer produces H/2 × W/2 with 64 channels, concatenated with the earlier H/2 × W/2 feature map to give 96 channels; and the 12th convolutional layer, with convolution and upsampling, produces the sixth feature map 1107 of H × W with 32 channels, which is locally concatenated with the earlier H × W feature map to give 48 channels.
Step 1208: perform a convolution operation on the 48-channel sixth feature map to obtain a 16-channel seventh feature map.
As shown in Fig. 11, the sixth feature map 1107 passes through the 13th convolutional layer; the resulting seventh feature map 1108 is H × W with 16 channels.
Step 1209: perform a convolution operation on the 16-channel seventh feature map to obtain a 3-channel eighth feature map.
As shown in Fig. 11, the seventh feature map 1108 passes through the 14th convolutional layer; the resulting eighth feature map 1109 is H × W with 3 channels.
Step 1210: perform a residual operation on the 3-channel eighth feature map and the picture to be processed to obtain the target picture.
As shown in Fig. 11, the eighth feature map 1109 and the input picture 1101 are combined by a residual operation (residual) to obtain the target picture 1110.
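The sizes stated in steps 1201–1210 can be checked with a short shape trace. This only reproduces the channel counts and spatial sizes from the text; it performs no pixel arithmetic and stands in for no actual trained model:

```python
def unet_shapes(h=512, w=512):
    """Trace (channels, H, W) through the Unet of Fig. 11 as described
    in steps 1201-1210 (sizes only)."""
    return {
        "input":   (3, h, w),
        "first":   (128, h // 16, w // 16),  # step 1201: 5 conv layers
        "second":  (128, h // 64, w // 64),  # step 1202: two more convs
        "fc":      (128, 1, 1),              # step 1203: full connection
        "third":   (128, h // 16, w // 16),  # step 1204: global concat
        "fourth":  (256, h // 16, w // 16),  # step 1205: local concat with first
        "fifth":   (128, h // 16, w // 16),  # step 1206: conv
        "sixth":   (48, h, w),               # step 1207: 4 convs + local concats
        "seventh": (16, h, w),               # step 1208: conv
        "eighth":  (3, h, w),                # step 1209: conv
        "target":  (3, h, w),                # step 1210: residual with input
    }

s = unet_shapes()
print(s["second"])  # (128, 8, 8) for a 512 x 512 input
print(s["target"])  # (3, 512, 512)
```

The eighth feature map has the same shape as the input, which is what makes the final residual addition of step 1210 well defined.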
On the basis of the above embodiments, the image processing method provided by this embodiment gives the generation process of the color enhancement model.
Fig. 13 is a flowchart of another image processing method provided by an embodiment of the present invention; this embodiment is an optional example on the basis of the above embodiments. Fig. 14 is a block diagram of the picture processing system used by the image processing method provided by the embodiment of the present invention. As shown in Fig. 14, the picture processing system may comprise a detector 1401 for high-definition pictures with vivid colors, an enhanced-picture detector 1402, a picture classification model 1403, and denoising and color enhancement models 1404. The denoising and color enhancement models 1404 comprise a denoising model and a color enhancement model corresponding to each content category; if the content categories are person, cartoon, landscape, and other, then the denoising and color enhancement models 1404 may comprise: denoising and color enhancement model 1 for person, denoising and color enhancement model 2 for cartoon, denoising and color enhancement model 3 for landscape, and denoising and color enhancement model 4 for other.
As shown in Figure 13 and Figure 14, which may include:
Step 1301: the picture to be processed is input into a pre-generated high-definition picture detector, which detects whether the picture to be processed is a high-definition, vividly colored picture.
The high-definition picture detector detects whether the picture to be processed is high-definition and vividly colored; if it is, no subsequent processing is needed and the picture to be processed is returned directly.
Step 1302: the picture to be processed is input into a pre-generated enhanced-picture detector, which detects whether the picture to be processed has already undergone enhancement processing.
The enhanced-picture detector detects whether the picture to be processed has already undergone enhancement processing; if it has, no subsequent processing is needed and the picture to be processed is returned directly.
It should be noted that steps 1301 and 1302 are not limited to the above order: step 1301 may be executed first and step 1302 second, step 1302 may be executed first and step 1301 second, or steps 1301 and 1302 may be executed simultaneously.
Step 1303: when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already undergone enhancement processing, the picture to be processed is input into a pre-generated picture classification model, which performs content classification on the picture to be processed and obtains its content category; the picture classification model is a model based on a convolutional neural network.
When the picture to be processed is neither high-definition and vividly colored nor already enhanced, color enhancement processing is performed on it. Because the enhancement must be performed according to the content category of the picture to be processed, that category must be determined first: the picture to be processed is input into a picture classification model trained in advance, which performs feature extraction and content classification on the picture, thereby determining the content category of the picture to be processed.
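The final decision of step 1303 — turning the classifier's per-category scores into probabilities via the normalized exponential function and picking the most likely content category — can be sketched as follows; the score values are illustrative, not from the patent.

```python
import numpy as np

CATEGORIES = ["person", "cartoon", "landscape", "other"]  # the four classes of Fig. 14

def softmax(scores):
    """Normalized exponential function: maps raw class scores to probabilities."""
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e / e.sum()

def classify(scores):
    """Pick the content category with the highest probability."""
    probs = softmax(np.asarray(scores, dtype=np.float64))
    return CATEGORIES[int(np.argmax(probs))], probs

# illustrative scores, e.g. outputs of the classifier's fully connected layer
label, probs = classify([2.0, 0.5, 0.1, -1.0])
```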
Step 1304: according to the content category, the picture to be processed is input in sequence into the denoising model and the color enhancement model corresponding to that category, which perform denoising and color enhancement processing respectively, obtaining the target picture.
The picture to be processed is input in sequence into the pre-generated denoising model and color enhancement model corresponding to its category, which perform denoising and color enhancement processing respectively, obtaining the target picture. It should be noted that when the picture is input into the denoising model and the color enhancement model in sequence, one feasible implementation is to first input the picture to be processed into the denoising model for denoising, and then input the denoised picture into the color enhancement model for color enhancement, obtaining the target picture; another feasible implementation is to first input the picture to be processed into the color enhancement model for color enhancement, and then input the color-enhanced picture into the denoising model for denoising, obtaining the target picture.
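Both orderings described above can be expressed as one dispatch routine; the stand-in lambdas below only record which model ran and are not real models.

```python
def process(picture, category, denoise, enhance, denoise_first=True):
    """Step 1304: apply the category-specific denoising and color
    enhancement models in sequence, in either of the two feasible orders."""
    stages = (denoise, enhance) if denoise_first else (enhance, denoise)
    for stage in stages:
        picture = stage(picture, category)
    return picture

# toy stand-ins that just log which model was applied
denoise_model = lambda p, c: p + ["denoise:" + c]
enhance_model = lambda p, c: p + ["enhance:" + c]

order_a = process([], "landscape", denoise_model, enhance_model, denoise_first=True)
order_b = process([], "landscape", denoise_model, enhance_model, denoise_first=False)
```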
With the image processing method provided in this embodiment, detecting whether the picture to be processed is a high-definition, vividly colored picture and whether it has already undergone enhancement processing avoids the over-saturated colors and loss of detail that would result from further enhancing and denoising an already high-quality picture, and avoids processing the same picture repeatedly. When the picture to be processed is neither high-definition and vividly colored nor already enhanced, it is enhanced: content classification is performed on the picture to obtain its content category, and the picture is input into the denoising model and color enhancement model corresponding to that category to obtain the target picture. This realizes automatic enhancement of pictures, improves the efficiency of enhancement processing, and reduces labor cost.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Fig. 15 is a structural block diagram of a picture processing apparatus provided by an embodiment of the present invention. As shown in Fig. 15, the picture processing apparatus 1500 may include:
a high-definition detection module 1501, configured to detect whether a picture to be processed is a high-definition, vividly colored picture;
a processed-picture detection module 1502, configured to detect whether the picture to be processed has already undergone enhancement processing;
a classification module 1503, configured to, when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already undergone enhancement processing, perform content classification on the picture to be processed to obtain its content category; and
a color enhancement module 1504, configured to, according to the content category, perform color enhancement processing on the picture to be processed through a color enhancement model corresponding to the content category to obtain a target picture.
Optionally, the classification module includes:
a classification unit, configured to input the picture to be processed into a pre-generated picture classification model, which performs feature extraction and content classification to obtain the content category of the picture to be processed, the picture classification model being a model based on a convolutional neural network.
Optionally, the classification unit is specifically configured to:
perform feature extraction on the picture to be processed through the conventional part of the MobileNet in the picture classification model to obtain feature extraction data;
successively perform global average pooling and fully connected processing on the feature extraction data to obtain category data whose number equals the number of preset content categories;
normalize the category data through a normalized exponential function to obtain the probability that the picture to be processed belongs to each preset content category; and
determine the content category of the picture to be processed according to the probabilities that it belongs to the preset content categories.
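The head described above — global average pooling over the backbone's feature maps, a fully connected layer with one output per preset content category, then the normalized exponential function — can be sketched with random stand-in weights. The 1024-channel, 7 × 7 feature-map shape is an assumption typical of a MobileNet backbone, not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 4                             # person, cartoon, landscape, other

feature_maps = rng.random((1024, 7, 7))     # stand-in for MobileNet feature extraction data
pooled = feature_maps.mean(axis=(1, 2))     # global average pooling -> 1024 values

W = rng.normal(size=(num_classes, 1024))    # fully connected layer (random stand-in weights)
b = np.zeros(num_classes)
category_data = W @ pooled + b              # one score per preset content category

e = np.exp(category_data - category_data.max())
probabilities = e / e.sum()                 # normalized exponential function
```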
Optionally, the apparatus further includes:
a first denoising module, configured to, before the color enhancement processing is performed on the picture to be processed through the color enhancement model corresponding to the content category, perform denoising processing on the picture to be processed through a denoising model corresponding to the content category to obtain a denoised picture;
the color enhancement module being specifically configured to:
according to the content category, perform color enhancement processing on the denoised picture through the color enhancement model corresponding to the content category to obtain the target picture.
Optionally, the first denoising module is specifically configured to:
according to the content category, perform feature extraction and activation processing of six convolutional layers on the picture to be processed through the corresponding denoising model to obtain 64 feature maps, the six convolutional layers each using 64 convolution kernels of size 3 × 3, and the activation function used by the activation processing being the rectified linear unit (ReLU) function; and
perform feature extraction on the 64 feature maps through 3 convolution kernels of size 3 × 3 to obtain the denoised picture.
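A minimal sketch of the described denoising network — six 3 × 3 convolutional layers with 64 kernels and ReLU activation, followed by a 3-kernel 3 × 3 convolution back to an RGB picture — using random stand-in weights. Zero padding to preserve the spatial size is an assumption not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv3x3(x, w):
    """3 x 3 convolution with zero padding. x: (C_in, H, W); w: (C_out, C_in, 3, 3)."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero padding keeps H and W
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for c in range(x.shape[0]):
            for i in range(3):
                for j in range(3):
                    out[o] += w[o, c, i, j] * xp[c, i:i + h, j:j + wd]
    return out

def denoise_forward(picture):
    """Six conv layers of 64 3x3 kernels with ReLU, then 3 kernels of 3x3."""
    x = picture
    in_ch = 3
    for _ in range(6):
        w = 0.01 * rng.normal(size=(64, in_ch, 3, 3))  # stand-in weights
        x = np.maximum(0.0, conv3x3(x, w))             # rectified linear unit
        in_ch = 64
    w_out = 0.01 * rng.normal(size=(3, 64, 3, 3))
    return conv3x3(x, w_out)                           # denoised RGB picture

denoised = denoise_forward(rng.random((3, 8, 8)))
```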
Optionally, the apparatus further includes:
a denoising model training module, configured to, for a preset category, train on a preset number of picture samples using a convolutional neural network to obtain the denoising model corresponding to the preset category.
Optionally, the apparatus further includes:
a second denoising module, configured to, after the color enhancement processing has been performed on the picture to be processed through the color enhancement model corresponding to the content category and the target picture has been obtained, perform denoising processing on the target picture through the denoising model corresponding to the content category to obtain a denoised target picture.
Optionally, the apparatus further includes:
a degradation module, configured to, for a preset category, perform a degradation operation on a preset number of good-quality picture samples to obtain a preset number of poor-quality picture samples; and
a color enhancement model training module, configured to train, according to the preset number of good-quality and poor-quality picture samples corresponding to the preset category, a convolutional neural network based on the Unet structure to obtain the color enhancement model corresponding to the preset category.
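One way to realize the degradation operation is sketched below. The patent does not specify which degradation is used, so Gaussian noise plus desaturation toward gray is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

def degrade(good_sample, noise_std=0.05, desaturation=0.5):
    """Turn a good-quality sample into a paired poor-quality one.
    The specific degradation is an assumption: mix each pixel toward its
    gray value (washing out the colors) and add Gaussian noise."""
    gray = good_sample.mean(axis=-1, keepdims=True)
    washed_out = desaturation * gray + (1.0 - desaturation) * good_sample
    noisy = washed_out + rng.normal(0.0, noise_std, good_sample.shape)
    return np.clip(noisy, 0.0, 1.0)

good = rng.random((8, 8, 3))   # one good-quality sample of a preset category
poor = degrade(good)           # its paired poor-quality sample
```

The resulting (poor, good) pairs can then serve as input/target pairs when training the Unet-based color enhancement network for that category.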
Optionally, the color enhancement module is specifically configured to:
according to the content category, perform the convolution operations of 5 convolutional layers on the picture to be processed through the color enhancement model corresponding to the content category to obtain a 128-channel first feature map, the length of the first feature map being 1/16 of the length of the picture to be processed and its width being 1/16 of the width of the picture to be processed, the length of the feature map output by each convolutional layer being 1/2 of the length of the feature map output by the previous convolutional layer, the width of the feature map output by each convolutional layer being 1/2 of the width of the feature map output by the previous convolutional layer, and the picture to be processed containing data of the three channels R, G and B;
perform two further convolution operations on the 128-channel first feature map to obtain a 128-channel second feature map, the length of the second feature map being 1/64 of the length of the picture to be processed and its width being 1/64 of the width of the picture to be processed;
perform a fully connected operation on the 128-channel second feature map to obtain 128 values;
perform global splicing on the 128 values to obtain a 128-channel third feature map, the length of the third feature map being 1/16 of the length of the picture to be processed and its width being 1/16 of the width of the picture to be processed;
perform local splicing on the 128-channel third feature map and the 128-channel first feature map to obtain a 256-channel fourth feature map, the length of the fourth feature map being 1/16 of the length of the picture to be processed and its width being 1/16 of the width of the picture to be processed;
perform a convolution operation on the 256-channel fourth feature map to obtain a 128-channel fifth feature map;
successively perform the convolution operations of 4 convolutional layers and local splicing on the 128-channel fifth feature map to obtain a 48-channel sixth feature map;
perform a convolution operation on the 48-channel sixth feature map to obtain a 16-channel seventh feature map;
perform a convolution operation on the 16-channel seventh feature map to obtain a 3-channel eighth feature map; and
perform a residual operation on the 3-channel eighth feature map and the picture to be processed to obtain the target picture.
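The feature-map sizes through the color enhancement model above can be traced as simple bookkeeping. The sixth and seventh feature maps are assumed here to be upsampled back to the full picture size; the text states the full size explicitly only for the eighth.

```python
def shape_trace(H, W):
    """Channels and spatial size at each stage of the color enhancement
    model, for an H x W RGB input with H and W divisible by 64."""
    assert H % 64 == 0 and W % 64 == 0
    return [
        ("input picture (R, G, B)",                     3,   H,       W),
        ("first feature map (5 conv layers)",           128, H // 16, W // 16),
        ("second feature map (2 more convolutions)",    128, H // 64, W // 64),
        ("fully connected output (128 values)",         128, 1,       1),
        ("third feature map (global splicing)",         128, H // 16, W // 16),
        ("fourth feature map (local splicing)",         256, H // 16, W // 16),
        ("fifth feature map",                           128, H // 16, W // 16),
        ("sixth feature map (4 convs + local splices)", 48,  H,       W),  # full size assumed
        ("seventh feature map",                         16,  H,       W),  # full size assumed
        ("eighth feature map",                          3,   H,       W),
    ]

trace = shape_trace(256, 256)
```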
Optionally, the high-definition detection module is specifically configured to:
perform feature extraction and activation processing of five convolutional layers on the picture to be processed to obtain 64 feature maps, the convolutional layers using 64 convolution kernels of size 3 × 3 and the activation function used by the activation processing being the rectified linear unit;
perform global average pooling on the 64 feature maps to obtain 64 values;
perform fully connected processing on the 64 values so that the output is two values; and
classify the two values through a Sigmoid function, the output result being yes or no.
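The head of this detector — global average pooling over the 64 feature maps, a fully connected layer producing two values, and a Sigmoid yielding a yes/no decision — can be sketched with random stand-in weights; interpreting the first output as "yes" when it exceeds the second is an assumed convention.

```python
import numpy as np

rng = np.random.default_rng(3)

def hd_detector_head(feature_maps, W, b):
    """Head of the high-definition/vivid-color detector: global average
    pooling over the 64 feature maps, a fully connected layer producing
    two values, and a Sigmoid function."""
    pooled = feature_maps.mean(axis=(1, 2))     # 64 feature maps -> 64 values
    two_values = W @ pooled + b                 # fully connected -> 2 values
    probs = 1.0 / (1.0 + np.exp(-two_values))   # Sigmoid
    return bool(probs[0] > probs[1]), probs     # yes/no decision (assumed convention)

features = rng.random((64, 8, 8))               # stand-in for the conv stack's output
W = rng.normal(size=(2, 64))
b = np.zeros(2)
is_hd, probs = hd_detector_head(features, W, b)
```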
In the picture processing apparatus provided in this embodiment, the high-definition detection module detects whether the picture to be processed is a high-definition, vividly colored picture; the processed-picture detection module detects whether the picture to be processed has already undergone enhancement processing; when the picture to be processed is neither a high-definition, vividly colored picture nor an already enhanced picture, the classification module performs content classification on it to obtain its content category; and the color enhancement module performs color enhancement processing on the picture through the corresponding color enhancement model according to the content category to obtain the target picture. This realizes automatic enhancement of pictures, improves the efficiency of enhancement processing, and reduces labor cost.
As the apparatus embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
Fig. 16 is a structural block diagram of a server provided by an embodiment of the present invention. As shown in Fig. 16, the server 1600 may include a processor 1601 connected to one or more data storage facilities, which may include a memory unit 1602 and a storage medium 1603. The server 1600 may also include an input interface 1604 and an output interface 1605 for communicating with another apparatus or system. The program code executed by the CPU of the processor 1601 may be stored in the memory unit 1602 or the storage medium 1603.
The processor 1601 in the server 1600 calls the program code stored in the memory unit 1602 or the storage medium 1603 to execute the image processing method in the above embodiments.
The storage medium may be read-only, such as a read-only memory (ROM), or readable and writable, such as a hard disk or flash memory. The memory unit may be a random access memory (RAM). The memory unit may be physically integrated with the processor, integrated in the memory, or configured as a separate unit.
The processor is the control center of the server and provides a processing unit for executing instructions, performing interrupt operations, and providing timing and various other functions. Optionally, the processor includes one or more central processing units (CPUs). The server includes one or more processors. A processor may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. Unless otherwise stated, a component such as a processor or memory described as executing a task may be implemented as a general component temporarily used to execute the task at a given time, or as a specific component manufactured specifically to execute the task. The term "processor" as used herein refers to one or more devices, circuits and/or processing cores for processing data, such as computer program instructions.
The program code executed by the CPU of the processor may be stored in the memory unit or the storage medium. Optionally, the program code stored in the storage medium may be copied into the memory unit for execution by the CPU of the processor. The processor may execute at least one kernel (such as LINUX™, UNIX™, WINDOWS™, ANDROID™, IOS™) which, as is well known, controls the operation of the server by controlling the execution of other programs or processes, controlling communication with peripheral apparatuses, and controlling the use of computer apparatus resources.
The above elements in the server may be connected to each other by a bus, such as one of a data bus, an address bus, a control bus, an expansion bus, and a local bus, or any combination thereof.
According to an embodiment of the present invention, a computer-readable storage medium is also provided, on which a computer program is stored. The storage medium may be read-only, such as a read-only memory (ROM), or readable and writable, such as a hard disk or flash memory. When the computer program is executed by a processor, the image processing method of the preceding embodiments is realized.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may be referred to each other.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, may be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, such that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present invention have been described, those skilled in the art, once they know the basic creative concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or terminal device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device including that element.
The image processing method, apparatus, server, and storage medium provided by the present invention have been introduced in detail above. Specific examples are used herein to illustrate the principle and implementation of the invention, and the description of the above embodiments is merely intended to help understand the method of the invention and its core concept. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the invention. In conclusion, the content of this specification should not be construed as limiting the present invention.

Claims (22)

1. An image processing method, characterized by comprising:
detecting whether a picture to be processed is a high-definition, vividly colored picture;
detecting whether the picture to be processed is a picture that has already undergone enhancement processing;
when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already undergone enhancement processing, performing content classification on the picture to be processed to obtain the content category of the picture to be processed; and
according to the content category, performing color enhancement processing on the picture to be processed through a color enhancement model corresponding to the content category to obtain a target picture.
2. The method according to claim 1, characterized in that the performing content classification on the picture to be processed to obtain the content category of the picture to be processed comprises:
inputting the picture to be processed into a pre-generated picture classification model to perform feature extraction and content classification, obtaining the content category of the picture to be processed, the picture classification model being a model based on a convolutional neural network.
3. The method according to claim 2, characterized in that the inputting the picture to be processed into a pre-generated picture classification model to perform feature extraction and content classification, obtaining the content category of the picture to be processed, comprises:
performing feature extraction on the picture to be processed through the conventional part of the MobileNet in the picture classification model to obtain feature extraction data;
successively performing global average pooling and fully connected processing on the feature extraction data to obtain category data whose number equals the number of preset content categories;
normalizing the category data through a normalized exponential function to obtain the probability that the picture to be processed belongs to each preset content category; and
determining the content category of the picture to be processed according to the probabilities that the picture to be processed belongs to the preset content categories.
4. The method according to claim 1, characterized in that:
before the performing, according to the content category, color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category, the method further comprises: according to the content category, performing denoising processing on the picture to be processed through a denoising model corresponding to the content category to obtain a denoised picture; and
the performing, according to the content category, color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category to obtain a target picture comprises: according to the content category, performing color enhancement processing on the denoised picture through the color enhancement model corresponding to the content category to obtain the target picture.
5. The method according to claim 4, characterized in that the performing, according to the content category, denoising processing on the picture to be processed through the denoising model corresponding to the content category to obtain a denoised picture comprises:
according to the content category, performing feature extraction and activation processing of six convolutional layers on the picture to be processed through the corresponding denoising model to obtain 64 feature maps, the six convolutional layers each using 64 convolution kernels of size 3 × 3, and the activation function used by the activation processing being the rectified linear unit function; and
performing feature extraction on the 64 feature maps through 3 convolution kernels of size 3 × 3 to obtain the denoised picture.
6. The method according to claim 4 or 5, characterized in that, before the performing, according to the content category, denoising processing on the picture to be processed through the denoising model corresponding to the content category, the method further comprises:
for a preset category, training on a preset number of picture samples using a convolutional neural network to obtain the denoising model corresponding to the preset category.
7. The method according to claim 1, characterized in that, after the performing, according to the content category, color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category to obtain a target picture, the method further comprises:
according to the content category, performing denoising processing on the target picture through a denoising model corresponding to the content category to obtain a denoised target picture.
8. The method according to claim 1, characterized in that, before the performing, according to the content category, color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category, the method further comprises:
for a preset category, performing a degradation operation on a preset number of good-quality picture samples to obtain a preset number of poor-quality picture samples; and
according to the preset number of good-quality and poor-quality picture samples corresponding to the preset category, training a convolutional neural network based on the Unet structure to obtain the color enhancement model corresponding to the preset category.
9. The method according to claim 1, characterized in that the performing, according to the content category, color enhancement processing on the picture to be processed through the color enhancement model corresponding to the content category to obtain a target picture comprises:
according to the content category, performing the convolution operations of 5 convolutional layers on the picture to be processed through the color enhancement model corresponding to the content category to obtain a 128-channel first feature map, the length of the first feature map being 1/16 of the length of the picture to be processed, the width of the first feature map being 1/16 of the width of the picture to be processed, the length of the feature map output by each convolutional layer being 1/2 of the length of the feature map output by the previous convolutional layer, the width of the feature map output by each convolutional layer being 1/2 of the width of the feature map output by the previous convolutional layer, and the picture to be processed containing data of the three channels R, G and B;
performing two further convolution operations on the 128-channel first feature map to obtain a 128-channel second feature map, the length of the second feature map being 1/64 of the length of the picture to be processed and the width of the second feature map being 1/64 of the width of the picture to be processed;
performing a fully connected operation on the 128-channel second feature map to obtain 128 values;
performing global splicing on the 128 values to obtain a 128-channel third feature map, the length of the third feature map being 1/16 of the length of the picture to be processed and the width of the third feature map being 1/16 of the width of the picture to be processed;
performing local splicing on the 128-channel third feature map and the 128-channel first feature map to obtain a 256-channel fourth feature map, the length of the fourth feature map being 1/16 of the length of the picture to be processed and the width of the fourth feature map being 1/16 of the width of the picture to be processed;
performing a convolution operation on the 256-channel fourth feature map to obtain a 128-channel fifth feature map;
successively performing the convolution operations of 4 convolutional layers and local splicing on the 128-channel fifth feature map to obtain a 48-channel sixth feature map;
performing a convolution operation on the 48-channel sixth feature map to obtain a 16-channel seventh feature map;
performing a convolution operation on the 16-channel seventh feature map to obtain a 3-channel eighth feature map; and
performing a residual operation on the 3-channel eighth feature map and the picture to be processed to obtain the target picture.
10. The method according to claim 1, characterized in that the detecting whether a picture to be processed is a high-definition, vividly colored picture comprises:
performing feature extraction and activation processing of five convolutional layers on the picture to be processed to obtain 64 feature maps, the convolutional layers using 64 convolution kernels of size 3 × 3 and the activation function used by the activation processing being the rectified linear unit;
performing global average pooling on the 64 feature maps to obtain 64 values;
performing fully connected processing on the 64 values so that the output is two values; and
classifying the two values through a Sigmoid function, the output result being yes or no.
11. A picture processing apparatus, characterized by comprising:
a high-definition detection module, configured to detect whether a picture to be processed is a high-definition, vividly colored picture;
a processed-picture detection module, configured to detect whether the picture to be processed is a picture that has already undergone enhancement processing;
a classification module, configured to, when the picture to be processed is neither a high-definition, vividly colored picture nor a picture that has already undergone enhancement processing, perform content classification on the picture to be processed to obtain the content category of the picture to be processed; and
a color enhancement module, configured to, according to the content category, perform color enhancement processing on the picture to be processed through a color enhancement model corresponding to the content category to obtain a target picture.
12. The apparatus according to claim 11, characterized in that the classification module comprises:
a classification unit, configured to input the picture to be processed into a pre-generated picture classification model for feature extraction and content classification to obtain the content type of the picture to be processed, the picture classification model being a model based on a convolutional neural network.
13. The apparatus according to claim 12, characterized in that the classification unit is specifically configured to:
perform feature extraction on the picture to be processed by the convolutional part of the MobileNet in the picture classification model to obtain feature extraction data;
successively perform global average pooling and full-connection processing on the feature extraction data to obtain category data whose quantity equals the number of preset content types;
normalize the category data by the normalized exponential function (Softmax) to obtain the probability that the picture to be processed belongs to each preset content type;
determine the content type of the picture to be processed according to the probabilities that the picture to be processed belongs to the preset content types.
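The normalization step in the claim above is the standard Softmax. A minimal sketch follows; the four category names and raw scores are hypothetical placeholders, not categories defined by the patent.

```python
import math

def softmax(category_data):
    # Normalized exponential function: turns raw category scores into
    # probabilities that sum to 1.
    m = max(category_data)                      # subtract max for stability
    exps = [math.exp(x - m) for x in category_data]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for four preset content types (placeholder names).
categories = ["person", "landscape", "animal", "other"]
scores = [2.0, 0.5, 1.0, 0.1]
probs = softmax(scores)
# The content type is the category with the highest probability.
content_type = categories[probs.index(max(probs))]
```

Subtracting the maximum score before exponentiation leaves the probabilities unchanged while avoiding overflow for large scores.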
14. The apparatus according to claim 11, characterized by further comprising:
a first denoising module, configured to perform, before the color enhancement processing is performed on the picture to be processed by the color enhancement model corresponding to the content type, denoising on the picture to be processed by a denoising model corresponding to the content type according to the content type to obtain a denoised picture;
the color enhancement module being specifically configured to:
perform, according to the content type, color enhancement processing on the denoised picture by the color enhancement model corresponding to the content type to obtain the target picture.
15. The apparatus according to claim 14, characterized in that the first denoising module is specifically configured to:
perform, according to the content type, the feature extraction and activation processing of six convolutional layers on the picture to be processed by the corresponding denoising model to obtain 64 feature maps, wherein the six convolutional layers use 64 convolution kernels of size 3 × 3 and the activation function used in the activation processing is the rectified linear unit (ReLU) function;
perform feature extraction on the 64 feature maps by three convolution kernels of size 3 × 3 to obtain the denoised picture.
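The building block of the denoising model above is a 3 × 3 convolution followed by a ReLU activation. A pure-Python sketch of one such kernel on one channel follows; the 4×4 image and the averaging kernel are illustrative values only (the real model applies 64 learned kernels per layer, across all channels).

```python
def conv3x3(image, kernel):
    # 'Valid' 3x3 convolution on a single-channel image (list of lists):
    # a 4x4 input therefore yields a 2x2 output.
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0.0
            for di in range(3):
                for dj in range(3):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    # Rectified linear unit: the activation applied after each convolution.
    return [[max(0.0, v) for v in row] for row in feature_map]

# Toy 4x4 single-channel image and a 3x3 averaging kernel.
image = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
kernel = [[1 / 9.0] * 3 for _ in range(3)]
activated = relu(conv3x3(image, kernel))   # each output is a local 3x3 mean
```

In practice the convolutional layers would also pad the input so the feature maps keep the picture's spatial size; padding is omitted here for brevity.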
16. The apparatus according to claim 14 or 15, characterized by further comprising:
a denoising model training module, configured to train, for a preset category, a convolutional neural network on a preset quantity of picture samples to obtain the denoising model corresponding to the preset category.
17. The apparatus according to claim 11, characterized by further comprising:
a second denoising module, configured to perform, after the color enhancement processing is performed on the picture to be processed by the color enhancement model corresponding to the content type to obtain the target picture, denoising on the target picture by the denoising model corresponding to the content type according to the content type to obtain a denoised target picture.
18. The apparatus according to claim 11, characterized by further comprising:
a degradation module, configured to perform, for a preset category, degradation operations on a preset quantity of good picture samples to obtain a preset quantity of poor picture samples;
a color enhancement model training module, configured to train, according to the preset quantity of good picture samples and poor picture samples corresponding to the preset category, a convolutional neural network based on the Unet structure to obtain the color enhancement model corresponding to the preset category.
19. The apparatus according to claim 11, characterized in that the color enhancement module is specifically configured to:
perform, according to the content type, the convolution operations of five convolutional layers on the picture to be processed by the color enhancement model corresponding to the content type to obtain a 128-channel first feature map, wherein the length of the first feature map is 1/16 of the length of the picture to be processed, the width of the first feature map is 1/16 of the width of the picture to be processed, the length of the feature map output by each convolutional layer is 1/2 of the length of the feature map output by the previous convolutional layer, the width of the feature map output by each convolutional layer is 1/2 of the width of the feature map output by the previous convolutional layer, and the picture to be processed comprises data of the three channels R, G, and B;
perform two convolution operations on the 128-channel first feature map to obtain a 128-channel second feature map, wherein the length of the second feature map is 1/64 of the length of the picture to be processed and the width of the second feature map is 1/64 of the width of the picture to be processed;
perform a full-connection operation on the 128-channel second feature map to obtain 128 values;
perform global splicing on the 128 values to obtain a 128-channel third feature map, wherein the length of the third feature map is 1/16 of the length of the picture to be processed and the width of the third feature map is 1/16 of the width of the picture to be processed;
perform local splicing on the 128-channel third feature map and the 128-channel first feature map to obtain a 256-channel fourth feature map, wherein the length of the fourth feature map is 1/16 of the length of the picture to be processed and the width of the fourth feature map is 1/16 of the width of the picture to be processed;
perform a convolution operation on the 256-channel fourth feature map to obtain a 128-channel fifth feature map;
successively perform the convolution operations of four convolutional layers and local splicing on the 128-channel fifth feature map to obtain a 48-channel sixth feature map;
perform a convolution operation on the 48-channel sixth feature map to obtain a 16-channel seventh feature map;
perform a convolution operation on the 16-channel seventh feature map to obtain a 3-channel eighth feature map;
perform a residual operation on the 3-channel eighth feature map and the picture to be processed to obtain the target picture.
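The shape arithmetic of the claim above suggests an interpretation of its splicing terms: "global splicing" broadcasts each of the 128 fully connected values into a constant spatial map (128 values → a 128-channel map), and "local splicing" concatenates along the channel dimension (128 + 128 → 256 channels). The sketch below illustrates these operations plus the final residual addition with toy shapes; the interpretation and all numbers are assumptions inferred from the claim, not a confirmed implementation.

```python
def global_splice(values, height, width):
    # Assumed "global splicing": each fully connected output value is tiled
    # into a constant height x width map, one channel per value.
    return [[[v] * width for _ in range(height)] for v in values]

def local_splice(maps_a, maps_b):
    # Assumed "local splicing": concatenation along the channel dimension.
    return maps_a + maps_b

def residual_add(maps, image):
    # Residual operation: add the predicted per-pixel correction
    # back onto the input picture, channel by channel.
    return [[[m + p for m, p in zip(mrow, prow)]
             for mrow, prow in zip(channel, plane)]
            for channel, plane in zip(maps, image)]

# Toy shapes: 2 global values tiled to 2x2 maps, spliced with one local map,
# then a 1-channel "correction" added to a 1-channel "picture".
tiled = global_splice([0.5, -0.5], 2, 2)            # 2 constant channels
spliced = local_splice(tiled, [[[1, 2], [3, 4]]])   # 3 channels total
target = residual_add([[[0.1, 0.1], [0.1, 0.1]]], [[[1, 2], [3, 4]]])
```

The residual formulation means the network only has to learn the color correction, not reproduce the whole picture, which is a common design choice in Unet-style enhancement models.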
20. The apparatus according to claim 11, characterized in that the high-definition detection module is specifically configured to:
perform the feature extraction and activation processing of five convolutional layers on the picture to be processed to obtain 64 feature maps, wherein each convolutional layer uses 64 convolution kernels of size 3 × 3 and the activation function used in the activation processing is the rectified linear unit (ReLU);
perform global average pooling on the 64 feature maps to obtain 64 values;
perform full-connection processing on the 64 values so that two values are output;
classify the two values by a Sigmoid function, the output result being yes or no.
21. A server, characterized by comprising: a processor, a memory, and a computer program stored on the memory and runnable on the processor, wherein the image processing method according to any one of claims 1-10 is implemented when the computer program is executed by the processor.
22. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the image processing method according to any one of claims 1-10 is implemented when the computer program is executed by a processor.
CN201811474346.XA 2018-12-04 2018-12-04 A kind of image processing method, device, server and storage medium Pending CN109801224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811474346.XA CN109801224A (en) 2018-12-04 2018-12-04 A kind of image processing method, device, server and storage medium


Publications (1)

Publication Number Publication Date
CN109801224A 2019-05-24

Family

ID=66556430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811474346.XA Pending CN109801224A (en) 2018-12-04 2018-12-04 A kind of image processing method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN109801224A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378852A (en) * 2019-07-11 2019-10-25 北京奇艺世纪科技有限公司 Image enchancing method, device, computer equipment and storage medium
CN111815529A (en) * 2020-06-30 2020-10-23 上海电力大学 Low-quality image classification enhancement method based on model fusion and data enhancement
CN112884662A (en) * 2021-01-05 2021-06-01 杭州国测测绘技术有限公司 Three-dimensional digital map image processing method based on aerial image of aircraft

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060031081A1 (en) * 2004-08-04 2006-02-09 Arne Jon F Method and apparatus for information storage, customization and delivery at a service-delivery site such as a beauty salon
CN106204468A (en) * 2016-06-27 2016-12-07 深圳市未来媒体技术研究院 A kind of image de-noising method based on ReLU convolutional neural networks
CN107730473A (en) * 2017-11-03 2018-02-23 中国矿业大学 A kind of underground coal mine image processing method based on deep neural network
CN107995428A (en) * 2017-12-21 2018-05-04 广东欧珀移动通信有限公司 Image processing method, device and storage medium and mobile terminal
CN108230323A (en) * 2018-01-30 2018-06-29 浙江大学 A kind of Lung neoplasm false positive screening technique based on convolutional neural networks
CN108513672A (en) * 2017-07-27 2018-09-07 深圳市大疆创新科技有限公司 Enhance method, equipment and the storage medium of picture contrast
CN108764347A (en) * 2018-05-30 2018-11-06 大连理工大学 Tellurion National Imagery recognition methods based on convolutional neural networks



Similar Documents

Publication Publication Date Title
CN112232476B (en) Method and device for updating test sample set
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN106156781B (en) Sort convolutional neural networks construction method and its image processing method and device
CN110349136A (en) A kind of tampered image detection method based on deep learning
Yang et al. Single image haze removal via region detection network
CN108961220B (en) Image collaborative saliency detection method based on multilayer convolution feature fusion
CN110148088B (en) Image processing method, image rain removing method, device, terminal and medium
CN105654066A (en) Vehicle identification method and device
CN111160102B (en) Training method of face anti-counterfeiting recognition model, face anti-counterfeiting recognition method and device
CN111257341A (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN111985281B (en) Image generation model generation method and device and image generation method and device
CN109801224A (en) A kind of image processing method, device, server and storage medium
CN109472193A (en) Method for detecting human face and device
CN114359526A (en) Cross-domain image style migration method based on semantic GAN
CN108710893A (en) A kind of digital image cameras source model sorting technique of feature based fusion
CN111582397A (en) CNN-RNN image emotion analysis method based on attention mechanism
CN113011253B (en) Facial expression recognition method, device, equipment and storage medium based on ResNeXt network
CN109598301B (en) Detection area removing method, device, terminal and storage medium
CN108197669A (en) The feature training method and device of convolutional neural networks
CN116188790A (en) Camera shielding detection method and device, storage medium and electronic equipment
CN111046213B (en) Knowledge base construction method based on image recognition
CN111539456B (en) Target identification method and device
CN109086737A (en) Shipping cargo monitoring video frequency identifying method and system based on convolutional neural networks
CN115393698A (en) Digital image tampering detection method based on improved DPN network
CN114241587A (en) Evaluation method and device for human face living body detection confrontation robustness

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190524
