CN107240085A - An image fusion method and system based on a convolutional neural network model - Google Patents


Info

Publication number
CN107240085A
CN107240085A (Application CN201710317578.3A)
Authority
CN
China
Prior art keywords
image information
fused
gradient
style
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710317578.3A
Other languages
Chinese (zh)
Inventor
胡建国
商家煜
黄俊威
李仕仁
梁津铨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Smart City Development Research Institute
Guangzhou Shizhen Information Technology Co Ltd
Original Assignee
Guangzhou Smart City Development Research Institute
Guangzhou Shizhen Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Smart City Development Research Institute and Guangzhou Shizhen Information Technology Co Ltd
Priority to CN201710317578.3A
Publication of CN107240085A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging


Abstract

The invention discloses an image fusion method and system based on a convolutional neural network model. The image fusion method includes: obtaining at least one style image to be fused and at least one content image to be fused, and scaling the style image and the content image to a uniform size; performing initialization fusion on the uniformly sized style image and content image to obtain an initial fused image; computing, in the convolutional neural network model, the loss gradients of the initial fused image with respect to the uniformly sized style image and content image to obtain a total loss gradient; and updating the initial fused image according to the total loss gradient and saving the parameters of the convolutional neural network model. Embodiments of the invention satisfy users' image fusion needs and improve image fusion speed.

Description

An image fusion method and system based on a convolutional neural network model
Technical field
The present invention relates to image processing technology, and more particularly to an image fusion method and system based on a convolutional neural network model.
Background technology
In today's society, as mobile-phone photography has become ever more popular, software for post-processing images has flourished, and demand for the image fusion features such software provides, such as retouching, has grown accordingly. A fast, effective, and engaging image fusion technique has become a goal pursued by this industry as a way to attract more users to its products.
Existing image fusion techniques, however, are far from perfect: algorithm complexity, effectiveness, and running time all remain in urgent need of optimization. The enormous user demand can only be met with richer and more refined technical support, yet current models cannot, in a short time, deliver a good user experience in terms of both the richness of fusion styles and the quality of the fused pictures.
A technique is therefore needed that can satisfy users' picture fusion needs in a short time while offering a large number of fusion styles for users to choose from independently, thereby improving the applicability and reliability of related products and overcoming the problems above.
The content of the invention
The object of the present invention is to overcome the deficiencies of the prior art. The invention provides an image fusion method and system based on a convolutional neural network model; embodiments of the invention satisfy users' image fusion needs and improve image fusion speed.
To solve the above technical problem, an embodiment of the invention provides an image fusion method based on a convolutional neural network model, the image fusion method including:
obtaining at least one style image to be fused and at least one content image to be fused, and scaling the style image and the content image to a uniform size;
performing initialization fusion on the uniformly sized style image and content image to obtain an initial fused image;
computing, in the convolutional neural network model, the loss gradients of the initial fused image with respect to the uniformly sized style image and content image to obtain a total loss gradient;
updating the initial fused image according to the total loss gradient and saving the parameters of the convolutional neural network model.
Preferably, performing initialization fusion on the uniformly sized style image and content image includes:
performing initialization fusion on the style image and the content image using a uniform distribution to obtain an initial fused image;
the uniform distribution being P(X = k) = 1/m for k = 1, 2, ..., m, in which case X is said to follow a discrete uniform distribution; adding 128 to the discrete uniform value gives the pixel values of the initial fused image, the pixel values lying between 0 and 256;
where P denotes the distribution probability and X denotes the pixel values of the style image and the content image, k = 1, 2, ..., m.
Preferably, the convolutional neural network model is a convolutional neural network architecture of 21 network layers, comprising 16 convolutional layers and 5 down-sampling layers.
Preferably, computing in the convolutional neural network model the loss gradients of the initial fused image with respect to the uniformly sized style image and content image includes:
setting the parameters of the convolutional neural network model, the parameters including the influence weight of the style image, the influence weight of the content image, and the number of iterations;
computing the loss gradient between the initial fused image and the uniformly sized style image to obtain a first loss gradient;
computing the loss gradient between the initial fused image and the uniformly sized content image to obtain a second loss gradient;
computing the total loss gradient from the first loss gradient and the second loss gradient.
Preferably, the influence weight of the style image is 0.5, the influence weight of the content image is 0.5, and the number of iterations is 120.
In addition, an embodiment of the invention further provides an image fusion system based on a convolutional neural network model, the image fusion system including:
an information obtaining module, for obtaining at least one style image to be fused and at least one content image to be fused, and scaling the style image and the content image to a uniform size;
a fusion module, for performing initialization fusion on the uniformly sized style image and content image to obtain an initial fused image;
a gradient calculation module, for computing in the convolutional neural network model the loss gradients of the initial fused image with respect to the uniformly sized style image and content image to obtain a total loss gradient;
an information updating module, for updating the initial fused image according to the total loss gradient and saving the parameters of the convolutional neural network model.
Preferably, the fusion module includes:
a uniform fusion unit, for performing initialization fusion on the style image and the content image using a uniform distribution to obtain an initial fused image;
the uniform distribution being P(X = k) = 1/m for k = 1, 2, ..., m, in which case X is said to follow a discrete uniform distribution; adding 128 to the discrete uniform value gives the pixel values of the initial fused image, the pixel values lying between 0 and 256;
where P denotes the distribution probability and X denotes the pixel values of the style image and the content image, k = 1, 2, ..., m.
Preferably, the convolutional neural network model is a convolutional neural network architecture of 21 network layers, comprising 16 convolutional layers and 5 down-sampling layers.
Preferably, the gradient calculation module includes:
a parameter setting unit, for setting the parameters of the convolutional neural network model, the parameters including the influence weight of the style image, the influence weight of the content image, and the number of iterations;
a first gradient computing unit, for computing the loss gradient between the initial fused image and the uniformly sized style image to obtain a first loss gradient;
a second gradient computing unit, for computing the loss gradient between the initial fused image and the uniformly sized content image to obtain a second loss gradient;
a total gradient computing unit, for computing the total loss gradient from the first loss gradient and the second loss gradient.
Preferably, the influence weight of the style image is 0.5, the influence weight of the content image is 0.5, and the number of iterations is 120.
In embodiments of the invention, the user's demand for image fusion styles is satisfied. The model also adapts well: supplying different style images is enough to obtain different modes of image fusion, greatly enriching the user experience. Moreover, because the model is self-learning, computation and running time need only be invested during the model's early learning stage; once the model has finished learning and its parameters have been saved, no extra time is consumed when performing a new image fusion. The efficiency of image fusion is therefore greatly improved over conventional models, meeting users' demand for instant image fusion.
Brief description of the drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the image fusion method based on a convolutional neural network model in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the system structure of the image fusion system based on a convolutional neural network model in an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the invention.
Fig. 1 is a flow diagram of the image fusion method based on a convolutional neural network model in an embodiment of the present invention. As shown in Fig. 1, the image fusion method includes:
S11: obtaining at least one style image to be fused and at least one content image to be fused, and scaling the style image and the content image to a uniform size;
S12: performing initialization fusion on the uniformly sized style image and content image to obtain an initial fused image;
S13: computing, in the convolutional neural network model, the loss gradients of the initial fused image with respect to the uniformly sized style image and content image to obtain a total loss gradient;
S14: updating the initial fused image according to the total loss gradient and saving the parameters of the convolutional neural network model.
S11 is described further:
At least one style image to be fused and at least one content image to be fused are obtained, and the style image and the content image are scaled to a uniform size.
Further, the user selects at least one style image and at least one content image to be fused (several may be selected according to the user's needs; embodiments of the present invention expand only on the single-selection case). Because the style image and the content image obtained may differ in picture size, their sizes need to be unified: an image scaling method is applied to scale the images to a prescribed size. For example, if the prescribed image size is 10 x 5 cm (length x width), the images are scaled to 10 x 5 cm.
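The scaling step above can be sketched as follows. This is a minimal nearest-neighbour resize in NumPy, given only as an illustration under stated assumptions: the patent does not specify a scaling algorithm, and the function name and array shapes are hypothetical.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) image array to (out_h, out_w, C)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows[:, None], cols]

# Scale a style image and a content image to one prescribed uniform size.
style = np.zeros((300, 200, 3), dtype=np.uint8)
content = np.zeros((120, 500, 3), dtype=np.uint8)
style_u, content_u = (resize_nearest(x, 250, 500) for x in (style, content))
```

After this step both images share the same pixel dimensions, which the later per-pixel initialization fusion requires.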
S12 is described further:
Initialization fusion is performed on the uniformly sized style image and content image to obtain an initial fused image.
Further, during initialization fusion, the pixel values of the two images are initialized. Here a uniform distribution is used to perform initialization fusion on the style image and the content image, yielding the initial fused image. The uniform distribution is P(X = k) = 1/m for k = 1, 2, ..., m, in which case X is said to follow a discrete uniform distribution; adding 128 to the discrete uniform value gives the pixel values of the initial fused image, which lie between 0 and 256. Here P denotes the distribution probability and X denotes the pixel values of the style image and the content image, k = 1, 2, ..., m.
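The initialization described here can be sketched as sampling discrete-uniform noise and shifting it by 128. Note the patent leaves m unspecified, so m = 128, the RGB shape, and the function name below are all assumptions.

```python
import numpy as np

def init_fused_image(height, width, m=128, seed=0):
    """Draw each pixel from the discrete uniform distribution
    P(X = k) = 1/m, k = 1..m, then add 128 as the text describes.
    m = 128 is an assumption; the patent does not fix m."""
    rng = np.random.default_rng(seed)
    x = rng.integers(1, m + 1, size=(height, width, 3))  # k in {1, ..., m}
    return x + 128  # shifted values fall inside the stated 0..256 range

fused0 = init_fused_image(4, 4)
```

This noise image is the starting point that the later gradient updates gradually pull toward the style and content targets.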
S13 is described further:
The loss gradients of the initial fused image with respect to the uniformly sized style image and content image are computed in the convolutional neural network model to obtain the total loss gradient.
Further, the convolutional neural network model is a convolutional neural network architecture of 21 network layers, comprising 16 convolutional layers and 5 down-sampling layers. The parameters of the convolutional neural network model are the influence weight of the style image (weight-style), the influence weight of the content image (weight-content), and the number of iterations (num-iterations). In embodiments of the present invention, the optimal parameter setting is weight-style = 0.5, weight-content = 0.5, and num-iterations = 120.
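The stated 21-layer count (16 convolutional plus 5 down-sampling layers) matches the convolutional trunk of a VGG-19-style network; the per-block arrangement sketched below is an assumption consistent with that count, not a layout given by the patent.

```python
# Hypothetical block layout consistent with 16 conv + 5 pooling layers.
BLOCKS = [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]  # (convs per block, channels)

layers = []
for n_conv, channels in BLOCKS:
    layers += [("conv", channels)] * n_conv
    layers.append(("pool", None))  # one down-sampling layer closes each block

n_conv_total = sum(1 for kind, _ in layers if kind == "conv")
n_pool_total = sum(1 for kind, _ in layers if kind == "pool")
```

Counting the entries confirms the architecture size the text describes: 16 convolutional layers, 5 pooling layers, 21 layers in total.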
Parameters are set for the convolutional neural network model, the parameters including the influence weight of the style image, the influence weight of the content image, and the number of iterations. The loss gradient between the initial fused image and the uniformly sized style image is computed to obtain the first loss gradient; the loss gradient between the initial fused image and the uniformly sized content image is computed to obtain the second loss gradient; the total loss gradient is then computed from the first loss gradient and the second loss gradient.
First, according to the user's needs, the user may set the weight-style, weight-content, and num-iterations parameters directly; if they are not set, the parameters considered optimal in the embodiment of the present invention are used.
On top of the filter responses of each convolutional layer of the convolutional neural network model, a style representation is built that computes the correlations between the responses of different filters, where the expectation is taken over the spatial extent of the input image. These feature correlations are given by the Gram matrix G^l in R^(N_l x N_l), where G^l_ij is the inner product between the vectorised feature maps i and j in layer l:
G^l_ij = Σ_k F^l_ik F^l_jk.
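The style representation above reduces to a single matrix product; a minimal sketch in NumPy (function and variable names are assumptions):

```python
import numpy as np

def gram_matrix(F):
    """Style representation of one layer.
    F is the (N_l, M_l) matrix of vectorised feature maps, one row per
    filter; entry G[i, j] is the inner product of feature maps i and j."""
    return F @ F.T

F = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # toy layer: N_l = 2 filters, M_l = 2 positions
G = gram_matrix(F)
```

Because each entry is an inner product of two feature maps, G is symmetric and its diagonal holds each filter's response energy.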
To generate a texture that matches the style of a given image, gradient descent from a white-noise image is used to find another image whose style representation matches that of the original image. This is done by minimising the mean squared distance between the entries of the Gram matrix of the original image and the Gram matrix of the image to be generated. Let a and x be the original image and the generated image, and A^l and G^l their respective style representations in layer l. The contribution of that layer to the total loss is then:
E_l = 1 / (4 N_l^2 M_l^2) Σ_ij (G^l_ij - A^l_ij)^2.
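The per-layer style loss E_l can be sketched directly from the formula above; this is an illustrative implementation under the assumption that features arrive as (N_l, M_l) matrices, with hypothetical names.

```python
import numpy as np

def style_layer_loss(F_gen, F_orig):
    """E_l = 1/(4 N_l^2 M_l^2) * sum_ij (G_ij - A_ij)^2 for one layer,
    where G and A are the Gram matrices of the generated and original
    style features."""
    N, M = F_gen.shape
    G = F_gen @ F_gen.T
    A = F_orig @ F_orig.T
    return float(((G - A) ** 2).sum()) / (4.0 * N**2 * M**2)

loss0 = style_layer_loss(np.eye(2), np.zeros((2, 2)))
```

Identical feature matrices give a loss of exactly zero, which is the fixed point the gradient descent drives toward.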
In general, each layer in the network defines a non-linear filter bank whose complexity increases with the layer's position in the network. Hence a given input image x is encoded in each layer of the convolutional neural network by the filter responses to that image. A layer with N_l distinct filters has N_l feature maps, each of size M_l, where M_l is the height times the width of the feature map; the responses in layer l can therefore be stored in a matrix F^l in R^(N_l x M_l), where F^l_ij is the activation of the i-th filter at position j in layer l. To visualise the image information encoded at the different levels of the hierarchy, gradient descent is performed on a white-noise image to find another image whose feature responses match those of the original image. Let p and x be the original image and the generated image, and P^l and F^l their respective feature representations in layer l. The squared-error loss between the two feature representations is defined as:
L_content(p, x, l) = 1/2 Σ_ij (F^l_ij - P^l_ij)^2.
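The content loss just defined is a plain squared-error term; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def content_loss(F, P):
    """Squared-error loss 1/2 * sum_ij (F_ij - P_ij)^2 between the
    generated image's feature responses F and the original image's
    responses P at the chosen layer."""
    return 0.5 * float(((F - P) ** 2).sum())

l_c = content_loss(np.ones((2, 3)), np.zeros((2, 3)))
```

Unlike the style loss, this term compares raw activations position by position, which is what preserves the content image's spatial layout.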
The derivative of this loss with respect to the activations in layer l can be computed, and standard error backpropagation then gives the gradient with respect to the image x. The initial random mixed image x can therefore be changed until it generates the same response in a given layer of the convolutional neural network as the original image.
The overall style loss, whose gradient forms part of the total loss gradient, is:
L_style(a, x) = Σ_l w_l E_l,
where w_l is the weighting factor of each layer's contribution to the total loss. The derivative of E_l with respect to the activations in layer l can be computed analytically, and standard error backpropagation then readily yields the gradient of E_l with respect to the activations in the lower layers of the network.
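Combining the pieces, the total objective whose gradient drives the update is a weighted sum of the content and style terms. Here alpha = beta = 0.5 follows the influence weights stated in the text, while uniform layer weights w_l and all names are assumptions of this sketch.

```python
def total_loss(l_content, layer_style_losses, alpha=0.5, beta=0.5, w=None):
    """L_total = alpha * L_content + beta * sum_l w_l * E_l.
    alpha and beta mirror the 0.5 influence weights given in the text;
    uniform w_l (each layer contributes equally) is an assumption."""
    if w is None:
        w = [1.0 / len(layer_style_losses)] * len(layer_style_losses)
    l_style = sum(wi * ei for wi, ei in zip(w, layer_style_losses))
    return alpha * l_content + beta * l_style
```

Differentiating this sum with respect to the fused image gives exactly the total loss gradient used in step S14.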
S14 is described further:
The initial fused image is updated according to the total loss gradient, and the parameters of the convolutional neural network model are saved.
Further, using the total loss gradient obtained above, the initial fused image is updated. The model consumes a certain amount of time during training and learning, and different style images must be supplied to obtain different image fusion styles. After the model finishes running, its parameters are saved and the different parameter sets are classified. Afterwards, performing an image fusion only requires the parameters that already exist, with no need to run the learning process again; the model thus greatly reduces the otherwise long running time and meets users' demand for instant image fusion.
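The update in S14 can be sketched as plain gradient descent on the fused image. The step size, the clipping to a displayable pixel range, and grad_fn (standing in for the backpropagated total loss gradient) are all assumptions; only the 120-iteration count comes from the text.

```python
import numpy as np

def fuse(x0, grad_fn, lr=0.5, num_iterations=120):
    """Descend the total-loss gradient for num_iterations steps
    (120 follows the cycle count stated in the text)."""
    x = x0.astype(float)
    for _ in range(num_iterations):
        x = x - lr * grad_fn(x)
        x = np.clip(x, 0, 255)  # keep values in a displayable pixel range
    return x

# Toy gradient that pulls every pixel toward the value 100.
result = fuse(np.zeros((2, 2)), lambda x: x - 100.0)
```

With a real network, grad_fn would be the backpropagated gradient of L_total with respect to the image; the toy gradient here simply demonstrates that the loop converges to the gradient's zero.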
Fig. 2 is a schematic diagram of the system structure of the image fusion system based on a convolutional neural network model in an embodiment of the present invention. As shown in Fig. 2, the image fusion system includes:
an information obtaining module, for obtaining at least one style image to be fused and at least one content image to be fused, and scaling the style image and the content image to a uniform size;
a fusion module, for performing initialization fusion on the uniformly sized style image and content image to obtain an initial fused image;
a gradient calculation module, for computing in the convolutional neural network model the loss gradients of the initial fused image with respect to the uniformly sized style image and content image to obtain a total loss gradient;
an information updating module, for updating the initial fused image according to the total loss gradient and saving the parameters of the convolutional neural network model.
Preferably, the fusion module includes:
a uniform fusion unit, for performing initialization fusion on the style image and the content image using a uniform distribution to obtain an initial fused image;
the uniform distribution being P(X = k) = 1/m for k = 1, 2, ..., m, in which case X is said to follow a discrete uniform distribution; adding 128 to the discrete uniform value gives the pixel values of the initial fused image, the pixel values lying between 0 and 256;
where P denotes the distribution probability and X denotes the pixel values of the style image and the content image, k = 1, 2, ..., m.
Preferably, the convolutional neural network model is a convolutional neural network architecture of 21 network layers, comprising 16 convolutional layers and 5 down-sampling layers.
Preferably, the gradient calculation module includes:
a parameter setting unit, for setting the parameters of the convolutional neural network model, the parameters including the influence weight of the style image, the influence weight of the content image, and the number of iterations;
a first gradient computing unit, for computing the loss gradient between the initial fused image and the uniformly sized style image to obtain a first loss gradient;
a second gradient computing unit, for computing the loss gradient between the initial fused image and the uniformly sized content image to obtain a second loss gradient;
a total gradient computing unit, for computing the total loss gradient from the first loss gradient and the second loss gradient.
Preferably, the influence weight of the style image is 0.5, the influence weight of the content image is 0.5, and the number of iterations is 120.
Specifically, for the working principle of the functional modules of the system of the embodiment of the present invention, reference may be made to the related description of the method embodiment; it is not repeated here.
In embodiments of the invention, the user's demand for image fusion styles is satisfied. The model also adapts well: supplying different style images is enough to obtain different modes of image fusion, greatly enriching the user experience. Moreover, because the model is self-learning, computation and running time need only be invested during the model's early learning stage; once the model has finished learning and its parameters have been saved, no extra time is consumed when performing a new image fusion. The efficiency of image fusion is therefore greatly improved over conventional models, meeting users' demand for instant image fusion.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium can include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
The image fusion method and system based on a convolutional neural network model provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementations and scope of application according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. An image fusion method based on a convolutional neural network model, characterized in that the image fusion method includes:
obtaining at least one style image to be fused and at least one content image to be fused, and scaling the style image and the content image to a uniform size;
performing initialization fusion on the uniformly sized style image and content image to obtain an initial fused image;
computing, in the convolutional neural network model, the loss gradients of the initial fused image with respect to the uniformly sized style image and content image to obtain a total loss gradient;
updating the initial fused image according to the total loss gradient and saving the parameters of the convolutional neural network model.
2. The image fusion method according to claim 1, characterized in that performing initialization fusion on the uniformly sized style image and content image includes:
performing initialization fusion on the style image and the content image using a uniform distribution to obtain an initial fused image;
the uniform distribution being P(X = k) = 1/m for k = 1, 2, ..., m, in which case X is said to follow a discrete uniform distribution; adding 128 to the discrete uniform value gives the pixel values of the initial fused image, the pixel values lying between 0 and 256;
where P denotes the distribution probability and X denotes the pixel values of the style image and the content image, k = 1, 2, ..., m.
3. The image fusion method according to claim 1, characterized in that the convolutional neural network model is a convolutional neural network architecture of 21 network layers, comprising 16 convolutional layers and 5 down-sampling layers.
4. The image fusion method according to claim 1, characterized in that calculating, in the convolutional neural network model, the loss gradients of the initial fused image information with respect to the style image information to be fused and the content image information to be fused of the uniform size comprises:
setting parameters of the convolutional neural network model, the parameters including the influence weight of the style image information to be fused, the influence weight of the content image information to be fused, and the number of iterations;
calculating the loss gradient of the initial fused image information with respect to the style image information to be fused of the uniform size, to obtain a first loss gradient;
calculating the loss gradient of the initial fused image information with respect to the content image information to be fused of the uniform size, to obtain a second loss gradient;
computing the total loss gradient from the first loss gradient and the second loss gradient.
5. The image fusion method according to claim 4, characterized in that the influence weight of the style image information to be fused is 0.5, the influence weight of the content image information to be fused is 0.5, and the number of iterations is 120.
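Claims 4 and 5 describe combining a style loss gradient and a content loss gradient (weights 0.5 each) and iterating 120 times. The sketch below illustrates that update loop under stated assumptions: the gradient callbacks stand in for the CNN's backpropagated gradients, and the plain gradient-descent step with learning rate lr is an assumption, since the claims do not name the optimizer.

```python
def total_loss_gradient(grad_style, grad_content,
                        style_weight=0.5, content_weight=0.5):
    """Combine the first (style) and second (content) loss gradients
    into the total loss gradient using the claimed 0.5/0.5 weights."""
    return style_weight * grad_style + content_weight * grad_content

def fuse(image, style_grad_fn, content_grad_fn, iterations=120, lr=1.0):
    """Update loop per claims 4-5: 120 iterations, each updating the
    fused image with the total loss gradient. style_grad_fn and
    content_grad_fn are assumed interfaces standing in for the CNN's
    backpropagated loss gradients."""
    for _ in range(iterations):
        g = total_loss_gradient(style_grad_fn(image), content_grad_fn(image))
        image = image - lr * g  # gradient-descent step (assumed update rule)
    return image
```

With quadratic stand-in losses (gradient x - target), the equal 0.5/0.5 weighting drives the fused image toward the midpoint of the style and content targets, which illustrates why the weights control the style/content trade-off.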
6. An image fusion system based on a convolutional neural network model, characterized in that the image fusion system comprises:
an information acquisition module, for acquiring at least one style image information to be fused and at least one content image information to be fused, and scaling the style image information to be fused and the content image information to be fused to a uniform size;
a fusion module, for performing initialization fusion processing on the style image information to be fused and the content image information to be fused of the uniform size, to obtain initial fused image information;
a gradient calculation module, for calculating, in a convolutional neural network model, the loss gradients of the initial fused image information with respect to the style image information to be fused and the content image information to be fused of the uniform size, to obtain a total loss gradient;
an information update module, for updating the initial fused image information according to the total loss gradient and saving the parameters of the convolutional neural network model.
7. The image fusion system according to claim 6, characterized in that the fusion module comprises:
a uniform fusion unit, for performing initialization fusion processing on the style image information to be fused and the content image information to be fused using a uniform distribution, to obtain the initial fused image information;
the uniform distribution is P(X = k) = 1/m, in which case X is said to follow a discrete uniform distribution; 128 is added to the discrete uniform distribution values to obtain the pixel values of the initial fused image information, the pixel values lying between 0 and 256;
wherein P denotes the distribution probability, X denotes the pixel values of the style image information to be fused and the content image information to be fused, and k = 1, 2, ..., m.
8. The image fusion system according to claim 6, characterized in that the convolutional neural network model is a convolutional neural network architecture of 21 neural network layers, including 16 convolutional layers and 5 down-sampling layers.
9. The image fusion system according to claim 6, characterized in that the gradient calculation module comprises:
a parameter setting unit, for setting parameters of the convolutional neural network model, the parameters including the influence weight of the style image information to be fused, the influence weight of the content image information to be fused, and the number of iterations;
a first gradient calculation unit, for calculating the loss gradient of the initial fused image information with respect to the style image information to be fused of the uniform size, to obtain a first loss gradient;
a second gradient calculation unit, for calculating the loss gradient of the initial fused image information with respect to the content image information to be fused of the uniform size, to obtain a second loss gradient;
a total gradient calculation unit, for computing the total loss gradient from the first loss gradient and the second loss gradient.
10. The image fusion system according to claim 9, characterized in that the influence weight of the style image information to be fused is 0.5, the influence weight of the content image information to be fused is 0.5, and the number of iterations is 120.
CN201710317578.3A 2017-05-08 2017-05-08 A kind of image interfusion method and system based on convolutional neural networks model Pending CN107240085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710317578.3A CN107240085A (en) 2017-05-08 2017-05-08 A kind of image interfusion method and system based on convolutional neural networks model


Publications (1)

Publication Number Publication Date
CN107240085A true CN107240085A (en) 2017-10-10

Family

ID=59985017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710317578.3A Pending CN107240085A (en) 2017-05-08 2017-05-08 A kind of image interfusion method and system based on convolutional neural networks model

Country Status (1)

Country Link
CN (1) CN107240085A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408595A (en) * 2016-08-31 2017-02-15 上海交通大学 Neural network painting style learning-based image rendering method
CN106548208A (en) * 2016-10-28 2017-03-29 杭州慕锐科技有限公司 A kind of quick, intelligent stylizing method of photograph image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEON A. GATYS 等: "Image Style Transfer Using Convolutional Neural Networks", 《2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11853436B2 (en) 2017-09-25 2023-12-26 International Business Machines Corporation Protecting cognitive systems from model stealing attacks
US11023593B2 (en) 2017-09-25 2021-06-01 International Business Machines Corporation Protecting cognitive systems from model stealing attacks
CN107845072A (en) * 2017-10-13 2018-03-27 深圳市迅雷网络技术有限公司 Image generating method, device, storage medium and terminal device
GB2580579A (en) * 2017-11-01 2020-07-22 Ibm Protecting cognitive systems from gradient based attacks through the use of deceiving gradients
US10657259B2 (en) 2017-11-01 2020-05-19 International Business Machines Corporation Protecting cognitive systems from gradient based attacks through the use of deceiving gradients
WO2019087033A1 (en) * 2017-11-01 2019-05-09 International Business Machines Corporation Protecting cognitive systems from gradient based attacks through the use of deceiving gradients
CN107948529A (en) * 2017-12-28 2018-04-20 北京麒麟合盛网络技术有限公司 Image processing method and device
CN107948529B (en) * 2017-12-28 2020-11-06 麒麟合盛网络技术股份有限公司 Image processing method and device
US10790432B2 (en) 2018-07-27 2020-09-29 International Business Machines Corporation Cryogenic device with multiple transmission lines and microwave attenuators
CN109272024A (en) * 2018-08-29 2019-01-25 昆明理工大学 A kind of image interfusion method based on convolutional neural networks
CN109272024B (en) * 2018-08-29 2021-08-20 昆明理工大学 Image fusion method based on convolutional neural network
CN109325549B (en) * 2018-10-25 2022-03-04 电子科技大学 Face image fusion method
CN109325549A (en) * 2018-10-25 2019-02-12 电子科技大学 A kind of facial image fusion method
US11537849B2 (en) 2019-01-03 2022-12-27 Boe Technology Group Co., Ltd. Computer-implemented method of training convolutional neural network, convolutional neural network, computer-implemented method using convolutional neural network, apparatus for training convolutional neural network, and computer-program product
WO2020140421A1 (en) * 2019-01-03 2020-07-09 Boe Technology Group Co., Ltd. Computer-implemented method of training convolutional neural network, convolutional neural network, computer-implemented method using convolutional neural network, apparatus for training convolutional neural network, and computer-program product
CN111402181A (en) * 2020-03-13 2020-07-10 北京奇艺世纪科技有限公司 Image fusion method and device and computer readable storage medium
US12050993B2 (en) 2020-12-08 2024-07-30 International Business Machines Corporation Dynamic gradient deception against adversarial examples in machine learning models

Similar Documents

Publication Publication Date Title
CN107240085A (en) A kind of image interfusion method and system based on convolutional neural networks model
Wang et al. Multimodal transfer: A hierarchical deep convolutional neural network for fast artistic style transfer
CN111325681A (en) Image style migration method combining meta-learning mechanism and feature fusion
CN109544662B (en) Method and system for coloring cartoon style draft based on SRUnet
CN106447626A (en) Blurred kernel dimension estimation method and system based on deep learning
CN111598968B (en) Image processing method and device, storage medium and electronic equipment
CN106847294A (en) Audio-frequency processing method and device based on artificial intelligence
CN108682017A (en) Super-pixel method for detecting image edge based on Node2Vec algorithms
CN112991502B (en) Model training method, device, equipment and storage medium
CN109376852A (en) Arithmetic unit and operation method
CN109903236A (en) Facial image restorative procedure and device based on VAE-GAN to similar block search
CN111986075A (en) Style migration method for target edge clarification
CN110895795A (en) Improved semantic image inpainting model method
CN111582094B (en) Method for identifying pedestrian by parallel selecting hyper-parameter design multi-branch convolutional neural network
CN113869503B (en) Data processing method and storage medium based on depth matrix decomposition completion
CN112837212B (en) Image arbitrary style migration method based on manifold alignment
Bounareli et al. One-Shot Neural Face Reenactment via Finding Directions in GAN’s Latent Space
Gao et al. Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
CN116484073A (en) Node classification method based on mixed regular graph neural network
CN104504719A (en) Image edge detection method and equipment
CN113343121B (en) Lightweight graph convolution collaborative filtering recommendation method based on multi-granularity popularity characteristics
CN113808275B (en) Single image three-dimensional reconstruction method based on GCN and topology modification
CN115936108A (en) Knowledge distillation-based neural network compression method for multivariate time series prediction graph
CN111429342B (en) Photo style migration method based on style corpus constraint
CN113344771A (en) Multifunctional image style migration method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20171010
