CN111161175A - Method and system for removing image reflection component - Google Patents
- Publication number
- CN111161175A (application CN201911344650.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- processing model
- network
- image processing
- decoding network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a method and a system for removing image reflection components. The method comprises: acquiring a preset image processing model; inputting an image to be processed into the preset image processing model; after a first decoding network, a second decoding network and a third decoding network in the image processing model receive the data processed by an encoding network, respectively generating a predicted transmission layer image, an alpha mixing mask image with the same number of channels as the original image, and a predicted reflection layer image; and, taking the alpha mixing mask image as a bridge, performing alpha mixing on the predicted transmission layer image and the predicted reflection layer image, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to obtain an image with the reflection component removed. The invention uses the alpha mask as a bridge, allowing the transmission layer and the reflection layer that are ultimately required to be combined better during the stage of training the image processing model.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a system for removing image reflection components.
Background
Single-image reflection removal is a significant problem in the field of computer vision. When a scene behind glass is imaged through the glass, the captured image often includes a reflection from the glass surface. However, decomposing a single mixed image containing a reflection into two images (the transmission image behind the glass and the reflection image in front of the glass) is an ill-posed problem: in the absence of image prior information, infinitely many solutions exist in practice. The prior art proposes a variety of image priors to constrain the solution space as close as possible to the ideal result, but most of this prior knowledge is based on low-level pixel detail, which makes it unsuitable for more complex, more general scenes.
Disclosure of Invention
In order to solve the above problems, the invention provides a method and a system for removing image reflection components, which train an image processing model with data from actual application scenarios, improving generality across application scenarios.
In order to achieve the above technical purpose and technical effects, the invention is realized by the following technical scheme:
in a first aspect, the present invention provides a method for removing image reflection components, comprising:
acquiring a preset image processing model; the image processing model comprises an encoding network, a first decoding network, a second decoding network and a third decoding network which are respectively connected with the encoding network;
inputting an image to be processed into the preset image processing model, the encoding network preprocessing the image to be processed and generating vector data; after receiving the vector data output by the encoding network, the first decoding network, the second decoding network and the third decoding network respectively generate a predicted transmission layer image, an alpha mixing mask image with the same number of channels as the original image, and a predicted reflection layer image; taking the alpha mixing mask image as a bridge, the predicted transmission layer image and the predicted reflection layer image are alpha-mixed, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to obtain an image with the reflection component removed.
As a further improvement of the invention, the coding network comprises a plurality of convolutional layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each convolutional layer; the coding network converts the specified information in the received input image into a series of vectors.
As a further improvement of the present invention, the first decoding network, the second decoding network and the third decoding network each comprise a plurality of deconvolution layers connected in series, and a batch normalization layer and an activation layer connected to an output of the deconvolution layers.
As a further improvement of the present invention, the image processing model is constructed by the following steps:
setting a loss function;
establishing an initial image processing model, wherein the initial image processing model comprises a coding network, a first decoding network, a second decoding network and a third decoding network;
acquiring a data set from a practical scene, wherein the data set comprises image triples each consisting of a transmission layer, a reflection layer and a mixed image;
and inputting the data set of the practical scene into an initial image processing model, and training the parameters of the initial image processing model by taking the set loss function as constraint to obtain a final image processing model.
As a further improvement of the present invention, the inputting the data set of the practical scene into an initial image processing model, taking the set loss function as a constraint, and training the parameters of the initial image processing model to obtain a final image processing model specifically includes:
inputting the data set of the practical scene into the initial image processing model;
the coding network generates a series of vectors based on the received practical-scene data;
the first decoding network, the second decoding network and the third decoding network receive the series of vectors output by the encoding network and respectively generate a predicted transmission layer image, an alpha mask image with the same number of channels as the original image, and a predicted reflection layer image;
performing alpha mixing on the predicted transmission layer and the predicted reflection layer, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to generate a mixed image; comparing the mixed image with the original image to calculate the loss function; comparing the calculated loss value with the preset loss value; and updating the network weights of the image processing model according to a set update rule;
and repeating the above process to obtain the final image processing model.
In a second aspect, the present invention provides a system for removing image reflection components, said system comprising:
the acquisition unit is used for acquiring a preset image processing model; the image processing model comprises an encoding network, a first decoding network, a second decoding network and a third decoding network which are respectively connected with the encoding network;
the processing unit is used for inputting an image to be processed into the preset image processing model, the encoding network preprocessing the image to be processed and generating vector data; after receiving the vector data output by the encoding network, the first decoding network, the second decoding network and the third decoding network respectively generate a predicted transmission layer image, an alpha mixing mask image with the same number of channels as the original image, and a predicted reflection layer image; taking the alpha mixing mask image as a bridge, the predicted transmission layer image and the predicted reflection layer image are alpha-mixed, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to obtain an image with the reflection component removed.
As a further improvement of the present invention, the coding network includes a plurality of convolutional layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each convolutional layer, and the coding network converts the specified information in the received input image into a series of vectors; the first decoding network, the second decoding network and the third decoding network each comprise a plurality of deconvolution layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each deconvolution layer.
As a further improvement of the present invention, the image processing model is constructed by the following steps:
setting a loss function;
establishing an initial image processing model, wherein the initial image processing model comprises a coding network, a first decoding network, a second decoding network and a third decoding network;
acquiring a data set from a practical scene, wherein the data set comprises image triples each consisting of a transmission layer, a reflection layer and a mixed image;
and inputting the data set of the practical scene into an initial image processing model, and training the parameters of the initial image processing model by taking the set loss function as constraint to obtain a final image processing model.
As a further improvement of the present invention, the inputting the data set of the practical scene into an initial image processing model, taking the set loss function as a constraint, and training the parameters of the initial image processing model to obtain a final image processing model specifically includes:
inputting the data set of the practical scene into the initial image processing model;
the coding network generates a series of vectors based on the received practical-scene data;
the first decoding network, the second decoding network and the third decoding network receive the series of vectors output by the encoding network and respectively generate a predicted transmission layer image, an alpha mask image with the same number of channels as the original image, and a predicted reflection layer image;
performing alpha mixing on the predicted transmission layer and the predicted reflection layer, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to generate a mixed image; comparing the mixed image with the original image to calculate the loss function; comparing the calculated loss value with the preset loss value; and updating the network weights of the image processing model according to a set update rule;
and repeating the above process to obtain the final image processing model.
Compared with the prior art, the invention has the beneficial effects that:
(1) Compared with traditional methods, the invention trains the image processing model with data from actual application scenarios, improving the generality of the algorithm across application scenarios.
(2) Compared with traditional methods that use low-level image information, the invention applies higher-level image information and achieves better recovery results.
(3) The invention uses the alpha mask as a bridge, allowing the transmission layer and the reflection layer that are ultimately required to be combined better during the stage of training the image processing model.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the present disclosure taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a method for removing image reflection components according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
Example 1
An embodiment of the present invention provides a method for removing image reflection components, as shown in fig. 1, including the following steps:
(1) acquiring a preset image processing model; the image processing model comprises an encoding network and a first decoding network, a second decoding network and a third decoding network each connected to the encoding network; the first decoding network, the second decoding network and the third decoding network apply a series of deconvolution operations and activations to the series of vectors extracted by the encoding network in the previous step, inferring the RGB value of each pixel point, so as to recover the transmission layer image to be generated, an alpha mixing mask image with the same number of channels as the original image, and the predicted reflection layer image;
(2) inputting an image to be processed into the preset image processing model, the encoding network preprocessing the image to be processed and generating vector data; after receiving the vector data output by the encoding network, the first decoding network, the second decoding network and the third decoding network respectively generate a predicted transmission layer image, an alpha mixing mask image with the same number of channels as the original image, and a predicted reflection layer image. Taking the alpha mixing mask image as a bridge, the predicted transmission layer image and the predicted reflection layer image are alpha-mixed, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to obtain an image with the reflection component removed. That is, with the alpha mixing mask image as the bridge, the RGB values of the mixed image are obtained by alpha-mixing the RGB values of the generated transmission layer image and reflection layer image at each pixel point, using the value of that point in the alpha mixing mask image as the mixing alpha parameter; traversing all pixel points according to this rule yields the predicted mixed image.
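The per-pixel alpha mixing described in step (2) can be sketched as follows. This is a minimal illustration in NumPy, not the patented implementation; the array names (`transmission`, `reflection`, `alpha_mask`) are hypothetical, and values are assumed to lie in [0, 1].

```python
import numpy as np

def alpha_mix(transmission, reflection, alpha_mask):
    """Blend a predicted transmission layer and reflection layer with a
    per-pixel alpha mask: mixed = alpha * T + (1 - alpha) * R.
    All three arrays share the same (H, W, C) shape."""
    return alpha_mask * transmission + (1.0 - alpha_mask) * reflection

# Tiny 1x2 RGB example: alpha = 1 keeps the transmission pixel,
# alpha = 0.25 takes 25% transmission and 75% reflection.
T = np.array([[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]])
R = np.array([[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])
A = np.array([[[1.0, 1.0, 1.0], [0.25, 0.25, 0.25]]])
mixed = alpha_mix(T, R, A)
```

Because the mask has the same number of channels as the image, each color channel of each pixel can receive its own blending weight.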
In a specific implementation manner of the embodiment of the present invention, the coding network includes a plurality of convolutional layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each convolutional layer, and the coding network converts the specified information in the received input image into a series of vectors.
In a specific implementation manner of the embodiment of the present invention, each of the first decoding network, the second decoding network, and the third decoding network includes a plurality of deconvolution layers connected in series, and a batch normalization layer and an activation layer connected to an output end of the deconvolution layer.
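The batch normalization and activation stages that follow each convolution (or deconvolution) layer can be sketched with a minimal NumPy forward pass. The choice of ReLU as the activation is an assumption; the patent does not name the activation function, and the `gamma`/`beta` parameters stand in for the learnable scale and shift of batch normalization.

```python
import numpy as np

def batch_norm_relu(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch-normalize feature maps per channel (statistics taken over the
    batch and spatial axes), apply the learnable scale/shift, then a ReLU
    activation -- the two layers that follow each (de)convolution here."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return np.maximum(gamma * x_hat + beta, 0.0)  # ReLU zeroes negatives

features = np.random.default_rng(2).normal(size=(2, 4, 4, 3))  # fake conv output
out = batch_norm_relu(features)
```

In a real network this operation is applied after every convolutional layer of the encoder and every deconvolution layer of the three decoders.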
In a specific implementation manner of the embodiment of the present invention, the image processing model is constructed by the following steps:
setting a loss function that measures the sum of the pixel-wise differences between the predicted transmission layer, reflection layer and mixed image and their respective real (ground-truth) images;
establishing an initial image processing model, wherein the initial image processing model comprises a coding network, a first decoding network, a second decoding network and a third decoding network;
acquiring a data set from a practical scene, wherein the data set comprises image triples each consisting of a transmission layer, a reflection layer and a mixed image, i.e., the real mixed image in FIG. 1;
and inputting the data set of the practical scene into an initial image processing model, and training the parameters of the initial image processing model by taking the set loss function as constraint to obtain a final image processing model.
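The loss described above — a sum of pixel-wise differences between each predicted image and its ground truth — might be sketched as follows. The L1 distance is an illustrative assumption; the patent does not fix the norm, and the function name is hypothetical.

```python
import numpy as np

def reconstruction_loss(pred_T, pred_R, pred_mix, true_T, true_R, true_mix):
    """Sum of per-pixel differences between each predicted image
    (transmission, reflection, mixed) and its ground truth.
    The L1 norm used here is one illustrative choice."""
    return (np.abs(pred_T - true_T).mean()
            + np.abs(pred_R - true_R).mean()
            + np.abs(pred_mix - true_mix).mean())

# Perfect predictions give zero loss.
img = np.ones((4, 4, 3))
loss = reconstruction_loss(img, img, img, img, img, img)
```

The three terms correspond to the three decoders' outputs, so the constraint acts on the transmission layer, the reflection layer, and the reconstructed mixture simultaneously.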
In a specific implementation manner of the embodiment of the present invention, the inputting of the data set of the practical scene into an initial image processing model and training of the parameters of the initial image processing model with the set loss function as a constraint to obtain a final image processing model specifically includes:
inputting the data set of the practical scene into the initial image processing model;
the coding network generates a series of vectors based on the received practical-scene data;
the first decoding network, the second decoding network and the third decoding network receive the series of vectors output by the encoding network and respectively generate a predicted transmission layer image, an alpha mask image with the same number of channels as the original image, and a predicted reflection layer image;
performing alpha mixing on the predicted transmission layer and the predicted reflection layer, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to generate a mixed image; comparing the mixed image with the original image to calculate the loss function; comparing the calculated loss value with the preset loss value; and updating the network weights of the image processing model according to a set update rule;
and repeating the above process to obtain the final image processing model.
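The blend-compare-update loop above can be illustrated with a deliberately simplified gradient-descent toy that optimizes the transmission and reflection estimates directly against an observed mixed image, rather than through the encoder-decoder networks. The fixed alpha mask, squared-error loss, learning rate, and iteration count are all illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((8, 8, 3))        # observed mixed image (synthetic toy data)
alpha = np.full_like(I, 0.7)     # fixed alpha mask for this toy
T = np.zeros_like(I)             # estimate of the transmission layer
R = np.zeros_like(I)             # estimate of the reflection layer
lr = 0.5

def forward(T, R):
    """Alpha-mix the current estimates and measure the squared error."""
    mixed = alpha * T + (1.0 - alpha) * R
    return mixed, ((mixed - I) ** 2).mean()

_, initial_loss = forward(T, R)
for _ in range(200):
    mixed, _ = forward(T, R)
    grad = 2.0 * (mixed - I)       # per-pixel gradient of the squared error
    T -= lr * grad * alpha         # chain rule through the alpha mix
    R -= lr * grad * (1.0 - alpha)
_, final_loss = forward(T, R)
```

In the actual model the three decoders produce T, R and the alpha mask from the encoder's vectors, and backpropagation updates the network weights rather than the images themselves; the loop structure is the same.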
Example 2
Based on the same inventive concept as embodiment 1, an embodiment of the present invention provides a system for removing an image reflection component, the system including:
the acquisition unit is used for acquiring a preset image processing model; the image processing model comprises an encoding network, a first decoding network, a second decoding network and a third decoding network which are respectively connected with the encoding network;
the processing unit is used for inputting an image to be processed into the preset image processing model, the encoding network preprocessing the image to be processed and generating vector data; after receiving the vector data output by the encoding network, the first decoding network, the second decoding network and the third decoding network respectively generate a predicted transmission layer image, an alpha mixing mask image with the same number of channels as the original image, and a predicted reflection layer image; taking the alpha mixing mask image as a bridge, the predicted transmission layer image and the predicted reflection layer image are alpha-mixed, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to obtain an image with the reflection component removed.
In a specific implementation manner of the embodiment of the present invention, the coding network includes a plurality of convolutional layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each convolutional layer, and the coding network converts the specified information in the received input image into a series of vectors; the first decoding network, the second decoding network and the third decoding network each comprise a plurality of deconvolution layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each deconvolution layer.
In a specific implementation manner of the embodiment of the present invention, the image processing model is constructed by the following steps:
setting a loss function;
establishing an initial image processing model, wherein the initial image processing model comprises a coding network, a first decoding network, a second decoding network and a third decoding network;
acquiring a data set from a practical scene, wherein the data set comprises image triples each consisting of a transmission layer, a reflection layer and a mixed image;
and inputting the data set of the practical scene into an initial image processing model, and training the parameters of the initial image processing model by taking the set loss function as constraint to obtain a final image processing model.
In a specific implementation manner of the embodiment of the present invention, the inputting the data set of the practical scene into an initial image processing model, taking the set loss function as a constraint, training parameters of the initial image processing model, and obtaining a final image processing model specifically includes:
inputting the data set of the practical scene into the initial image processing model;
the coding network generates a series of vectors based on the received practical-scene data;
the first decoding network, the second decoding network and the third decoding network receive the series of vectors output by the encoding network and respectively generate a predicted transmission layer image, an alpha mask image with the same number of channels as the original image, and a predicted reflection layer image;
performing alpha mixing on the predicted transmission layer and the predicted reflection layer, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to generate a mixed image; comparing the mixed image with the original image to calculate the loss function; comparing the calculated loss value with the preset loss value; and updating the network weights of the image processing model according to a set update rule;
and repeating the above process to obtain the final image processing model.
In summary:
the method comprises the steps of training an encoder, a first decoder, a second decoder and a third decoder by using an alpha mask as a bridge, respectively generating a predicted transmission layer image, an alpha mask image with the same number as an original image channel and a predicted reflection layer image, mixing the generated transmission layer and reflection layer according to the alpha mask to generate a mixed image, comparing the mixed image with an original mixed image, calculating a loss function and updating a network weight. Because the introduced alpha mask can better simulate the forming process of a mixed image under the real condition than simple Gaussian mixture, the transmission layer generated by the neural network of the bridge can be better close to the real transmission layer.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.
Claims (9)
1. A method for removing image reflection components, comprising:
acquiring a preset image processing model; the image processing model comprises an encoding network, a first decoding network, a second decoding network and a third decoding network which are respectively connected with the encoding network;
inputting an image to be processed into the preset image processing model, the encoding network preprocessing the image to be processed and generating vector data; after receiving the vector data output by the encoding network, the first decoding network, the second decoding network and the third decoding network respectively generate a predicted transmission layer image, an alpha mixing mask image with the same number of channels as the original image, and a predicted reflection layer image; taking the alpha mixing mask image as a bridge, the predicted transmission layer image and the predicted reflection layer image are alpha-mixed, using the alpha parameters at the corresponding pixel points of the alpha mixing mask image as the mixing parameters, to obtain an image with the reflection component removed.
2. A method for removing a reflection component of an image according to claim 1, wherein: the encoding network comprises a plurality of convolutional layers connected in series, together with a batch normalization layer and an activation layer connected to the output of each convolutional layer, and converts the specified information in the received input image into a series of vectors.
3. A method for removing a reflection component of an image according to claim 2, wherein: the first decoding network, the second decoding network and the third decoding network respectively comprise a plurality of deconvolution layers connected in series, and a batch normalization layer and an activation layer which are connected with the output ends of the deconvolution layers.
4. A method for removing a reflection component of an image according to claim 1, wherein: the image processing model is constructed by the following steps:
setting a loss function;
establishing an initial image processing model, wherein the initial image processing model comprises a coding network, a first decoding network, a second decoding network and a third decoding network;
acquiring a data set from a practical scene, wherein the data set comprises image triples each consisting of a transmission layer, a reflection layer and a mixed image;
and inputting the data set of the practical scene into an initial image processing model, and training the parameters of the initial image processing model by taking the set loss function as constraint to obtain a final image processing model.
5. A method for removing a reflection component of an image according to claim 4, wherein: inputting the real-scene dataset into the initial image processing model and training the parameters of the initial image processing model with the set loss function as a constraint to obtain the final image processing model specifically comprises:
inputting the real-scene dataset into the initial image processing model;
the encoding network generating a series of vectors based on the received real-scene data;
the first decoding network, the second decoding network and the third decoding network receiving the series of vectors output by the encoding network and respectively generating a predicted transmission layer image, an alpha mask image having the same number of channels as the original image, and a predicted reflection layer image;
performing alpha blending on the predicted transmission layer and the predicted reflection layer, with the alpha values at corresponding pixels of the alpha blending mask image as the blending parameters, to generate a blended image; comparing the blended image with the original image to compute the loss; comparing the computed loss with the preset loss value; and updating the network weights of the image processing model according to a set update rule;
and iterating the above process to obtain the final image processing model.
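One training iteration of claim 5 can be sketched in PyTorch. The patent leaves the exact loss terms, optimizer, and network sizes unspecified; the single-convolution networks, MSE reconstruction loss, and Adam optimizer below are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the encoder and the three decoders.
enc = nn.Conv2d(3, 8, 3, padding=1)
dec_T = nn.Conv2d(8, 3, 3, padding=1)   # predicted transmission layer
dec_A = nn.Conv2d(8, 3, 3, padding=1)   # alpha blending mask
dec_R = nn.Conv2d(8, 3, 3, padding=1)   # predicted reflection layer
params = [p for m in (enc, dec_T, dec_A, dec_R) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

mixed = torch.rand(2, 3, 32, 32)        # mixed images from the triples

z = enc(mixed)                          # encoder output ("series of vectors")
T, R = dec_T(z), dec_R(z)
alpha = torch.sigmoid(dec_A(z))         # keep blending weights in [0, 1]

# Re-blend the predictions and compare against the original mixed image.
recon = alpha * T + (1.0 - alpha) * R
loss = nn.functional.mse_loss(recon, mixed)

opt.zero_grad()
loss.backward()
opt.step()                              # update the network weights
```

Iterating this step over the dataset until the loss criterion is met yields the final image processing model; only the blended reconstruction is supervised here, while the transmission and reflection triples from the dataset would supply additional loss terms in practice.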
6. A system for removing image reflectance components, the system comprising:
the acquisition unit is used for acquiring a preset image processing model; the image processing model comprises an encoding network, a first decoding network, a second decoding network and a third decoding network which are respectively connected with the encoding network;
the processing unit is used for inputting an image to be processed into the preset image processing model; the encoding network preprocesses the image to be processed and generates vector data; after receiving the vector data output by the encoding network, the first decoding network, the second decoding network and the third decoding network respectively generate a predicted transmission layer image, an alpha blending mask image having the same number of channels as the original image, and a predicted reflection layer image; and, using the alpha blending mask image as a bridge, alpha blending is performed on the predicted transmission layer image and the predicted reflection layer image, with the alpha values at corresponding pixels of the alpha blending mask image as the blending parameters, to obtain an image with the reflection component removed.
7. A system for removing image reflection components according to claim 6, wherein: the encoding network comprises a plurality of convolutional layers connected in series, and a batch normalization layer and an activation layer connected to the output end of each convolutional layer, and converts specified information in the received input image into a series of vectors; the first decoding network, the second decoding network and the third decoding network each comprise a plurality of deconvolution layers connected in series, and a batch normalization layer and an activation layer connected to the output end of each deconvolution layer.
8. A system for removing image reflection components according to claim 6, wherein: the image processing model is constructed by the following steps:
setting a loss function;
establishing an initial image processing model, wherein the initial image processing model comprises a coding network, a first decoding network, a second decoding network and a third decoding network;
acquiring a real-scene dataset, wherein the real-scene dataset comprises image triples of a transmission layer, a reflection layer and a mixed image;
and inputting the real-scene dataset into the initial image processing model, and training the parameters of the initial image processing model with the set loss function as a constraint, to obtain the final image processing model.
9. A system for removing image reflection components according to claim 8, wherein: inputting the real-scene dataset into the initial image processing model and training the parameters of the initial image processing model with the set loss function as a constraint to obtain the final image processing model specifically comprises: inputting the real-scene dataset into the initial image processing model;
the encoding network generating a series of vectors based on the received real-scene data;
the first decoding network, the second decoding network and the third decoding network receiving the series of vectors output by the encoding network and respectively generating a predicted transmission layer image, an alpha mask image having the same number of channels as the original image, and a predicted reflection layer image; performing alpha blending on the predicted transmission layer and the predicted reflection layer, with the alpha values at corresponding pixels of the alpha blending mask image as the blending parameters, to generate a blended image; comparing the blended image with the original image to compute the loss; comparing the computed loss with the preset loss value; and updating the network weights of the image processing model according to a set update rule;
and iterating the above process to obtain the final image processing model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911344650.7A CN111161175A (en) | 2019-12-24 | 2019-12-24 | Method and system for removing image reflection component |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911344650.7A CN111161175A (en) | 2019-12-24 | 2019-12-24 | Method and system for removing image reflection component |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111161175A true CN111161175A (en) | 2020-05-15 |
Family
ID=70557865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911344650.7A Pending CN111161175A (en) | 2019-12-24 | 2019-12-24 | Method and system for removing image reflection component |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161175A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112802076A (en) * | 2021-03-23 | 2021-05-14 | 苏州科达科技股份有限公司 | Reflection image generation model and training method of reflection removal model |
CN112861997A (en) * | 2021-03-15 | 2021-05-28 | 北京小米移动软件有限公司 | Information processing method and device, storage medium and electronic equipment |
WO2022222080A1 (en) * | 2021-04-21 | 2022-10-27 | 浙江大学 | Single-image reflecting layer removing method based on position perception |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109345487A (en) * | 2018-10-25 | 2019-02-15 | 厦门美图之家科技有限公司 | A kind of image enchancing method and calculate equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113658051B (en) | Image defogging method and system based on cyclic generation countermeasure network | |
CN111161175A (en) | Method and system for removing image reflection component | |
CN110633748B (en) | Robust automatic face fusion method | |
CN110310229A (en) | Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing | |
CN111127331B (en) | Image denoising method based on pixel-level global noise estimation coding and decoding network | |
CN110189260B (en) | Image noise reduction method based on multi-scale parallel gated neural network | |
US20220156987A1 (en) | Adaptive convolutions in neural networks | |
CN115049556A (en) | StyleGAN-based face image restoration method | |
CN112233012A (en) | Face generation system and method | |
CN116701692B (en) | Image generation method, device, equipment and medium | |
CN112949553A (en) | Face image restoration method based on self-attention cascade generation countermeasure network | |
CN114299573A (en) | Video processing method and device, electronic equipment and storage medium | |
CN113160286A (en) | Near-infrared and visible light image fusion method based on convolutional neural network | |
CN112669431B (en) | Image processing method, apparatus, device, storage medium, and program product | |
CN111862343B (en) | Three-dimensional reconstruction method, device, equipment and computer readable storage medium | |
CN111260706B (en) | Dense depth map calculation method based on monocular camera | |
CN117408910A (en) | Training method of three-dimensional model completion network, three-dimensional model completion method and device | |
Arora et al. | Augmentation of Images through DCGANs | |
CN112084855A (en) | Outlier elimination method for video stream based on improved RANSAC method | |
CN104320659A (en) | Background modeling method, device and apparatus | |
CN116264606A (en) | Method, apparatus and computer program product for processing video | |
CN114004974A (en) | Method and device for optimizing images shot in low-light environment | |
CN113869141A (en) | Feature extraction method and device, encoder and communication system | |
Vasiliu et al. | Coherent rendering of virtual smile previews with fast neural style transfer | |
CN116342800B (en) | Semantic three-dimensional reconstruction method and system for multi-mode pose optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200515 |