CN110443763B - Convolutional neural network-based image shadow removing method - Google Patents

Convolutional neural network-based image shadow removing method

Info

Publication number
CN110443763B
CN110443763B (application CN201910705551.0A; publication CN110443763A)
Authority
CN
China
Prior art keywords
shadow
image
neural network
training
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910705551.0A
Other languages
Chinese (zh)
Other versions
CN110443763A (en)
Inventor
范辉
韩梦
李晋江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Technology and Business University
Original Assignee
Shandong Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Technology and Business University filed Critical Shandong Technology and Business University
Priority to CN201910705551.0A priority Critical patent/CN110443763B/en
Publication of CN110443763A publication Critical patent/CN110443763A/en
Application granted granted Critical
Publication of CN110443763B publication Critical patent/CN110443763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention discloses an image shadow removing method based on a convolutional neural network, which comprises the following steps: collecting shadow images and shadowless images in a real scene to form an image shadow removal data set; preprocessing the image shadow removal dataset; constructing an end-to-end convolutional neural network structure; randomly selecting shadow images in the data set to form a training set required by network training; training the end-to-end convolutional neural network in a diversified manner with the training set; forming a test set from randomly selected real and shadow images in the data set; and performing shadow removal on the test set with the trained end-to-end convolutional neural network to obtain a high-quality shadow-free image. The method of the invention removes image shadows with a fully automatic end-to-end approach, obtaining a clearer shadow-free image consistent with the color and texture of the original image and handling details well.

Description

Convolutional neural network-based image shadow removing method
Technical Field
The invention belongs to the technical field of image processing, relates to a shadow removing method, and in particular relates to an image shadow removing method based on a convolutional neural network.
Background
When an image is acquired as multimedia information, it is susceptible to various conditions and is therefore generally degraded. Shadow is one such phenomenon: the quality degradation it causes under certain imaging conditions can make the information reflected by a target incomplete or disturbed, reduce the interpretation precision of the image, and seriously affect quantitative analysis and applications of the image.
Shadow detection and removal is one of the most fundamental but challenging problems in the fields of computer graphics and computer vision, and shadow removal of images is an important preprocessing stage for computer vision and image enhancement. The existence of shadows not only affects the visual interpretation effect of the image, but also affects the analysis and subsequent processing results of the image. Therefore, it is necessary to perform shadow detection and analysis on the image, thereby eliminating or weakening the influence of image shadows and increasing the visual reality and physical reality of image editing and processing.
Shadows are generated by different lighting conditions; a shadow image can be represented as the pixel-level product of the shadow-free image and a shadow ratio (the shadow mask factor), as shown in the following formula:
I_s(q) = S(q) · I_ns(q)    (1)
where I_s denotes the shadow image, I_ns the shadow-free image, S the shadow mask factor, and q a pixel.
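As an illustration of formula (1), the following minimal sketch (assuming NumPy arrays normalized to [0, 1]; the function names and the single-channel mask convention are assumptions made here, not part of the patent text) composes a shadow image from a shadow-free image and a pixel-level shadow mask factor, and inverts the relation to recover a shadow-free image when the mask is known:

```python
import numpy as np

def compose_shadow(shadow_free: np.ndarray, shadow_mask: np.ndarray) -> np.ndarray:
    """Pixel-wise product of formula (1): shadow image = mask factor * shadow-free image.

    shadow_free: H x W x 3 array in [0, 1]; shadow_mask: H x W (or H x W x 3) array in [0, 1],
    where values below 1 darken the corresponding pixels.
    """
    if shadow_mask.ndim == 2:                      # broadcast a single-channel mask over RGB
        shadow_mask = shadow_mask[..., None]
    return np.clip(shadow_free * shadow_mask, 0.0, 1.0)

def recover_shadow_free(shadow_img: np.ndarray, shadow_mask: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Inverting formula (1): dividing by the (predicted) mask factor restores the shadow-free image."""
    if shadow_mask.ndim == 2:
        shadow_mask = shadow_mask[..., None]
    return np.clip(shadow_img / (shadow_mask + eps), 0.0, 1.0)
```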
Shadow removal aims to generate a high-quality shadow-free image from a single shadow image, so that the texture, color, etc. of the original shadow region are restored to a state consistent with the shadow-free image. Existing methods for removing shadow areas generally include two steps: shadow detection and shadow removal. Shadow areas are first located by shadow detection or marked manually by the user, and a model is then built to reconstruct those areas, thereby realizing shadow removal.
Shadow detection, however, is itself an extremely challenging task. Conventional physics-based methods can only be applied to high-quality images, while statistical-learning-based methods rely on features manually annotated by the user. With the development of neural networks, convolutional neural networks (CNNs) learn shadow-detection features automatically, overcoming the traditional requirements for high-quality images and hand-labelled features, but such approaches are still limited to small network architectures because of the limited amount of training data.
Likewise, even when the shadow areas are known, removing them remains a challenge. The quality of shadow detection strongly affects the shadow removal result: if detection is poor, the subsequent removal step cannot produce a high-quality shadow-free image.
Statistics-based, interactive shadow removal methods detect shadows through rough manual labelling, trading finer shadows and full autonomy for broader and simpler user input; shadow removal algorithms based on the Poisson equation do not consider the influence of ambient illumination and changes in object material, so texture recovery in the shadow area is poor. Gradient-domain image shadow removal algorithms overcome some shortcomings of the Poisson-equation approach but handle discontinuous or small shadow areas poorly.
In summary, current shadow removal methods either fail to restore the texture of the shadow area effectively or ignore the influence of the environment and object materials, and therefore cannot maintain visual consistency; moreover, most methods are interactive rather than fully automatic, which greatly reduces their practical efficiency.
Disclosure of Invention
The invention aims to obtain high-quality shadowless images and provides an end-to-end deep convolutional neural network for removing shadows from images, which can be used to remove image shadows in intelligent transportation systems, medical imaging, and similar applications.
In order to achieve the above object, the present invention uses the following technical scheme:
The image shadow removing method based on a convolutional neural network includes collecting an image shadow removal data set, preprocessing it, training and learning with a two-layer network structure consisting of a shallow neural network and a deep neural network, and then feeding the original image (a shadow image) through the trained network structure to realize fully automatic shadow removal and finally obtain a high-quality shadow-free image.
The method comprises the following specific steps:
1) Collecting shadow images and shadowless images in a real scene to form an image shadow removal data set;
2) Preprocessing the image shadow removal dataset;
3) Constructing an end-to-end convolutional neural network structure;
4) Randomly selecting shadow images in the data set to form a training set required by network training;
5) Training an end-to-end convolutional neural network in a diversified manner by using a training set;
6) Forming a test set by using the real image and the shadow image in the randomly selected data set;
7) And performing shadow removal by using the trained end-to-end convolutional neural network by using the test set to obtain a high-quality shadow-free image.
In the step 1), a shadow image and a non-shadow image in a real scene are acquired to obtain a data set with image shadow removed:
In order to ensure the diversity of the image shadow removal data set, a fixed camera is used to shoot shadow and shadow-free images cast by different objects under different conditions of illumination intensity, scene, and so on, and the image shadow removal data set is constructed from them. Specifically, several scenes such as grassland, campus and street scenes can be selected; shadow images cast by different objects are shot in different weather at the same moment and at different moments in the same weather, and the corresponding shadow-free image of each original (shadow) image is shot at the same time, forming shadow / shadow-free image pairs and completing the collection of the image shadow removal data set.
In the step 2), preprocessing is performed on the image shadow removal dataset:
2-1) classifying and sorting the acquired image shadow data set according to soft and hard shadows and scene characteristics, forming an image pair by the shadow image and a corresponding non-shadow image, and expanding the data set of the same scene in a cutting and rotating mode;
2-2) Sorting images of different sizes and pixel counts into several pixel levels, where each level is 2^n × 2^n pixels and n may be a positive integer.
The step 3) comprises a two-layer network structure of a shallow neural network and a deep neural network:
3-1) The shallow neural network is used to extract coarse image features and global semantic scene information, obtaining the image shadow mask factor in a fine-to-coarse manner. The shallow neural network is built on the VGG16 network with fine-tuning of the original structure: it is used to obtain the shadow mask factor, all fully connected layers are replaced by convolution layers, the sub-sampling layer is no longer used, and a prediction layer for the shadow mask factor is added. The resulting shallow neural network comprises 16 convolution layers, 5 max-pooling layers and 1 prediction layer.
3-2) A multi-context mechanism in the deep neural network performs local detail correction and works together with the preceding shallow network, further improving the result: the prediction of the overall network structure becomes more accurate and edge handling becomes finer, with the shadow mask factor obtained in a coarse-to-fine manner. To avoid burdening network training, the invention defines a small network structure as the deep neural network, comprising 5 convolution layers, 2 pooling layers and 1 prediction layer.
3-3) To mitigate overfitting, regularization is achieved to some extent by applying dropout after each convolution layer of the network. The activation function used in the present invention is the rectified linear unit (ReLU), which is defined as follows:
f(x) = max(0, x)    (2)
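To make the two-layer structure of 3-1) to 3-3) concrete, the following PyTorch sketch builds a VGG16-style shallow branch with its fully connected layers replaced by convolutions (16 convolution layers, 5 max-pooling layers, 1 prediction layer), a small deep branch (5 convolution layers, 2 pooling layers, 1 prediction layer), ReLU activations and dropout after each convolution, and an end-to-end wrapper that inverts formula (1) with the predicted mask factor. This is an assumption-laden illustration rather than the patent's exact implementation: the class names (ShallowNet, DeepRefineNet, ShadowRemovalNet), channel widths, kernel sizes, the sigmoid output, the way the coarse mask is fed to the deep branch, and the bilinear upsampling back to full resolution are all choices made here.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, k=3, p=1, drop=0.5):
    # Convolution + ReLU + dropout, following 3-3) (dropout after each convolution layer).
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=p), nn.ReLU(inplace=True), nn.Dropout2d(drop))

class ShallowNet(nn.Module):
    """VGG16-style branch: 13 VGG convolutions plus 3 convolutions replacing the fully connected
    layers (16 convolutions in total), 5 max-pooling layers, and a 1-channel prediction layer."""
    def __init__(self):
        super().__init__()
        cfg = [(3, 64), (64, 64), "M", (64, 128), (128, 128), "M",
               (128, 256), (256, 256), (256, 256), "M",
               (256, 512), (512, 512), (512, 512), "M",
               (512, 512), (512, 512), (512, 512), "M"]
        layers = []
        for item in cfg:
            layers.append(nn.MaxPool2d(2) if item == "M" else conv_block(*item))
        # Three convolutions standing in for VGG16's fully connected layers.
        layers += [conv_block(512, 1024, k=3, p=1), conv_block(1024, 1024, k=1, p=0), conv_block(1024, 512, k=1, p=0)]
        self.features = nn.Sequential(*layers)
        self.predict = nn.Conv2d(512, 1, kernel_size=1)          # prediction layer: coarse mask factor
    def forward(self, x):
        h, w = x.shape[2:]
        coarse = torch.sigmoid(self.predict(self.features(x)))
        return nn.functional.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)

class DeepRefineNet(nn.Module):
    """Small branch for local detail correction: 5 convolutions, 2 pooling layers, 1 prediction layer."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(4, 32), conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 64), nn.MaxPool2d(2),
            conv_block(64, 64), conv_block(64, 64))
        self.predict = nn.Conv2d(64, 1, kernel_size=1)
    def forward(self, image, coarse_mask):
        h, w = image.shape[2:]
        x = torch.cat([image, coarse_mask], dim=1)               # image + coarse mask as a 4-channel input
        fine = torch.sigmoid(self.predict(self.body(x)))
        return nn.functional.interpolate(fine, size=(h, w), mode="bilinear", align_corners=False)

class ShadowRemovalNet(nn.Module):
    """End-to-end: predict the shadow mask factor, then invert formula (1) to obtain a shadow-free image."""
    def __init__(self):
        super().__init__()
        self.shallow, self.deep = ShallowNet(), DeepRefineNet()
    def forward(self, shadow_img, eps=1e-6):
        coarse = self.shallow(shadow_img)
        mask = self.deep(shadow_img, coarse)
        shadow_free = torch.clamp(shadow_img / (mask + eps), 0.0, 1.0)
        return shadow_free, mask, coarse
```

With a batch of 3 × 256 × 256 inputs, the wrapper returns the recovered shadow-free image together with the fine and coarse mask predictions that the loss terms of step 5) can be applied to.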
in the step 4), shadow images in the data set are randomly selected to form a training set required by network training;
in the step 5), training the end-to-end convolutional neural network in a diversified manner by using the training set:
5-1) Training the network in diversified modes, such as staged and hierarchical training, rather than in a single mode, finally realizing rapid convergence of the network and effectively preventing overfitting;
5-2) The relation between a shadow image and its shadow mask is given by formula (1); during training, the real shadow mask is computed from a given shadow / shadow-free image pair, and the network then attempts to learn a mapping function establishing the relationship between the shadow image and the shadow mask;
5-3) During training, the network is constrained by the following overall loss; the overall loss function comprises two loss functions and their fusion, as follows:
The first loss, called the prediction loss, is approximated as a loss function using the following formula:
L_pred(θ) = - Σ_q [ S(q) · log Ŝ(q; θ) + (1 - S(q)) · log(1 - Ŝ(q; θ)) ]    (3)
where S(q) denotes the true value at pixel q (1 if q lies in the shadow region, 0 otherwise), and Ŝ(q; θ) denotes the prediction obtained at pixel q by the network with parameters θ.
The second loss, called the composition loss, is the difference between the RGB colors of the real shadow image and the RGB colors of the predicted shadow image synthesized from the shadow-free image and the shadow mask factor output by the prediction layer; it is approximated using the following formula:
L_comp(θ) = (1/N) Σ_{i=1..N} || Î_s^i - I_s^i ||²    (4)
where N denotes the total number of training samples in the batch, Î_s^i = Ŝ^i · I_ns^i denotes the RGB channels of the predicted shadow image obtained from the predicted shadow mask factor, and I_s^i denotes the RGB channels of the real shadow image.
The overall loss applied to the network is a linear fusion of the two losses above, following the formula:
L(θ) = λ · L_pred(θ) + (1 - λ) · L_comp(θ)    (5)
where λ is set to 0.5 here.
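A PyTorch sketch of the overall loss, following the reconstructed formulas (3) to (5) above, is given below. The specific binary cross-entropy and mean-squared-error forms, the tensor names, and the shapes are assumptions made here; note also that the patent describes the ground-truth mask value as binary (1 in shadow regions, 0 elsewhere), while the factor used for composition is a continuous ratio, and the sketch keeps both roles on one tensor for brevity.

```python
import torch
import torch.nn.functional as F

def total_loss(pred_mask, true_mask, shadow_free, shadow_img, lam: float = 0.5):
    """Overall loss of formula (5): linear fusion of prediction loss (3) and composition loss (4).

    pred_mask:  N x 1 x H x W predicted shadow mask factor in (0, 1)
    true_mask:  N x 1 x H x W ground-truth mask computed from the shadow / shadow-free pair
    shadow_free, shadow_img: N x 3 x H x W RGB images in [0, 1]
    """
    # Prediction loss: per-pixel agreement between the predicted and the real shadow mask.
    l_pred = F.binary_cross_entropy(pred_mask, true_mask)
    # Composition loss: the predicted mask re-composes a shadow image via formula (1),
    # which is compared with the real shadow image in RGB space.
    composed = pred_mask * shadow_free
    l_comp = F.mse_loss(composed, shadow_img)
    return lam * l_pred + (1.0 - lam) * l_comp
```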
In the step 6), a test set is formed by utilizing the real image and the shadow image in the randomly selected data set;
in the step 7), shadow removal is performed by using a trained end-to-end convolutional neural network by using a test set, so that a high-quality shadow-free image is obtained.
The invention has the beneficial effects that:
(1) The quality degradation phenomenon caused by shadow is eliminated to a great extent, and a satisfactory shadow removing effect is obtained;
(2) Because a two-layer network is adopted as the overall network structure, the network obtains the overall semantic information of the image in two modes, from fine to coarse and from coarse to fine, so that the prediction of the overall network structure is more accurate, details are handled better, and the quality of the obtained shadowless image is higher;
(3) The invention inputs shadow images and outputs shadow-free images, thereby realizing a full-automatic end-to-end shadow removing method without interactively obtaining or inputting shadow masks through shadow detection.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a diagram of the network architecture of the present invention;
FIG. 3 is an image shadow removal result of the present invention applied in a simple scene;
fig. 4 is an image shadow removal result of the present invention applied to a complex scene.
Detailed Description
The invention will be further described with reference to the drawings and examples.
As shown in fig. 1, the method comprises the following steps:
1) Collecting shadow images and shadowless images in a real scene to obtain the image shadow removal data set, which comprises the following steps:
1-1) Selecting different scenes and different illumination intensities for data collection according to the desired diversity of the data set; specifically, selecting scenes such as grassland, road and campus scenes, and collecting the image shadow removal data set at the same moment in different weather or at different moments in the same weather;
1-2) fixing a camera at a designated position by using a tripod according to a selected scene, and setting parameters such as fixed exposure compensation, focal length and the like, wherein the focal length is 4 mm, and the exposure compensation is 0 step;
1-3) Casting shadows onto a designated area with objects such as a schoolbag, an umbrella or a human body, and shooting the shadow images with the camera triggered by a Bluetooth remote control, thereby obtaining shadow images of many shapes and ensuring the shape diversity of the image shadow removal data set;
1-4) Removing the cast object and, again triggering the camera with the Bluetooth remote control, shooting the background image corresponding to the shadow image, i.e. the shadow-free image, thereby forming the image shadow removal data set.
2) Preprocessing an image shadow removal dataset:
2-1) classifying and sorting the acquired image shadow data set according to soft and hard shadows and scene characteristics, forming an image pair by the shadow image and a corresponding non-shadow image, and expanding the data set of the same scene in a cutting and rotating mode;
2-2) Sorting images of different sizes and pixel counts into several pixel levels, where each level is 2^n × 2^n pixels and n may be a positive integer.
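As a concrete illustration of steps 2-1) and 2-2), the following Python sketch (the file layout, crop box, rotation angles and the 256-pixel target level are assumptions made for the example, not values fixed by the patent) expands one shadow / shadow-free pair by cropping and rotation and resizes the results to a fixed 2^n × 2^n pixel level:

```python
from pathlib import Path
from PIL import Image

def preprocess_pair(shadow_path: Path, shadow_free_path: Path, out_dir: Path, size: int = 256):
    """Expand one shadow / shadow-free pair by rotating and cropping, then resize to one pixel level."""
    out_dir.mkdir(parents=True, exist_ok=True)
    shadow = Image.open(shadow_path).convert("RGB")
    free = Image.open(shadow_free_path).convert("RGB")
    variants = []
    for angle in (0, 90, 180, 270):                      # rotation-based expansion of the pair
        variants.append((shadow.rotate(angle, expand=True), free.rotate(angle, expand=True)))
    w, h = shadow.size
    crop = (w // 8, h // 8, w - w // 8, h - h // 8)      # a simple centre crop as one more variant
    variants.append((shadow.crop(crop), free.crop(crop)))
    for i, (s, f) in enumerate(variants):
        s.resize((size, size)).save(out_dir / f"{shadow_path.stem}_{i}_shadow.png")
        f.resize((size, size)).save(out_dir / f"{shadow_path.stem}_{i}_free.png")
```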
3) The end-to-end convolutional neural network structure comprises a two-layer network structure consisting of a shallow neural network and a deep neural network:
3-1) The shallow neural network is used to extract coarse image features and global semantic scene information, obtaining the image shadow mask factor in a fine-to-coarse manner. The shallow neural network is built on the VGG16 network with fine-tuning of the original structure: it is used to obtain the shadow mask factor, all fully connected layers are replaced by convolution layers, the sub-sampling layer is no longer used, and a prediction layer for the shadow mask factor is added. The resulting shallow neural network comprises 16 convolution layers, 5 max-pooling layers and 1 prediction layer.
3-2) A multi-context mechanism in the deep neural network performs local detail correction and works together with the preceding shallow network, further improving the result: the prediction of the overall network structure becomes more accurate and edge handling becomes finer, with the shadow mask factor obtained in a coarse-to-fine manner. To avoid burdening network training, the invention defines a small network structure as the deep neural network, comprising 5 convolution layers, 2 pooling layers and 1 prediction layer.
3-3) To mitigate overfitting, regularization is achieved to some extent by applying dropout after each convolution layer of the network. The activation function used in the present invention is the rectified linear unit (ReLU), which is defined as follows:
f(x) = max(0, x)    (2)
4) Randomly selecting shadow images in the data set to form a training set required by network training;
5) Training an end-to-end convolutional neural network in a diversified manner by using a training set:
5-1) Training the shallow neural network and the deep neural network independently, and then training the two networks in cascade once both reach a certain precision, finally realizing joint optimization of the two networks and achieving the effect of staged training (a training-loop sketch illustrating this appears after this step);
5-2) Grouping the original images into different levels of shadow conditions according to the size of the shadow scale factor and training on them separately, for example first training on an image dataset of hard shadows, then on an image dataset of soft shadows, and finally on the combined dataset, realizing a hierarchical (multi-level) training effect;
5-3) Taking into account the different pixel sizes of images input by users, dividing images of different pixel sizes into several levels for training, realizing a multi-resolution training effect, and finally achieving rapid convergence, preventing overfitting, and ensuring diversified training modes;
5-4) The relation between a shadow image and its shadow mask is given by formula (1); during training, the real shadow mask is computed from a given shadow / shadow-free image pair, and the network then attempts to learn a mapping function establishing the relationship between the shadow image and the shadow mask;
5-5) During training, the network is constrained by the following overall loss; the overall loss function comprises two loss functions and their fusion, as follows:
The first loss, called the prediction loss, is approximated as a loss function using the following formula:
L_pred(θ) = - Σ_q [ S(q) · log Ŝ(q; θ) + (1 - S(q)) · log(1 - Ŝ(q; θ)) ]    (3)
where S(q) denotes the true value at pixel q (1 if q lies in the shadow region, 0 otherwise), and Ŝ(q; θ) denotes the prediction obtained at pixel q by the network with parameters θ.
The second loss, called the composition loss, is the difference between the RGB colors of the real shadow image and the RGB colors of the predicted shadow image synthesized from the shadow-free image and the shadow mask factor output by the prediction layer; it is approximated using the following formula:
L_comp(θ) = (1/N) Σ_{i=1..N} || Î_s^i - I_s^i ||²    (4)
where N denotes the total number of training samples in the batch, Î_s^i = Ŝ^i · I_ns^i denotes the RGB channels of the predicted shadow image obtained from the predicted shadow mask factor, and I_s^i denotes the RGB channels of the real shadow image.
The overall loss applied to the network is a linear fusion of the two losses above, following the formula:
L(θ) = λ · L_pred(θ) + (1 - λ) · L_comp(θ)    (5)
where λ is set to 0.5 here.
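The staged, hierarchical and multi-resolution training described in 5-1) to 5-5) might be organised as in the sketch below, which reuses the ShadowRemovalNet and total_loss sketches given earlier; the dataset objects, epoch counts, batch size, learning rate and the set_resolution hook are hypothetical and only illustrate a possible training schedule, not the patent's actual parameters.

```python
import torch
from torch.utils.data import DataLoader

def train_stage(model, loader, optimizer, epochs, freeze_shallow=False, freeze_deep=False):
    """One training stage; freezing one branch lets the two sub-networks be trained independently first."""
    for p in model.shallow.parameters():
        p.requires_grad = not freeze_shallow
    for p in model.deep.parameters():
        p.requires_grad = not freeze_deep
    model.train()
    for _ in range(epochs):
        for shadow_img, shadow_free, true_mask in loader:   # assumes the dataset yields these triples
            optimizer.zero_grad()
            _, mask, _ = model(shadow_img)
            loss = total_loss(mask, true_mask, shadow_free, shadow_img, lam=0.5)
            loss.backward()
            optimizer.step()

def diversified_training(model, hard_set, soft_set, mixed_set, sizes=(128, 256, 512)):
    """Staged + hierarchical + multi-resolution training schedule, as described in 5-1) to 5-3)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Staged: train each branch on its own, then jointly (cascade training of the two networks).
    hard_loader = DataLoader(hard_set, batch_size=8, shuffle=True)
    train_stage(model, hard_loader, optimizer, epochs=5, freeze_deep=True)     # shallow branch alone
    train_stage(model, hard_loader, optimizer, epochs=5, freeze_shallow=True)  # deep branch alone
    # Hierarchical: hard shadows first, then soft shadows, then the combined set.
    for dataset in (hard_set, soft_set, mixed_set):
        loader = DataLoader(dataset, batch_size=8, shuffle=True)
        train_stage(model, loader, optimizer, epochs=10)
    # Multi-resolution: repeat joint training at several 2^n pixel levels.
    for size in sizes:
        mixed_set.set_resolution(size)      # hypothetical dataset hook that changes the pixel level
        loader = DataLoader(mixed_set, batch_size=8, shuffle=True)
        train_stage(model, loader, optimizer, epochs=5)
    return model
```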
6) Forming a test set by using the real image and the shadow image in the randomly selected data set;
7) And performing shadow removal by using the trained end-to-end convolutional neural network by using the test set to obtain a high-quality shadow-free image.
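Step 7) can then be carried out with a few lines of inference code. The sketch below (the checkpoint path, the 256-pixel resolution and the ShadowRemovalNet class from the earlier sketch are assumptions) loads a trained model and removes the shadow from a single test image:

```python
import torch
from PIL import Image
from torchvision import transforms

def remove_shadow(model_path: str, image_path: str, out_path: str, size: int = 256):
    """Load a trained ShadowRemovalNet (see the earlier sketch) and de-shadow one test image."""
    model = ShadowRemovalNet()
    model.load_state_dict(torch.load(model_path, map_location="cpu"))
    model.eval()
    to_tensor = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    shadow_img = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        shadow_free, _, _ = model(shadow_img)
    transforms.ToPILImage()(shadow_free.squeeze(0)).save(out_path)
```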
The content of the present invention can be further explained by the following simulation results.
1. The simulation content: by applying the method, the shadows of the images in different scenes are removed.
2. Simulation results
Fig. 3 shows the results of applying the method of the invention in simple scenes. (a), (d) and (g) in Fig. 3 represent shadow images in simple scenes; (c), (f) and (i) represent the real shadowless images corresponding to scenes (a), (d) and (g); (b), (e) and (h) represent the image shadow removal results obtained with the present invention in the simple scenes. It can be seen that the shadow removal effect of the invention in simple scenes is good, and high-quality shadow-free images are obtained.
Fig. 4 shows the results of applying the method of the invention in complex scenes. (a), (d) and (g) in Fig. 4 represent shadow images in complex scenes; (c), (f) and (i) represent the real shadowless images corresponding to scenes (a), (d) and (g); (b), (e) and (h) represent the image shadow removal results obtained with the present method in the complex scenes. The method provided by the invention achieves a good shadow removal effect in complex scenes, effectively restoring the shadow image to a shadow-free image consistent with the real shadow-free image in texture, color and so on, and is particularly good at detail handling. A combined analysis of Figs. 3 and 4 shows that the present invention exhibits the desired removal effect in both simple and complex scenes, reducing the quality degradation caused by the presence of shadows.
In summary, the invention provides a fully automatic image shadow removal model based on a convolutional neural network. The method realizes fully automatic image shadow removal with a deep convolutional neural network, reduces interactive operations, achieves an ideal shadow removal effect, improves efficiency, largely eliminates the quality degradation caused by shadows, and has great application value for subsequent target recognition and target tracking.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (4)

1. An image shadow removing method based on a convolutional neural network, characterized in that an image shadow removal data set is first collected and preprocessed, training and learning are carried out with a two-layer network structure consisting of a shallow neural network and a deep neural network, and the original image (a shadow image) is then input to the trained network structure, so that fully automatic shadow removal is realized and a high-quality shadow-free image is finally obtained;

the convolutional neural network-based image shadow removing method mainly comprises the following steps:
step 1) acquiring shadow images and shadowless images in real scenes to form an image shadow removal data set, specifically, fixing a camera by using a tripod under different illumination intensities, shooting shadow images and shadowless image pairs generated by different projection objects in different scenes by using a Bluetooth remote controller, and acquiring the image shadow removal data set;
step 2) preprocessing an image shadow removal data set;
step 3) constructing an end-to-end convolutional neural network structure;
step 4) randomly selecting shadow images in the data set to form a training set required by network training;
step 5) training the end-to-end convolutional neural network with the training set in a diversified manner, specifically, training the network in diversified modes such as staged and hierarchical training, wherein staged training means training the shallow network and the deep neural network independently and then training the two networks in cascade once both reach a certain precision, hierarchical training means training original images of different levels of shadow conditions separately according to the size of the shadow scale factor, first training on an image dataset of hard shadows, then on an image dataset of soft shadows, and then combining the two into one dataset for training, and images of different pixel sizes are divided into a plurality of levels for training, so as to realize rapid convergence and prevent overfitting; meanwhile, a linear fusion of the prediction loss and the composition loss is used as the total loss function of the network;
step 6) forming a test set by utilizing the real image and the shadow image in the randomly selected data set;
and 7) performing shadow removal by using the trained end-to-end convolutional neural network by using a test set to obtain a high-quality shadow-free image.
2. The method for removing image shadows based on a convolutional neural network as set forth in claim 1, wherein said step 2) performs preprocessing on the image shadow removal data set: the data set is expanded by means such as cropping and rotation, the collected image shadow data set is classified and organized into image pairs according to soft/hard shadows and scene characteristics, and images of different sizes and pixel counts are arranged into images of specified pixel levels.
3. The method for removing shadows from an image based on a convolutional neural network according to claim 1, wherein step 3) constructs an end-to-end convolutional neural network structure comprising a two-layer network structure of a shallow neural network and a deep neural network: the shallow neural network comprises 16 convolution layers, 5 max-pooling layers and 1 prediction layer; the deep neural network comprises 5 convolution layers, 2 pooling layers and 1 prediction layer; dropout is applied after each convolution layer of the network, and the activation function used is ReLU.
4. The method for removing image shadows based on a convolutional neural network according to claim 1, wherein, in view of the lack of a fully automatic end-to-end method for image shadow removal, the characteristics of the convolutional neural network are used to remove the shadow regions present in the shadow image, effectively eliminating the influence of the shadow on the image; meanwhile, multi-context scenes are considered and local information, edge information and the like are processed, the network comprising a shallow neural network structure and a deep neural network structure, thereby obtaining a high-quality shadowless image.
CN201910705551.0A 2019-08-01 2019-08-01 Convolutional neural network-based image shadow removing method Active CN110443763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910705551.0A CN110443763B (en) 2019-08-01 2019-08-01 Convolutional neural network-based image shadow removing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910705551.0A CN110443763B (en) 2019-08-01 2019-08-01 Convolutional neural network-based image shadow removing method

Publications (2)

Publication Number Publication Date
CN110443763A CN110443763A (en) 2019-11-12
CN110443763B (en) 2023-10-13

Family

ID=68432691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910705551.0A Active CN110443763B (en) 2019-08-01 2019-08-01 Convolutional neural network-based image shadow removing method

Country Status (1)

Country Link
CN (1) CN110443763B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222826A (en) * 2020-01-21 2021-08-06 深圳富泰宏精密工业有限公司 Document shadow removing method and device
CN112115934A (en) * 2020-09-16 2020-12-22 四川长虹电器股份有限公司 Bill image text detection method based on deep learning example segmentation
CN116569207A (en) * 2020-12-12 2023-08-08 三星电子株式会社 Method and electronic device for managing artifacts of images
CN112862714A (en) * 2021-02-03 2021-05-28 维沃移动通信有限公司 Image processing method and device
CN113178010B (en) * 2021-04-07 2022-09-06 湖北地信科技集团股份有限公司 High-resolution image shadow region restoration and reconstruction method based on deep learning
CN113139917A (en) * 2021-04-23 2021-07-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113628129B (en) * 2021-07-19 2024-03-12 武汉大学 Edge attention single image shadow removing method based on semi-supervised learning

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6701026B1 (en) * 2000-01-26 2004-03-02 Kent Ridge Digital Labs Method and apparatus for cancelling lighting variations in object recognition
US7366323B1 (en) * 2004-02-19 2008-04-29 Research Foundation Of State University Of New York Hierarchical static shadow detection method
CN101477628A (en) * 2009-01-06 2009-07-08 青岛海信电子产业控股股份有限公司 Method and apparatus for vehicle shape removing
CN104079802A (en) * 2013-03-29 2014-10-01 现代Mnsoft公司 Method and apparatus for removing shadow from aerial or satellite photograph
CN105574821A (en) * 2015-12-10 2016-05-11 浙江传媒学院 Data-based soft shadow removal method
US9430715B1 (en) * 2015-05-01 2016-08-30 Adobe Systems Incorporated Identifying and modifying cast shadows in an image
CN106447721A (en) * 2016-09-12 2017-02-22 北京旷视科技有限公司 Image shadow detection method and device
KR20190071452A (en) * 2017-12-14 2019-06-24 동국대학교 산학협력단 Apparatus and method for object detection with shadow removed
CN109978807A (en) * 2019-04-01 2019-07-05 西北工业大学 A kind of shadow removal method based on production confrontation network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10231619B2 (en) * 2015-12-09 2019-03-19 Oregon Health & Science University Systems and methods to remove shadowgraphic flow projections in OCT angiography

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6701026B1 (en) * 2000-01-26 2004-03-02 Kent Ridge Digital Labs Method and apparatus for cancelling lighting variations in object recognition
SG103253A1 (en) * 2000-01-26 2004-04-29 Kent Ridge Digital Labs Method and apparatus for cancelling lighting variations in object recognition
US7366323B1 (en) * 2004-02-19 2008-04-29 Research Foundation Of State University Of New York Hierarchical static shadow detection method
CN101477628A (en) * 2009-01-06 2009-07-08 青岛海信电子产业控股股份有限公司 Method and apparatus for vehicle shape removing
CN104079802A (en) * 2013-03-29 2014-10-01 现代Mnsoft公司 Method and apparatus for removing shadow from aerial or satellite photograph
US9430715B1 (en) * 2015-05-01 2016-08-30 Adobe Systems Incorporated Identifying and modifying cast shadows in an image
CN105574821A (en) * 2015-12-10 2016-05-11 浙江传媒学院 Data-based soft shadow removal method
CN106447721A (en) * 2016-09-12 2017-02-22 北京旷视科技有限公司 Image shadow detection method and device
KR20190071452A (en) * 2017-12-14 2019-06-24 동국대학교 산학협력단 Apparatus and method for object detection with shadow removed
CN109978807A (en) * 2019-04-01 2019-07-05 西北工业大学 A kind of shadow removal method based on production confrontation network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"基于计算机视觉的目标检测和阴影检测算法的研究";宋全恒;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170915;I138-295 *
Xiaodong Gu 等.Image shadow removal using pulse coupled neural network.IEEE Transactions on Neural Networks.2005,692 - 698. *
徐晓燕 等.室外光源光谱辐照度与K-means结合的单幅图像阴影检测.科学技术与工程.2018,(第04期),286-291. *
熊俊涛 等.自然光照条件下采摘机器人果实识别的表面阴影去除方法.农业工程学报.2018,(第22期),147-154. *
识别阴影中智能车辆导航路径的神经网络方法研究;王荣本等;《公路交通科技》;20021020(第05期);99-102 *
闫凤 等.纹理损失最小约束下的跟踪图像阴影去除算法的改进.现代电子技术.2016,(第24期),104-108. *

Also Published As

Publication number Publication date
CN110443763A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443763B (en) Convolutional neural network-based image shadow removing method
Engin et al. Cycle-dehaze: Enhanced cyclegan for single image dehazing
Li et al. Luminance-aware pyramid network for low-light image enhancement
CN113065558B (en) Lightweight small target detection method combined with attention mechanism
Ram Prabhakar et al. Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs
Fu et al. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss
CN105894484B (en) A kind of HDR algorithm for reconstructing normalized based on histogram with super-pixel segmentation
CN109241982A (en) Object detection method based on depth layer convolutional neural networks
CN108960404B (en) Image-based crowd counting method and device
CN109255758A (en) Image enchancing method based on full 1*1 convolutional neural networks
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN110070517A (en) Blurred picture synthetic method based on degeneration imaging mechanism and generation confrontation mechanism
Fan et al. Multiscale cross-connected dehazing network with scene depth fusion
Garg et al. LiCENt: Low-light image enhancement using the light channel of HSL
Lv et al. Low-light image enhancement via deep Retinex decomposition and bilateral learning
Feng et al. Low-light image enhancement algorithm based on an atmospheric physical model
Song et al. Multi-scale joint network based on Retinex theory for low-light enhancement
Cheng et al. A highway traffic image enhancement algorithm based on improved GAN in complex weather conditions
Xue et al. TC-net: transformer combined with cnn for image denoising
Zheng et al. Low-light image and video enhancement: A comprehensive survey and beyond
Tan et al. High dynamic range imaging for dynamic scenes with large-scale motions and severe saturation
CN111832508B (en) DIE _ GA-based low-illumination target detection method
Chen et al. Improving dynamic hdr imaging with fusion transformer
Zheng et al. Overwater image dehazing via cycle-consistent generative adversarial network
Xu et al. Multi-scale dehazing network via high-frequency feature fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant