CN112085680B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN112085680B
CN112085680B (application number CN202010940502.8A)
Authority
CN
China
Prior art keywords
rain
picture
processed
fusion
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010940502.8A
Other languages
Chinese (zh)
Other versions
CN112085680A (en)
Inventor
张凯皓 (Zhang Kaihao)
罗文寒 (Luo Wenhan)
刘威 (Liu Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010940502.8A priority Critical patent/CN112085680B/en
Publication of CN112085680A publication Critical patent/CN112085680A/en
Application granted granted Critical
Publication of CN112085680B publication Critical patent/CN112085680B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/73 - Deblurring; Sharpening
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides an image processing method and device, an electronic device and a storage medium. The method includes: obtaining a picture to be processed; performing feature detection on the picture to be processed to obtain a rain line feature map and a raindrop feature map of the picture, where the rain line feature map includes the image area corresponding to rain lines in the picture and the raindrop feature map includes the image area corresponding to raindrops in the picture; and performing rain removal on the picture to be processed by using the rain line feature map and the raindrop feature map to obtain a target picture from which the rain lines and raindrops are removed. Based on the computer vision and machine learning technologies of artificial intelligence, rain lines and raindrops in a picture can be removed at the same time, improving the image rain removal effect.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
Rain is a natural phenomenon: rain lines falling from the sky land on glass or a lens and become raindrops, so a photo taken in rainy weather easily carries rain (such as rain lines and raindrops), which seriously affects the appearance of the picture. Existing image rain removal methods mainly focus on either the raindrops or the rain lines in an image: a picture containing rain is fed into a neural network, a convolutional neural network extracts the picture information, and a reconstruction loss function is used as the supervision signal to perform rain removal on the picture. In practice, however, raindrops or rain lines still remain in images processed by current rain removal methods; it is difficult to remove both the rain lines and the raindrops thoroughly, so the image rain removal effect is poor.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, an electronic device and a storage medium, which can remove rain lines and raindrops in a picture at the same time and improve the image rain removal effect.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a picture to be processed;
performing feature detection on the picture to be processed to obtain a rain line feature map and a raindrop feature map of the picture to be processed, where the rain line feature map includes an image area corresponding to rain lines in the picture to be processed, and the raindrop feature map includes an image area corresponding to raindrops in the picture to be processed; and
performing rain removal processing on the picture to be processed by using the rain line feature map and the raindrop feature map to obtain a target picture from which the rain lines and raindrops are removed.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
and the acquisition module is used for acquiring the picture to be processed.
The detection module is used for carrying out feature detection on the picture to be processed so as to obtain a rain line feature map and a rain drop feature map of the picture to be processed, wherein the rain line feature map comprises an image area corresponding to a rain line in the picture to be processed, and the rain drop feature map comprises an image area corresponding to a rain drop in the picture to be processed.
And the processing module is used for carrying out rain removing processing on the picture to be processed by utilizing the rain line characteristic diagram and the raindrop characteristic diagram so as to obtain a target picture for removing the rain line and the raindrops.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a processor and a storage device, where the processor and the storage device are connected to each other, and the storage device is configured to store a computer program, where the computer program includes program instructions, and where the processor is configured to invoke the program instructions to perform the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, the computer program including program instructions, the program instructions being executable by a processor to perform the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image processing method described in the first aspect.
In the embodiment of the invention, a picture to be processed can be obtained, and feature detection is performed on it to obtain a rain line feature map and a raindrop feature map, where the rain line feature map includes the image area corresponding to rain lines in the picture and the raindrop feature map includes the image area corresponding to raindrops. Rain removal processing is then performed on the picture by using the rain line feature map and the raindrop feature map to obtain a target picture from which the rain lines and raindrops are removed. The rain lines and raindrops in a picture can thus be removed at the same time, improving the image rain removal effect.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an implementation framework for simultaneously removing rain lines and rain drops provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an implementation framework for removing rain lines according to an embodiment of the present invention;
fig. 5 is a schematic structural view of an image processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines can perceive, reason and make decisions.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields and involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of studying how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to recognize, track and measure targets, and further performs graphics processing so that the computer produces images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other subjects. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from instruction.
The scheme provided by the embodiment of the application relates to an artificial intelligence computer vision and machine learning technology, and is specifically described by the following embodiments:
the electronic device described in the embodiment of the present application may be a terminal device such as a smart phone, a tablet computer, a digital camera, or a background server, which is not limited in the embodiment of the present application.
The image processing method provided by the embodiment of the application can be used to provide a picture rain removal service that removes rainwater (including rain lines and raindrops) from a picture. Specific usage scenarios include the following. When a mobile phone or a digital camera takes a picture, pictures containing raindrops and rain lines are often captured because of the weather; by applying this image processing method in the mobile phone or digital camera, the raindrops and rain lines in the captured picture can be removed so that the picture becomes clearer. The method can also be deployed on a background server: when a user uploads unclear pictures containing raindrops and rain lines, the image processing method can be used to free the user's pictures from the influence of the raindrops and rain lines.
In summary, the image processing method provided by the embodiment of the invention sends a picture containing raindrops and rain lines into a neural network, uses the neural network to extract and integrate features of the picture, uses a convolutional neural network to extract the structural information hidden in the input picture, removes the raindrops and rain lines in the picture with the convolutional network, and restores the picture at equal proportion. To obtain a better rain removal effect, the method introduces a detection network with a dual-attention mechanism: this network first detects the positions of the raindrops and the rain lines, thereby assisting the subsequent rain removal network in completing the rain removal task. In addition, during training, attention is paid not only to raindrops or rain lines alone but to both rain removal operations at the same time, so that a single rain removal model can remove raindrops and rain lines simultaneously and two separate rain removal models do not need to be built. Moreover, attention is paid not only to heavy-rain regions but also to regions that are hard to detect, such as light rain, so that a picture with more complete details can be obtained. Picture information can thus be effectively extracted to process the input picture, raindrops and rain lines are removed automatically, and a clean picture is recovered.
The implementation details of the technical scheme of the embodiment of the invention are described in detail below:
fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the invention. The image processing method comprises the following steps:
101. and the electronic equipment acquires the picture to be processed.
The picture to be processed may be a picture captured by an electronic device such as a mobile phone or a digital camera, a previously captured picture stored locally or in the cloud, or a picture corresponding to one frame of a video.
102. And the electronic equipment performs feature detection on the picture to be processed to obtain a rain line feature map and a rain drop feature map of the picture to be processed, wherein the rain line feature map comprises an image area corresponding to a rain line in the picture to be processed, and the rain drop feature map comprises an image area corresponding to a rain drop in the picture to be processed.
Specifically, the electronic device may perform feature detection on the picture to be processed to obtain feature information of the picture, predict the positions of the rain lines and raindrops distributed in the picture according to the feature information, and determine the rain line image area and the raindrop image area in the picture to be processed. The rain line image area is then used as the rain line feature map of the picture to be processed, and the raindrop image area is used as the raindrop feature map.
In some possible embodiments, the electronic device may input the picture to be processed into the feature extraction network of a rain removal model to perform feature detection and obtain the feature information of the picture. The feature extraction network may specifically be a recurrent neural network (Recurrent Neural Network, RNN); with it, both the positions of rain lines and the positions of raindrops in the picture can be predicted.
In some possible embodiments, performing feature detection on the picture to be processed with the feature extraction network to obtain the rain line feature map may include: obtaining the position and direction of the rain lines in the picture by using the feature extraction network (for example, frequency-domain analysis may be performed on the picture to determine the direction of the rain lines, and the position of a rain line may refer to pixel coordinates in the pixel coordinate system), and then determining, according to the position and direction of the rain lines, the image area corresponding to the rain lines in the picture, that is, the rain line feature map.
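As a rough illustration of steps 101 and 102, the following PyTorch sketch shows one way a feature extraction network with two prediction heads could produce the feature information together with a rain line feature map and a raindrop feature map. The layer sizes, the plain convolutional encoder and all names are assumptions made for illustration; the patent only requires a feature extraction network E whose features are used to predict the two maps.

```python
import torch
import torch.nn as nn

class RainDetector(nn.Module):
    """Illustrative feature extractor that predicts rain-line and raindrop masks.

    The encoder depth, channel count and use of 1x1 prediction heads are
    assumptions; the patent specifies only that a feature extraction network E
    produces features F from which the two maps are predicted.
    """
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(                       # stand-in for network E
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.streak_head = nn.Conv2d(channels, 1, 1)        # rain line map head
        self.drop_head = nn.Conv2d(channels, 1, 1)          # raindrop map head

    def forward(self, image):
        feats = self.encoder(image)                         # feature information F
        streak_mask = torch.sigmoid(self.streak_head(feats))  # rain line feature map
        drop_mask = torch.sigmoid(self.drop_head(feats))       # raindrop feature map
        return feats, streak_mask, drop_mask
```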
103. And the electronic equipment carries out rain removing treatment on the picture to be treated by utilizing the rain line characteristic diagram and the raindrop characteristic diagram so as to obtain a target picture for removing the rain line and the raindrops.
Specifically, the electronic device may remove the rain lines from the image areas where rain lines exist by using the rain line feature map, and remove the raindrops from the image areas where raindrops exist by using the raindrop feature map. For example, the rain line feature map and the raindrop feature map may be subtracted from the picture to be processed, so as to obtain a clean picture, i.e. the target picture, from which the rain lines and raindrops are removed.
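For the subtraction-based option just mentioned, a minimal sketch could look like the line below; it assumes the two feature maps are expressed in image space with the same shape and value range as the (normalised) picture tensor, and the variable names are illustrative only.

```python
# Subtract the predicted rain components from the input picture (one of the
# options described above). The clamp keeps values in [0, 1] for normalised
# image tensors; this range is an assumption, not something the patent fixes.
derained = (image - streak_map - drop_map).clamp(0.0, 1.0)
```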
In some possible embodiments, a target type that requires rain removal may be set. When a picture to be processed is obtained, object recognition may be performed on it first, and feature detection and the rain removal operation are carried out only when an object of the target type exists in the picture. For example, in a road-violation capture scene a license plate or a face needs to be detected, so the target type can be set to license plate or face. When a picture to be processed is acquired, object recognition is performed on it first, and only when a license plate or a face exists in the picture is feature detection performed to obtain its rain line feature map and raindrop feature map and the rain removal operation carried out. In this way the picture is de-rained only when the target appears, which improves the rain removal processing efficiency.
Furthermore, when a license plate or a face exists in the picture to be processed, the license plate or face image area can be located, and feature detection only needs to be performed on that area to obtain the rain line feature map and raindrop feature map of the license plate or face area, so that the rain in that area can be removed. From the de-rained picture the license plate number and the driver's face can be accurately recognized, which facilitates fast handling of violations, accidents and the like, and, while meeting the actual requirements, reduces the amount of data computation and further improves the rain removal processing efficiency.
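One possible way to organize this target-type gating is sketched below. `detector` and `derain_model` are hypothetical stand-ins for an object detector and the rain removal model; their interfaces (and the region format) are assumptions for illustration rather than anything specified in the patent.

```python
def derain_if_target_present(image, detector, derain_model,
                             target_types=("license_plate", "face")):
    """Only de-rain when, and where, a configured target type is found.

    `detector` is assumed to return a list of (x1, y1, x2, y2) boxes for the
    requested target types; `derain_model` maps a cropped tensor to its
    de-rained version of the same shape. Both interfaces are illustrative.
    """
    regions = detector(image, target_types)
    if not regions:
        return image                                   # no target: skip feature detection
    for x1, y1, x2, y2 in regions:
        crop = image[..., y1:y2, x1:x2]                # only the target region is processed
        image[..., y1:y2, x1:x2] = derain_model(crop)
    return image
```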
It should be noted that, the above object types may be flexibly set according to the requirements of the actual shooting scene, and the embodiment of the present invention is not limited.
In the embodiment of the invention, the electronic device can acquire a picture to be processed and perform feature detection on it to obtain its rain line feature map and raindrop feature map, where the rain line feature map includes the image area corresponding to rain lines in the picture and the raindrop feature map includes the image area corresponding to raindrops. Rain removal processing is then performed on the picture by using the rain line feature map and the raindrop feature map to obtain a target picture from which the rain lines and raindrops are removed, so the rain lines and raindrops in the picture can be removed at the same time and the image rain removal effect is improved.
Fig. 2 is a flowchart of another image processing method according to an embodiment of the invention. The image processing method comprises the following steps:
201. and the electronic equipment acquires the picture to be processed.
202. And the electronic equipment performs feature detection on the picture to be processed to obtain a rain line feature map and a rain drop feature map of the picture to be processed, wherein the rain line feature map comprises an image area corresponding to a rain line in the picture to be processed, and the rain drop feature map comprises an image area corresponding to a rain drop in the picture to be processed.
The specific implementation of steps 201 to 202 may be referred to the related descriptions in steps 101 to 102 in the foregoing embodiments, and will not be repeated here.
203. And the electronic equipment carries out rain removing treatment on the picture to be treated by utilizing the characteristic information and the rain line characteristic diagram to obtain a picture with the rain line removed.
Specifically, the electronic device may perform fusion processing on the feature information and the rain line feature map to obtain a rain line fusion feature map, and then input the rain line fusion feature map into a rain line cleaning network of the rain removal model to obtain a picture of removing the rain line.
In some possible embodiments, the rain line feature map generally covers the image areas of the picture that are strongly affected by rain lines. In order to also remove the rain lines in the image areas that are only weakly affected, the difference between a preset constant (e.g. 1) and the rain line feature map may be obtained; this difference covers the remaining image areas of the picture, which are weakly affected or not affected by rain lines. Specifically, the electronic device may fuse the feature information with the rain line feature map to obtain a first fusion feature map, then obtain the difference between the preset constant and the rain line feature map, and fuse the feature information with that difference to obtain a second fusion feature map. The first and second fusion feature maps are used together as the rain line fusion feature map, so that it contains the feature information of the image areas strongly affected by rain lines as well as that of the areas weakly affected by them.
In some possible embodiments, in order to effectively remove the rain lines both in the strongly affected image areas and in the weakly affected ones, the rain line cleaning network of the rain removal model may specifically include two cleaning networks, denoted the first cleaning network and the second cleaning network. The first cleaning network is used to clean the rain lines in the image areas of the picture whose degree of rain line influence is greater than or equal to a preset degree threshold (i.e. strongly affected), and the second cleaning network is used to clean the rain lines in the image areas whose degree of influence is less than the preset degree threshold (i.e. weakly affected). Specifically, the electronic device may input the first fusion feature map into the first cleaning network to obtain, from its output, a first picture with the rain lines removed, and input the second fusion feature map into the second cleaning network to obtain, from its output, a second picture with the rain lines removed. In this way a picture in which the rain lines of the severely affected areas are removed is obtained, as well as a picture in which the rain lines of the lightly affected areas are removed.
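A minimal sketch of such a two-branch rain line cleaning stage is shown below. It assumes the fusion is the element-wise (dot) product described later in the text and uses simple stand-in CNNs for the first and second cleaning networks; it illustrates the structure rather than the patent's actual architecture.

```python
import torch.nn as nn

class RainStreakBranch(nn.Module):
    """Rain line branch: fuse features with the mask and with its complement,
    then clean heavy and light regions with separate networks (a sketch)."""
    def __init__(self, channels=32):
        super().__init__()
        def cleaner():
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),
            )
        self.d_heavy = cleaner()   # cleans regions strongly affected by rain lines
        self.d_light = cleaner()   # cleans regions weakly affected by rain lines

    def forward(self, feats, streak_mask):
        fused_heavy = feats * streak_mask          # first fusion feature map
        fused_light = feats * (1.0 - streak_mask)  # second fusion feature map (1 is the preset constant)
        pic_heavy = self.d_heavy(fused_heavy)      # first picture, heavy rain lines removed
        pic_light = self.d_light(fused_light)      # second picture, light rain lines removed
        return pic_heavy, pic_light
```

The raindrop branch described below has the same structure, with the raindrop feature map taking the place of the rain line feature map.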
204. And the electronic equipment carries out rain removing treatment on the picture to be treated by utilizing the characteristic information and the raindrop characteristic image to obtain a picture with raindrops removed.
Specifically, the electronic device may perform fusion processing on the feature information and the raindrop feature map to obtain a raindrop fusion feature map, and then input the raindrop fusion feature map into a raindrop cleaning network of the rain removal model to obtain a raindrop-removed picture.
In some possible embodiments, the raindrop feature map generally covers the image areas of the picture that are strongly affected by raindrops. In order to also remove the raindrops in the image areas that are only weakly affected, the difference between a preset constant (e.g. 1) and the raindrop feature map may be obtained; this difference covers the remaining image areas of the picture, which are weakly affected or not affected by raindrops. Specifically, the electronic device may fuse the feature information with the raindrop feature map to obtain a third fusion feature map, then obtain the difference between the preset constant and the raindrop feature map, and fuse the feature information with that difference to obtain a fourth fusion feature map. The third and fourth fusion feature maps are used together as the raindrop fusion feature map, so that it contains the feature information of the image areas strongly affected by raindrops as well as that of the areas weakly affected by them.
In some possible embodiments, in order to effectively remove the raindrops both in the strongly affected image areas and in the weakly affected ones, the raindrop cleaning network of the rain removal model may likewise include two cleaning networks, denoted the third cleaning network and the fourth cleaning network. The third cleaning network is used to clean the raindrops in the image areas of the picture whose degree of raindrop influence is greater than or equal to the preset degree threshold (i.e. strongly affected), and the fourth cleaning network is used to clean the raindrops in the image areas whose degree of influence is less than the preset degree threshold (i.e. weakly affected). Specifically, the electronic device may input the third fusion feature map into the third cleaning network to obtain, from its output, a third picture with the raindrops removed, and input the fourth fusion feature map into the fourth cleaning network to obtain, from its output, a fourth picture with the raindrops removed. In this way a picture in which the raindrops of the severely affected areas are removed is obtained, as well as a picture in which the raindrops of the lightly affected areas are removed.
In some possible embodiments, the above rain line cleaning network, raindrop cleaning network and image synthesis network may be convolutional neural networks (Convolutional Neural Networks, CNN).
205. And the electronic equipment generates a target picture for removing the rain lines and the rain drops according to the picture to be processed, the picture for removing the rain lines and the picture for removing the rain drops.
Specifically, after obtaining the picture with the rain lines removed and the picture with the raindrops removed, the electronic device may generate the picture at equal proportion in combination with the original picture (i.e. the picture to be processed): the picture to be processed, the picture with the rain lines removed and the picture with the raindrops removed may be input into the image synthesis network of the rain removal model to obtain the target picture from which the rain lines and raindrops are removed.
In some possible embodiments, an image area except for the rain line feature image and the rain drop feature image in the to-be-processed image may be obtained, and then the image area is overlapped with the image with the rain line removed and the image with the rain drop removed, so that the target image with the rain line removed and the rain drop removed may be generated.
In some possible embodiments, if the pictures with the rain lines removed include a first picture and a second picture, and the pictures with the raindrops removed include a third picture and a fourth picture, the electronic device specifically inputs five pictures (the picture to be processed, the first and second pictures with the rain lines removed, and the third and fourth pictures with the raindrops removed) into the image synthesis network to obtain the target picture from which the rain lines and raindrops are removed. The target picture thus has the rain lines and raindrops removed both in the regions severely affected by rain and in the regions only lightly affected.
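The five-picture synthesis step could be sketched as follows. Concatenating the pictures along the channel axis and the small stand-in CNN are assumptions, since the patent does not fix the internal structure of the image synthesis network.

```python
import torch
import torch.nn as nn

class GlobalSynthesis(nn.Module):
    """Stand-in for the image synthesis network: takes the picture to be
    processed plus the four intermediate de-rained pictures and produces the
    final target picture (channel concatenation is an assumed fusion scheme)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5 * 3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, original, streak_heavy, streak_light, drop_heavy, drop_light):
        x = torch.cat([original, streak_heavy, streak_light, drop_heavy, drop_light], dim=1)
        return self.net(x)   # target picture with rain lines and raindrops removed
```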
In some possible implementations, as shown in fig. 3, a schematic diagram of an implementation framework for removing rain lines and rain drops simultaneously is provided in an embodiment of the present invention. Specifically, the method comprises the following steps: and respectively inputting the pictures with the rain lines and the rain drops (namely the pictures to be processed) into a characteristic extraction network E, predicting the positions of the rain lines and the rain drops according to the extracted characteristics to obtain a rain line characteristic diagram and a rain drop characteristic diagram, and then obtaining the pictures with the rain lines and the rain drops removed simultaneously by utilizing the rain line characteristic diagram and the rain drop characteristic diagram.
In some possible implementations, taking the rain line removal branch of the rain removal model as an example, a specific implementation framework may refer to FIG. 4. The network comprises the feature extraction network E, a rain line cleaning network (comprising a first cleaning network D_heavy and a second cleaning network D_light) and an image synthesis network D_global. The specific flow is as follows: the picture to be processed, which is affected by rain, is input into the feature extraction network E, and the extracted features F are used to predict the position of the rain lines, i.e. the rain line feature map (denoted mask in the figure). The mask marks the region S+ that is obviously affected by rain lines, while the other region S- is the region not affected, or only slightly affected, by rain lines. The mask is fused with the features extracted by the feature extraction network E to obtain a first fusion feature map, which is fed into the network D_heavy to remove the obvious rain line regions in the picture and obtain a first picture F+ with the heavy rain lines removed. The labeling information S- of the other region is fused with the features F extracted by the feature extraction network E to obtain a second fusion feature map, which is fed into the network D_light to remove the rain lines in the regions of the picture that are only slightly affected, obtaining a second picture F- with the light rain lines removed. Finally, the original picture (i.e. the picture to be processed), the first picture F+ with the heavy rain lines removed and the second picture F- with the light rain lines removed are input into the network D_global to recover the rain-line-free picture F_w.
It should be noted that the implementation framework of the raindrop removal branch of the rain removal model is similar to that of the rain line removal branch. The networks involved include the feature extraction network E and a raindrop cleaning network (comprising a third cleaning network D_heavy and a fourth cleaning network D_light), and the branch shares the image synthesis network D_global of the rain line removal branch in FIG. 4. That is, the output of the third cleaning network D_heavy (the third picture) and the output of the fourth cleaning network D_light (the fourth picture) are also input into the image synthesis network D_global in FIG. 4. At this point the original picture (i.e. the picture to be processed), the output of the first cleaning network D_heavy (the first picture), the output of the second cleaning network D_light (the second picture), the output of the third cleaning network D_heavy (the third picture) and the output of the fourth cleaning network D_light (the fourth picture) are all input into the network D_global to recover the picture F_w free of rain lines and raindrops.
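Putting the pieces together, an end-to-end forward pass in the spirit of FIG. 3 and FIG. 4 might look like the sketch below; all module interfaces come from the illustrative sketches above, not from the patent.

```python
def derain(image, detector, streak_branch, drop_branch, synthesis):
    """End-to-end forward pass assembled from the sketches above (illustrative)."""
    feats, streak_mask, drop_mask = detector(image)         # network E + dual masks
    s_heavy, s_light = streak_branch(feats, streak_mask)    # rain line branch
    d_heavy, d_light = drop_branch(feats, drop_mask)        # raindrop branch (same structure)
    return synthesis(image, s_heavy, s_light, d_heavy, d_light)  # D_global output F_w
```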
The invention may adopt a dual-attention detection network to extract information from the pictures. The dual-attention detection network is mainly based on a deep convolutional neural network, which is used to regress a mask map so that the mask map corresponds to the regions containing rain. S+ is given by:
S+ = g(W * F + b)
where F is the feature information extracted by the feature extraction network E and S+ is the mask map representing the rain-affected region. The region S- with no rain or with less rain is given by:
S- = 1 - S+
where the preset constant takes the value 1.
From this dual-attention detection network, two sets of fused features can be obtained, here denoted F̂+ and F̂-:
F̂+ = S+ ⊙ F, F̂- = S- ⊙ F
where ⊙ denotes the dot (element-wise) product.
These two sets of fused features are then fed into the subsequent networks D_heavy and D_light for picture restoration. This part is supervised by the attention loss L_att.
after obtaining the mask map of the rainwater, the rainwater removing operation can be performed according to the mask map. Specifically, after fusing mask with the feature F extracted by the feature extraction network E To network D heavy In the method, marking information S of other areas - After fusion with feature F extracted by feature extraction network E ∈>To network D light In which two groups of pictures F are obtained + And F - (the first picture and the second picture with the rain lines removed or the third picture and the fourth picture with the rain drops removed) and then respectively recovering different pictures according to the two groups of pictures, wherein the loss function L of the pictures is the same as that of the first picture and the second picture heavy 、L light The following are respectively shown:
Here I_c is a clear picture without rain and I_i is the input picture with rain. After the two groups of pictures F+ and F- are obtained, they are fed into the network D_global to obtain the final de-rained picture, supervised by the loss function L_global.
The output I_o is calculated as:
I_o = D_global(F+, F-, I_i)
During training of the rain removal model, the overall loss function L_DAM is:
L_DAM = α·L_att + β1·L_heavy + β2·L_light + L_global
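The overall loss L_DAM could be assembled as in the sketch below. Since the individual loss formulas are not reproduced here, the L1 reconstruction terms and the MSE attention term are assumptions chosen only to illustrate how the weighted sum is formed.

```python
import torch.nn.functional as F_nn

def total_loss(pred_heavy, pred_light, pred_global, mask_pred, mask_gt, clean,
               alpha=1.0, beta1=1.0, beta2=1.0):
    """Illustrative composition of L_DAM = α·L_att + β1·L_heavy + β2·L_light + L_global.
    The individual loss forms (L1 reconstruction, MSE attention) are assumptions."""
    l_att = F_nn.mse_loss(mask_pred, mask_gt)      # supervises the predicted rain mask
    l_heavy = F_nn.l1_loss(pred_heavy, clean)      # heavy-region restoration vs. clean picture I_c
    l_light = F_nn.l1_loss(pred_light, clean)      # light-region restoration
    l_global = F_nn.l1_loss(pred_global, clean)    # final synthesized picture I_o
    return alpha * l_att + beta1 * l_heavy + beta2 * l_light + l_global
```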
when raindrops and rain lines are simultaneously arranged in the pictures, the feature extraction network E predicts two mask images, and the mask images respectively represent the positions of the raindrops and the rain lines, namely the rainline feature images and the raindrop feature images. In this case, its loss function L DAiAM The following is shown:
L DAiAM =L streak +L drop +L global
The loss functions can be used to evaluate the actual rain removal effect, and the parameters of the network model are adjusted accordingly until a good rain removal effect is achieved; the model is trained with the two loss functions L_streak and L_drop, and the trained rain removal model is finally used to reconstruct a clear, rain-free picture. Here L_streak and L_drop are the loss terms for the rain line branch and the raindrop branch, respectively.
It can be seen that during training the embodiment of the invention pays attention to raindrops and rain lines at the same time: a network structure is designed that simultaneously attends to the features of raindrops and rain lines and then removes both kinds of rain in a targeted way, so that two separate network structures do not need to be deployed. The attention mechanism is also used to attend both to the regions strongly affected by rain and to those weakly affected, so that the generated picture has more complete details.
In the embodiment of the invention, the electronic device can perform feature detection on the acquired picture to be processed to obtain its rain line feature map and raindrop feature map, where the rain line feature map includes the image area corresponding to rain lines in the picture and the raindrop feature map includes the image area corresponding to raindrops. Rain removal processing is performed on the picture by using the feature information and the rain line feature map to obtain a picture with the rain lines removed, and by using the feature information and the raindrop feature map to obtain a picture with the raindrops removed; a target picture with both rain lines and raindrops removed can then be generated from the picture to be processed, the picture with the rain lines removed and the picture with the raindrops removed. In this way not only the rain lines and raindrops in the regions obviously affected by rain are removed, but also those in the lightly affected regions, achieving complete removal of rain lines and raindrops and significantly improving the image rain removal effect.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the invention. The device comprises:
the obtaining module 501 is configured to obtain a picture to be processed.
The detection module 502 is configured to perform feature detection on the to-be-processed picture to obtain a rain line feature map and a rain drop feature map of the to-be-processed picture, where the rain line feature map includes an image area corresponding to a rain line in the to-be-processed picture, and the rain drop feature map includes an image area corresponding to a rain drop in the to-be-processed picture.
And the processing module 503 is configured to perform rain removal processing on the to-be-processed picture by using the rain line feature map and the raindrop feature map, so as to obtain a target picture from which rain lines and raindrops are removed.
Optionally, the detection module 502 is specifically configured to:
and carrying out feature detection on the picture to be processed to obtain feature information of the picture to be processed.
And analyzing the characteristic information to acquire a rain line image area and a rain drop image area in the picture to be processed.
And taking the rain line image area as a rain line characteristic map of the picture to be processed and taking the rain drop image area as a rain drop characteristic map of the picture to be processed.
Optionally, the detection module 502 is specifically configured to:
inputting the picture to be processed into a feature extraction network of a rain removal model to perform feature detection, and obtaining an output result of the feature extraction network.
And taking the output result as the characteristic information of the picture to be processed.
Optionally, the feature extraction network comprises a dual-attention detection network.
Optionally, the processing module 503 is specifically configured to:
and carrying out rain removing treatment on the picture to be treated by utilizing the characteristic information and the rain line characteristic diagram to obtain a picture with the rain lines removed.
And carrying out rain removing treatment on the picture to be treated by utilizing the characteristic information and the raindrop characteristic image to obtain a picture with raindrops removed.
And generating a target picture for removing the rain lines and the rain drops according to the picture to be processed, the picture for removing the rain lines and the picture for removing the rain drops.
Optionally, the processing module 503 is specifically configured to:
and carrying out fusion processing on the characteristic information and the rain line characteristic map to obtain a rain line fusion characteristic map.
And inputting the rain line fusion characteristic diagram into a rain line cleaning network of a rain removal model to obtain a picture for removing the rain line.
Optionally, the processing module 503 is specifically configured to:
And carrying out fusion processing on the characteristic information and the raindrop characteristic map to obtain a raindrop fusion characteristic map.
And inputting the raindrop fusion characteristic diagram into a raindrop cleaning network of a raindrop removal model to obtain a raindrop-removed picture.
Optionally, the processing module 503 is specifically configured to:
inputting the to-be-processed picture, the picture with the rain lines removed and the picture with the rain drops removed into an image synthesis network of a rain removal model to obtain a target picture with the rain lines and the rain drops removed.
Optionally, the processing module 503 is specifically configured to:
and carrying out fusion processing on the characteristic information and the rain line characteristic map to obtain a first fusion characteristic map.
And obtaining the difference between a preset constant and the rain line characteristic diagram.
And carrying out fusion processing on the characteristic information and the difference between the preset constant and the rain line characteristic map to obtain a second fusion characteristic map.
And taking the first fusion characteristic diagram and the second fusion characteristic diagram as rain line fusion characteristic diagrams.
Optionally, the rain line cleaning network of the rain removal model includes a first cleaning network and a second cleaning network, the picture for removing the rain line includes a first picture and a second picture, and the processing module 503 is specifically configured to:
And inputting the first fusion feature map into the first cleaning network to obtain the first picture according to the output of the first cleaning network, wherein the first cleaning network is used for cleaning the rain lines in the image areas of the picture to be processed whose degree of rain line influence is greater than or equal to a preset degree threshold.
And inputting the second fusion feature map into the second cleaning network to obtain the second picture according to the output of the second cleaning network, wherein the second cleaning network is used for cleaning the rain lines in the image areas of the picture to be processed whose degree of rain line influence is less than the preset degree threshold.
Optionally, the processing module 503 is specifically configured to:
and carrying out fusion processing on the characteristic information and the raindrop characteristic map to obtain a third fusion characteristic map.
And obtaining the difference between a preset constant and the raindrop characteristic map.
And carrying out fusion processing on the characteristic information and the difference between the preset constant and the raindrop characteristic map to obtain a fourth fusion characteristic map.
And taking the third fusion characteristic diagram and the fourth fusion characteristic diagram as raindrop fusion characteristic diagrams.
Optionally, the raindrop cleaning network of the rain removal model includes a third cleaning network and a fourth cleaning network, the picture for removing the raindrops includes a third picture and a fourth picture, and the processing module 503 is specifically configured to:
And inputting the third fusion feature map into the third cleaning network to obtain the third picture according to the output of the third cleaning network, wherein the third cleaning network is used for cleaning the raindrops in the image areas of the picture to be processed whose degree of raindrop influence is greater than or equal to a preset degree threshold.
And inputting the fourth fusion feature map into the fourth cleaning network to obtain the fourth picture according to the output of the fourth cleaning network, wherein the fourth cleaning network is used for cleaning the raindrops in the image areas of the picture to be processed whose degree of raindrop influence is less than the preset degree threshold.
It should be noted that, the functions of each functional module of the image processing apparatus according to the embodiment of the present invention may be specifically implemented according to the method in the embodiment of the method, and the specific implementation process may refer to the related description of the embodiment of the method, which is not repeated herein.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. In addition to a power supply module and other structures, the electronic device includes a processor 601, a storage device 602 and a network interface 603. The processor 601, the storage device 602 and the network interface 603 may exchange data with each other.
The storage device 602 may include a volatile memory, such as a random-access memory (RAM); it may also include a non-volatile memory (non-volatile memory), such as a flash memory or a solid-state drive (SSD); the storage device 602 may also include a combination of the above types of memory.
The processor 601 may be a central processing unit (CPU). In one embodiment, the processor 601 may also be a graphics processing unit (GPU), or a combination of a CPU and a GPU. In one embodiment, the storage device 602 is configured to store program instructions, and the processor 601 may call the program instructions to perform the following operations:
and obtaining a picture to be processed.
And carrying out feature detection on the picture to be processed to obtain a rain line feature map and a rain drop feature map of the picture to be processed, wherein the rain line feature map comprises an image area corresponding to a rain line in the picture to be processed, and the rain drop feature map comprises an image area corresponding to a rain drop in the picture to be processed.
And carrying out rain removing treatment on the picture to be treated by utilizing the rain line characteristic diagram and the raindrop characteristic diagram so as to obtain a target picture for removing rain lines and raindrops.
Optionally, the processor 601 is specifically configured to:
and carrying out feature detection on the picture to be processed to obtain feature information of the picture to be processed.
And analyzing the characteristic information to acquire a rain line image area and a rain drop image area in the picture to be processed.
And taking the rain line image area as a rain line characteristic map of the picture to be processed and taking the rain drop image area as a rain drop characteristic map of the picture to be processed.
Optionally, the processor 601 is specifically configured to:
inputting the picture to be processed into a feature extraction network of a rain removal model to perform feature detection, and obtaining an output result of the feature extraction network.
And taking the output result as the characteristic information of the picture to be processed.
Optionally, the feature extraction network comprises a dual-attention detection network.
Optionally, the processor 601 is specifically configured to:
and carrying out rain removing treatment on the picture to be treated by utilizing the characteristic information and the rain line characteristic diagram to obtain a picture with the rain lines removed.
And carrying out rain removing treatment on the picture to be treated by utilizing the characteristic information and the raindrop characteristic image to obtain a picture with raindrops removed.
And generating a target picture for removing the rain lines and the rain drops according to the picture to be processed, the picture for removing the rain lines and the picture for removing the rain drops.
Optionally, the processor 601 is specifically configured to:
and carrying out fusion processing on the characteristic information and the rain line characteristic map to obtain a rain line fusion characteristic map.
And inputting the rain line fusion characteristic diagram into a rain line cleaning network of a rain removal model to obtain a picture for removing the rain line.
Optionally, the processor 601 is specifically configured to:
and carrying out fusion processing on the characteristic information and the raindrop characteristic map to obtain a raindrop fusion characteristic map.
And inputting the raindrop fusion characteristic diagram into a raindrop cleaning network of a raindrop removal model to obtain a raindrop-removed picture.
Optionally, the processor 601 is specifically configured to:
inputting the to-be-processed picture, the picture with the rain lines removed and the picture with the rain drops removed into an image synthesis network of a rain removal model to obtain a target picture with the rain lines and the rain drops removed.
Optionally, the processor 601 is specifically configured to:
And carrying out fusion processing on the characteristic information and the rain line characteristic map to obtain a first fusion characteristic map.
And obtaining the difference between a preset constant and the rain line characteristic diagram.
And carrying out fusion processing on the characteristic information and the difference between the preset constant and the rain line characteristic map to obtain a second fusion characteristic map.
And taking the first fusion characteristic diagram and the second fusion characteristic diagram as rain line fusion characteristic diagrams.
Optionally, the rain line cleaning network of the rain removal model includes a first cleaning network and a second cleaning network, the picture for removing the rain line includes a first picture and a second picture, and the processor 601 is specifically configured to:
and inputting the first fusion feature map into the first cleaning network to obtain the first picture according to the output of the first cleaning network, wherein the first cleaning network is used for cleaning the rain lines in the image areas of the picture to be processed whose degree of rain line influence is greater than or equal to a preset degree threshold.
And inputting the second fusion feature map into the second cleaning network to obtain the second picture according to the output of the second cleaning network, wherein the second cleaning network is used for cleaning the rain lines in the image areas of the picture to be processed whose degree of rain line influence is less than the preset degree threshold.
Optionally, the processor 601 is specifically configured to:
perform fusion processing on the feature information and the raindrop feature map to obtain a third fusion feature map;
obtain a difference between a preset constant and the raindrop feature map;
perform fusion processing on the feature information and the difference between the preset constant and the raindrop feature map to obtain a fourth fusion feature map; and
take the third fusion feature map and the fourth fusion feature map as the raindrop fusion feature map.
Optionally, the raindrop cleaning network of the rain removal model includes a third cleaning network and a fourth cleaning network, the picture with the raindrops removed includes a third picture and a fourth picture, and the processor 601 is specifically configured to:
input the third fusion feature map into the third cleaning network to obtain the third picture according to the output of the third cleaning network, wherein the third cleaning network is used for removing raindrops in image areas of the picture to be processed in which the degree of influence of the raindrops is greater than or equal to a preset degree threshold; and
input the fourth fusion feature map into the fourth cleaning network to obtain the fourth picture according to the output of the fourth cleaning network, wherein the fourth cleaning network is used for removing raindrops in image areas of the picture to be processed in which the degree of influence of the raindrops is less than the preset degree threshold.
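Because the raindrop branch mirrors the rain line branch step for step, a sketch can simply reuse the helpers shown above with the raindrop feature map; the tensor shapes below are arbitrary and purely illustrative.

```python
import torch

# Illustrative shapes only: a batch of feature maps and a raindrop map.
features = torch.randn(1, 32, 128, 128)
drop_map = torch.rand(1, 1, 128, 128)

# Mirror the rain line branch (helpers sketched above) for raindrops.
third_fusion, fourth_fusion = rain_line_fusion(features, drop_map)
cleaner = RainLineCleaner(fused_channels=third_fusion.shape[1])
third_picture, fourth_picture = cleaner(third_fusion, fourth_fusion)
```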
In particular, the processor 601, the storage device 602, and the network interface 603 described in the embodiments of the present application may perform the implementations described in the embodiments of the image processing method provided in fig. 1 or fig. 2, or the implementations described in the embodiments of the image processing apparatus provided in fig. 5; details are not repeated here.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program comprises one or more instructions and may be stored in a computer storage medium; when the program is executed, the processes of the above method embodiments may be included. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the steps of the method embodiments described above.
The above disclosure is illustrative only of some embodiments of the application and is not intended to limit the scope of the application, which is defined by the claims and their equivalents.

Claims (13)

1. An image processing method, the method comprising:
acquiring a picture to be processed;
performing feature detection on the picture to be processed to obtain feature information, a rain line feature map, and a raindrop feature map of the picture to be processed, wherein the rain line feature map comprises an image area corresponding to a rain line in the picture to be processed, and the raindrop feature map comprises an image area corresponding to a raindrop in the picture to be processed;
performing fusion processing on the feature information and the rain line feature map to obtain a rain line fusion feature map;
inputting the rain line fusion feature map into a rain line cleaning network of a rain removal model to obtain a picture with the rain lines removed;
performing fusion processing on the feature information and the raindrop feature map to obtain a raindrop fusion feature map;
inputting the raindrop fusion feature map into a raindrop cleaning network of the rain removal model to obtain a picture with the raindrops removed;
and generating a target picture with the rain lines and raindrops removed according to the picture to be processed, the picture with the rain lines removed, and the picture with the raindrops removed.
2. The method according to claim 1, wherein the performing feature detection on the picture to be processed to obtain feature information, a rain line feature map, and a raindrop feature map of the picture to be processed comprises:
performing feature detection on the picture to be processed to obtain feature information of the picture to be processed;
analyzing the feature information to obtain a rain line image area and a raindrop image area in the picture to be processed;
and taking the rain line image area as the rain line feature map of the picture to be processed and the raindrop image area as the raindrop feature map of the picture to be processed.
3. The method according to claim 2, wherein the performing feature detection on the picture to be processed to obtain feature information of the picture to be processed comprises:
inputting the picture to be processed into a feature extraction network of the rain removal model to perform feature detection, and obtaining an output result of the feature extraction network;
and taking the output result as the feature information of the picture to be processed.
4. The method according to claim 3, wherein the feature extraction network comprises a dual-attention detection network.
5. The method according to claim 1, wherein the generating a target picture with the rain lines and raindrops removed according to the picture to be processed, the picture with the rain lines removed, and the picture with the raindrops removed comprises:
inputting the picture to be processed, the picture with the rain lines removed, and the picture with the raindrops removed into an image synthesis network of the rain removal model to obtain the target picture with the rain lines and raindrops removed.
6. The method according to claim 1, wherein the performing fusion processing on the feature information and the rain line feature map to obtain a rain line fusion feature map comprises:
performing fusion processing on the feature information and the rain line feature map to obtain a first fusion feature map;
obtaining a difference between a preset constant and the rain line feature map;
performing fusion processing on the feature information and the difference between the preset constant and the rain line feature map to obtain a second fusion feature map;
and taking the first fusion feature map and the second fusion feature map as the rain line fusion feature map.
7. The method according to claim 6, wherein the rain line cleaning network of the rain removal model comprises a first cleaning network and a second cleaning network, the picture with the rain lines removed comprises a first picture and a second picture, and the inputting the rain line fusion feature map into the rain line cleaning network of the rain removal model to obtain the picture with the rain lines removed comprises:
inputting the first fusion feature map into the first cleaning network to obtain the first picture according to the output of the first cleaning network, wherein the first cleaning network is used for removing rain lines in image areas of the picture to be processed in which the degree of influence of the rain lines is greater than or equal to a preset degree threshold;
and inputting the second fusion feature map into the second cleaning network to obtain the second picture according to the output of the second cleaning network, wherein the second cleaning network is used for removing rain lines in image areas of the picture to be processed in which the degree of influence of the rain lines is less than the preset degree threshold.
8. The method according to claim 1, wherein the performing fusion processing on the feature information and the raindrop feature map to obtain a raindrop fusion feature map comprises:
performing fusion processing on the feature information and the raindrop feature map to obtain a third fusion feature map;
obtaining a difference between a preset constant and the raindrop feature map;
performing fusion processing on the feature information and the difference between the preset constant and the raindrop feature map to obtain a fourth fusion feature map;
and taking the third fusion feature map and the fourth fusion feature map as the raindrop fusion feature map.
9. The method according to claim 8, wherein the raindrop cleaning network of the rain removal model comprises a third cleaning network and a fourth cleaning network, the picture with the raindrops removed comprises a third picture and a fourth picture, and the inputting the raindrop fusion feature map into the raindrop cleaning network of the rain removal model to obtain the picture with the raindrops removed comprises:
inputting the third fusion feature map into the third cleaning network to obtain the third picture according to the output of the third cleaning network, wherein the third cleaning network is used for removing raindrops in image areas of the picture to be processed in which the degree of influence of the raindrops is greater than or equal to a preset degree threshold;
and inputting the fourth fusion feature map into the fourth cleaning network to obtain the fourth picture according to the output of the fourth cleaning network, wherein the fourth cleaning network is used for removing raindrops in image areas of the picture to be processed in which the degree of influence of the raindrops is less than the preset degree threshold.
10. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the picture to be processed;
the detection module is used for performing feature detection on the picture to be processed to obtain feature information, a rain line feature map, and a raindrop feature map of the picture to be processed, wherein the rain line feature map comprises an image area corresponding to a rain line in the picture to be processed, and the raindrop feature map comprises an image area corresponding to a raindrop in the picture to be processed;
the processing module is used for performing fusion processing on the feature information and the rain line feature map to obtain a rain line fusion feature map; inputting the rain line fusion feature map into a rain line cleaning network of a rain removal model to obtain a picture with the rain lines removed; performing fusion processing on the feature information and the raindrop feature map to obtain a raindrop fusion feature map; inputting the raindrop fusion feature map into a raindrop cleaning network of the rain removal model to obtain a picture with the raindrops removed; and generating a target picture with the rain lines and raindrops removed according to the picture to be processed, the picture with the rain lines removed, and the picture with the raindrops removed.
11. An electronic device comprising a processor and a storage means, the processor and the storage means being interconnected, the storage means being configured to store a computer program comprising program instructions, and the processor being configured to invoke the program instructions to perform the image processing method according to any one of claims 1-9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the image processing method according to any one of claims 1-9.
13. A computer program product comprising computer instructions which, when executed by a computer processor, implement the image processing method according to any one of claims 1-9.
CN202010940502.8A 2020-09-09 2020-09-09 Image processing method and device, electronic equipment and storage medium Active CN112085680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010940502.8A CN112085680B (en) 2020-09-09 2020-09-09 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112085680A CN112085680A (en) 2020-12-15
CN112085680B (en) 2023-12-12

Family

ID=73731709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010940502.8A Active CN112085680B (en) 2020-09-09 2020-09-09 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112085680B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102020215859A1 (en) 2020-12-15 2022-06-15 Conti Temic Microelectronic Gmbh Correction of images from a camera in rain, light and dirt
CN112767274A (en) * 2021-01-25 2021-05-07 江南大学 Light field image rain stripe detection and removal method based on transfer learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537622A (en) * 2014-12-31 2015-04-22 中国科学院深圳先进技术研究院 Method and system for removing raindrop influence in single image
CN107220652A (en) * 2017-05-31 2017-09-29 北京京东尚科信息技术有限公司 Method and apparatus for handling picture
CN110544217A (en) * 2019-08-30 2019-12-06 深圳市商汤科技有限公司 image processing method and device, electronic equipment and storage medium
CN111191606A (en) * 2019-12-31 2020-05-22 Oppo广东移动通信有限公司 Image processing method and related product
AU2020100196A4 (en) * 2020-02-08 2020-03-19 Juwei Guan A method of removing rain from single image based on detail supplement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
R. Qian et al., "Attentive generative adversarial network for raindrop removal from a single image," IEEE, pp. 1-10. *

Similar Documents

Publication Publication Date Title
Hu et al. Single-image real-time rain removal based on depth-guided non-local features
CN112163498B (en) Method for establishing pedestrian re-identification model with foreground guiding and texture focusing functions and application of method
CN110246181B (en) Anchor point-based attitude estimation model training method, attitude estimation method and system
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN109543691A (en) Ponding recognition methods, device and storage medium
CN112085680B (en) Image processing method and device, electronic equipment and storage medium
CN114943876A (en) Cloud and cloud shadow detection method and device for multi-level semantic fusion and storage medium
Li et al. Edge-aware regional message passing controller for image forgery localization
CN112561879B (en) Ambiguity evaluation model training method, image ambiguity evaluation method and image ambiguity evaluation device
Zhang et al. Exploring event-driven dynamic context for accident scene segmentation
CN111833360B (en) Image processing method, device, equipment and computer readable storage medium
CN115410081A (en) Multi-scale aggregated cloud and cloud shadow identification method, system, equipment and storage medium
CN104463962B (en) Three-dimensional scene reconstruction method based on GPS information video
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
CN115222750A (en) Remote sensing image segmentation method and system based on multi-scale fusion attention
Qiu et al. Saliency detection using a deep conditional random field network
Tang et al. SDRNet: An end-to-end shadow detection and removal network
Jiang et al. Pixel-wise content attention learning for single-image deraining of autonomous vehicles
Soni et al. Deep learning based approach to generate realistic data for ADAS applications
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium
Chen et al. Deep trident decomposition network for single license plate image glare removal
CN111539420B (en) Panoramic image saliency prediction method and system based on attention perception features
CN113962332A (en) Salient target identification method based on self-optimization fusion feedback
CN114119757A (en) Image processing method, apparatus, device, medium, and computer program product
Zhuang et al. Dimensional transformation mixer for ultra-high-definition industrial camera dehazing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant