CN113674158A - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113674158A
CN113674158A
Authority
CN
China
Prior art keywords: image, scene, target, enhanced, pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010403137.7A
Other languages
Chinese (zh)
Inventor
张娅楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN202010403137.7A
Publication of CN113674158A
Legal status: Pending

Classifications

    • G06T5/73
    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image

Abstract

The embodiment of the invention discloses an image processing method, apparatus, device and storage medium. The method comprises the following steps: performing preset image degradation scene recognition based on a target image and on a pixel-inverted image of the target image, respectively, to determine the acquisition scene of the target image; and performing image enhancement on the target image according to its acquisition scene to obtain an enhanced image of the target image. With this scheme, separate recognition algorithms for different image degradation scenes are avoided: through pixel inversion, an image can be converted from one image degradation scene to another so as to fit a single fixed scene recognition algorithm, thereby improving the scene adaptability of image degradation scene recognition.

Description

Image processing method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to an image processing method, an image processing device, image processing equipment and a storage medium.
Background
With the progress of electronic computer technology, computer image processing has developed rapidly, has been successfully applied in many fields related to imaging, and plays a very important role.
In photographing or video monitoring, the acquired images are affected by poor imaging conditions such as low light and haze, and suffer from excessive brightness, excessive darkness, blur, or poor visibility, so they need to be enhanced for use in subsequent processing. However, images acquired in different scenes need to be processed by different image enhancement algorithms, so the scene adaptability of image processing is poor.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, image processing equipment and a storage medium, which are used for realizing self-adaptive processing of images in different scenes and improving scene adaptability during image processing.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
performing preset image degradation scene recognition based on a target image and a pixel negation image of the target image respectively to determine an acquisition scene of the target image;
and according to the acquisition scene of the target image, carrying out image enhancement on the target image to obtain an enhanced image of the target image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, where the apparatus includes:
the quality-degraded scene recognition module is used for performing preset image quality-degraded scene recognition based on a target image and a pixel negation image of the target image respectively so as to determine an acquisition scene of the target image;
and the image enhancement processing module is used for carrying out image enhancement on the target image according to the acquisition scene of the target image to obtain an enhanced image of the target image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processing devices;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processing devices, the one or more processing devices are caused to implement the image processing method according to any one of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processing apparatus, implements the image processing method described in any one of the embodiments of the present invention.
The embodiment of the invention provides an image processing method based on the following observation: after pixel inversion, an image acquired in a low-illumination scene or a wide dynamic scene resembles an image acquired in a haze scene, whereas an image acquired in a normal scene does not have this characteristic. Therefore, whatever image degradation scene the target image belongs to, the same scene recognition detection algorithm can be applied to the target image and to its pixel-inverted image, the acquisition scene of the target image can be determined by screening the two recognition results, and the appropriate image enhancement can then be determined according to that acquisition scene. In this way, separate recognition algorithms for different image degradation scenes are avoided, an image can be converted from one image degradation scene to another through pixel inversion so as to fit the fixed scene recognition algorithm, and the scene adaptability of image degradation scene recognition is improved.
The above is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the content of the description, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the invention are set forth below.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of an image processing method provided in an embodiment of the present invention;
FIG. 2 is a flow chart of another image processing method provided in an embodiment of the present invention;
fig. 3 is a flowchart of scene recognition on an image to be recognized according to an embodiment of the present invention;
FIG. 4 is a flow chart of yet another image processing method provided in an embodiment of the present invention;
FIG. 5 is a flowchart of image enhancement for an image to be enhanced according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of image enhancement of an image to be enhanced according to an embodiment of the present invention;
fig. 7 is a block diagram of a configuration of an image processing apparatus provided in an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations (or steps) can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present invention. The embodiment can be applied to the situation of image enhancement of images acquired under different scenes. The method can be executed by an image processing device, which can be implemented in software and/or hardware and integrated on any electronic equipment with network communication function. As shown in fig. 1, the image processing method provided in the present embodiment may include the following steps S110 to S120:
and S110, performing preset image degradation scene recognition based on the target image and the pixel negation image of the target image respectively to determine the acquisition scene of the target image.
In this embodiment, the electronic device may be a computer device or a server. The computer equipment can be various types of terminal equipment or monitoring equipment and the like; for example, the monitoring device may be a video camera, a video recorder, an electronic monitor, or the like.
In this embodiment, the image degradation scenes may include haze scenes, low-illumination scenes, wide dynamic scenes, and the like. When an image is acquired in an image degradation scene, the acquired target image suffers from excessive brightness, excessive darkness, blur, or poor visibility. Therefore, the acquired target image needs to be enhanced so as to purposefully emphasize the overall or local characteristics of the image, make an unclear image clear, or emphasize certain features of interest and enlarge the differences between distinct object features in the image; image enhancement also suppresses features of no interest, improving image quality and enriching the information content.
In this embodiment, images acquired under different image degradation scenes generally need to be processed by different enhancement algorithms; therefore, before image enhancement is performed on a target image, its acquisition scene needs to be determined. That is, it is necessary to determine in which of a normal scene, a haze scene, a low-illumination scene, and a wide dynamic scene the target image was acquired. According to statistical analysis, the following relationship exists between images in a haze scene and images in low-illumination and wide dynamic scenes: for the same subject, the pixel-inverted image obtained by inverting an image acquired in a low-illumination scene or a wide dynamic scene has a histogram distribution and visual appearance similar to those of an image acquired in a haze scene, while an image acquired in a normal scene does not have this characteristic.
Based on this theory, the pixels of the target image can be inverted to obtain the pixel-inverted image of the target image, and preset image degradation scene recognition can be performed on the target image and on the pixel-inverted image respectively, to judge whether the target image belongs to the preset image degradation scene and whether the pixel-inverted image belongs to the preset image degradation scene. By combining the two recognition results, it can be judged whether the target image belongs to an image degradation scene at all, and to which image degradation scene it belongs, so that the acquisition scene of the target image can be determined. It should be noted that the preset image degradation scene recognition of the target image and of the pixel-inverted image may be performed in parallel, or the target image may be processed first and the pixel-inverted image afterwards.
In this embodiment, when inverting the target image, the R, G and B channels of the target image may each be inverted to obtain the pixel-inverted image of the target image. Taking an 8-bit target image as an example, the image inversion formula is:

$$\tilde{I}_c(i,j) = 255 - I_c(i,j), \quad c \in \{R, G, B\}$$

where $I_c$ and $\tilde{I}_c$ respectively denote the input target image and the pixel-inverted image after inversion.
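As a sketch of this formula (assuming an 8-bit image stored as a NumPy array; the function name is illustrative, not from the patent):

```python
import numpy as np

def invert_pixels(img: np.ndarray) -> np.ndarray:
    """Channel-wise pixel inversion of an 8-bit image:
    I_tilde_c = 255 - I_c for every channel c in {R, G, B}."""
    assert img.dtype == np.uint8, "8-bit image expected"
    return 255 - img  # element-wise, hence applied to each channel separately
```

Inverting twice recovers the original image, a property the enhancement stage described in a later embodiment relies on.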
And S120, carrying out image enhancement on the target image according to the acquisition scene of the target image to obtain an enhanced image of the target image.
In this embodiment, after the target image acquisition scene is determined, a suitable image enhancement algorithm may be matched for the target image, and the image enhancement is performed on the target image belonging to the image degradation scene through the matched image enhancement algorithm, so as to improve the visual effect of the target image.
The embodiment of the invention provides an image processing method that exploits the similarity, after pixel inversion, between images acquired in low-illumination and wide dynamic scenes and images acquired in a haze scene: through pixel inversion, an image can be converted from one image degradation scene to another so as to fit a fixed image degradation scene recognition algorithm. On this basis, the image degradation scene of the target image can be identified by applying one and the same scene detection algorithm to the target image and to its pixel-inverted image, which avoids recognition and detection with different scene recognition algorithms and improves the scene adaptability of image degradation scene recognition.
Fig. 2 is a flowchart of another image processing method provided in an embodiment of the present invention, and this embodiment is further optimized based on the above embodiment, and this embodiment may be combined with various alternatives in one or more embodiments described above. As shown in fig. 2, the image processing method provided in the present embodiment may include the following steps S210 to S240:
s210, performing preset image degradation scene recognition on the target image to determine whether the target image belongs to the image of the preset image degradation scene.
In this embodiment, the preset image degradation scene is a fixed image degradation scene preset in the acquisition scene detection stage, so that the preset image degradation scene is only used for determining whether an image belongs to the image of the preset image degradation scene in the scene detection stage. In an optional example, the preset image degradation scene can be specifically set as a haze scene, so that the target image can be judged according to the haze scene to determine whether the target image belongs to the image of the haze scene. In another optional example, the preset image degradation scene may be specifically set to be a low-illumination scene and a wide dynamic scene, so that the low-illumination scene and the wide dynamic scene may be determined for the target image to determine whether the target image belongs to an image of the low-illumination or wide dynamic scene.
And S220, if the target image is determined not to belong to the image of the preset image degradation scene, performing preset image degradation scene recognition on the pixel negation image of the target image to determine whether the pixel negation image of the target image belongs to the image of the preset image degradation scene.
In this embodiment, after excluding an image whose target image does not belong to the preset image degradation scene, it may be continuously determined whether the target image is an image belonging to another image degradation scene other than the preset image degradation scene or an image belonging to a normal scene. Here, the pixel-inverted image of the target image is subjected to preset image degradation scene recognition to determine whether the pixel-inverted image belongs to an image of the preset image degradation scene.
And S230, determining the acquisition scene of the target image according to the identification result of the pixel negation image of the target image.
In this embodiment, by using the similarity theory between the image acquired in the low-illumination scene and the wide-dynamic scene after the image inversion and the image acquired in the haze scene, it can be known that the image acquired in the preset image degradation scene after the image inversion is approximate to an image in another image degradation scene. Based on the theory, if the pixel inversion image of the target image is determined to belong to the image of the preset image degradation scene according to the recognition detection result of the pixel inversion image, the acquisition scene of the target image is determined to be another image degradation scene. Otherwise, determining the acquisition scene of the target image as a normal scene.
In this embodiment, optionally, when the preset image degradation scene is a haze scene, another image degradation scene is a low-illumination or wide-dynamic scene. Still optionally, when the preset image degradation scene is a low-illumination or wide-dynamic scene, the other image degradation scene is a haze scene.
S240, according to the acquisition scene of the target image, image enhancement is carried out on the target image to obtain an enhanced image of the target image.
The embodiment of the invention provides an adaptive image processing method: preset image degradation scene judgment is performed on the input target image, and if the target image does not belong to the preset image degradation scene, the same judgment method is applied to the pixel-inverted image of the target image, thereby screening out which image degradation scene, if any, the target image actually belongs to.
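The two-stage screening above can be sketched as follows (a minimal sketch; `is_preset_scene` is a hypothetical predicate standing in for the preset image degradation scene recognition, e.g. a haze detector):

```python
import numpy as np

def classify_scene(img: np.ndarray, is_preset_scene) -> str:
    """Two-stage screening: test the image itself, then its pixel inversion.

    is_preset_scene: predicate deciding whether an image belongs to the
    preset degradation scene (e.g. haze). Returns one of:
    'preset', 'other_degraded' (e.g. low-light/wide-dynamic), 'normal'.
    """
    if is_preset_scene(img):
        return "preset"             # S210: target image matches directly
    if is_preset_scene(255 - img):  # S220: test the pixel-inverted image
        return "other_degraded"     # S230: the other degradation scene
    return "normal"
```

With a haze predicate, a bright hazy frame is classified directly, a low-light frame is caught only after inversion, and everything else falls through to the normal scene.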
Fig. 3 is a flowchart of scene recognition on an image to be recognized according to an embodiment of the present invention; this embodiment is further optimized on the basis of the above embodiments and may be combined with the alternatives in one or more of them. Here, the preset image degradation scene is taken to be a haze scene, and the image to be recognized is the target image or the pixel-inverted image of the target image. As shown in fig. 3, the method for scene recognition of an image to be recognized provided in this embodiment may include the following steps S310 to S330:
s310, determining a fog concentration statistic value of each pixel in the image to be recognized according to the brightness information and the color saturation information of the image to be recognized in the preset color gamut space.
In the related art, haze scene recognition for an image to be recognized is mostly performed using the luminance information of the image alone, or using luminance information combined with chrominance information. In the first approach, because of haze, a degraded image usually has high luminance or a luminance histogram concentrated in the highlight region, so scene recognition can be performed by analyzing the luminance histogram distribution of the image to be recognized; however, if a large area of sky or a white object is present in the image, this luminance statistic leads to scene misjudgment. In the second approach, value ranges are usually given for the luminance and chrominance of a haze image: when the luminance and chrominance of the image to be recognized fall within these ranges, it is considered a haze image, and otherwise not. Although this is more reasonable than referring to luminance alone, the method has poor robustness, is easily affected by noise, and gives unstable scene judgments.
In this embodiment, the image to be recognized is the target image or the pixel-inverted image of the target image mentioned in the foregoing embodiments. For haze scene recognition, this embodiment does not simply threshold the luminance and chrominance of the image to decide whether it was acquired in a haze scene; instead, the image to be recognized is converted to a preset color gamut space to obtain its brightness information and color saturation information there. The fog concentration of each pixel is then determined from the brightness and color saturation information in the preset color gamut space, and the haze scene judgment continues from those per-pixel fog concentrations. Optionally, the color gamut conversion may use a standard color space such as HSV or HSI, or another customized luminance-chrominance separating color space. Taking HSV space as an example, an image to be recognized in RGB format is converted to HSV space, and its value component V and saturation component S are taken as the brightness information and color saturation information in the preset color gamut space, respectively.
In an optional manner of this embodiment, determining the fog concentration statistic of each pixel in the image to be recognized from the brightness information and the color saturation information in the preset color gamut space comprises the following steps A1 to A3:
step A1, determining contrast information of the image to be recognized in the preset color gamut space according to the brightness information of the image to be recognized in the preset color gamut space.
In this embodiment, after the brightness information of the image to be recognized in the preset color gamut space is determined, the contrast information of each pixel can be determined from the brightness of that pixel and of its surrounding pixels, giving the contrast information of the image to be recognized in the preset color gamut space. Optionally, noise filtering is applied to the brightness information and the color saturation information in the preset color gamut space to weaken the influence of noise on the fog concentration statistics. For example, the luminance component V and the saturation component S of the image to be recognized are smoothed to obtain the filtered luminance component $\tilde{V}$ and the filtered color saturation component $\tilde{S}$. The smoothing can be implemented with a low-pass filtering algorithm such as mean filtering, Gaussian filtering, or NLM.
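A minimal sketch of such a smoothing filter (mean filtering via an integral image; purely illustrative, the patent does not prescribe a particular implementation):

```python
import numpy as np

def mean_filter(x: np.ndarray, r: int = 2) -> np.ndarray:
    """Smooth a 2-D component (e.g. V or S) with a (2r+1)x(2r+1) mean
    filter, using edge padding and an integral image for O(1) window sums."""
    k = 2 * r + 1
    p = np.pad(x.astype(np.float64), r, mode="edge")
    ii = np.cumsum(np.cumsum(p, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # prepend a zero row and column
    h, w = x.shape
    s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
         - ii[k:k + h, :w] + ii[:h, :w])
    return s / (k * k)
```

Any of the filters named above (Gaussian, NLM) could be substituted; only the low-pass property matters for suppressing noise in the fog statistics.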
And A2, fog concentration estimation is carried out respectively based on the color saturation information, the brightness information and the contrast information, and a first fog concentration estimation value, a second fog concentration estimation value and a third fog concentration estimation value of each pixel in the image to be recognized are determined.
In the present embodiment, the fog concentration is estimated from the color saturation information, the brightness information, and the contrast information, respectively, in a preset fog concentration calculation manner. Optionally, Gaussian forms such as the following may be used (the exact formulas are not unique, as noted below):

$$p_s(i,j)=\exp\left(-\frac{\tilde{S}(i,j)^2}{2\sigma_s^2}\right),\quad p_v(i,j)=\exp\left(-\frac{\left(255-\tilde{V}(i,j)\right)^2}{2\sigma_v^2}\right),\quad p_c(i,j)=\exp\left(-\frac{C(i,j)^2}{2\sigma_c^2}\right)$$

where $C(i,j)$ is the local contrast of $\tilde{V}$ at the current position; $\sigma_s$, $\sigma_v$ and $\sigma_c$ are preset standard deviation parameters; and $p_s$, $p_v$ and $p_c$ are the fog concentration estimates of each pixel in the image to be recognized based on color saturation, local brightness and local contrast, respectively.
And step A3, taking the product of the first fog concentration estimated value, the second fog concentration estimated value and the third fog concentration estimated value of each pixel as a fog concentration statistic value of each pixel in the image to be identified.
In this embodiment, the fog concentration statistic of each pixel in the image to be recognized can be computed by the formula $p(i,j) = p_s(i,j) \times p_v(i,j) \times p_c(i,j)$. The duller the colors, the poorer the visibility, and the lower the definition of the image, the greater the fog concentration; that is, the fog concentration grows as the color saturation decreases, the local contrast decreases, and the brightness increases.
Note that the above formulas for computing $p_s$, $p_v$, $p_c$ and $p$ are not unique: any choice is admissible as long as $p$ is a decreasing function of saturation and contrast and an increasing function of luminance.
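A sketch combining steps A1 to A3 under the Gaussian choice of $p_s$, $p_v$, $p_c$ shown above (the σ values, the 3×3 contrast window, and the HSV-style S and V computed directly from RGB are all illustrative assumptions, not values from the patent):

```python
import numpy as np

def fog_density(rgb: np.ndarray, sigma_s=0.3, sigma_v=80.0, sigma_c=40.0):
    """Per-pixel fog concentration statistic p = p_s * p_v * p_c.

    rgb: float array (H, W, 3) in [0, 255]. HSV-style components:
    V = max(R, G, B), S = 1 - min/max. Local contrast is approximated
    by the deviation of V from its 3x3 local mean.
    """
    v = rgb.max(axis=2)
    s = 1.0 - rgb.min(axis=2) / np.maximum(v, 1e-6)
    # 3x3 local mean of V; contrast C = |V - local mean|
    pad = np.pad(v, 1, mode="edge")
    local = sum(pad[i:i + v.shape[0], j:j + v.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    c = np.abs(v - local)
    p_s = np.exp(-(s ** 2) / (2 * sigma_s ** 2))             # low saturation -> foggy
    p_v = np.exp(-((255.0 - v) ** 2) / (2 * sigma_v ** 2))   # high brightness -> foggy
    p_c = np.exp(-(c ** 2) / (2 * sigma_c ** 2))             # low contrast -> foggy
    return p_s * p_v * p_c
```

A flat bright gray frame scores near 1 on all three factors, while a saturated colorful frame is suppressed by the saturation factor alone, matching the monotonicity conditions stated above.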
S320, taking the pixel of which the fog concentration statistic value in the image to be recognized belongs to the range of the preset statistic value interval as the fog-haze pixel in the image to be recognized.
In the present embodiment, a statistic threshold $p_T$ defining the interval range is set. Pixels in the image to be recognized whose fog concentration statistic satisfies $p > p_T$ are taken as haze pixels, and pixels with $p \le p_T$ are taken as non-haze pixels.
S330, determining whether the image to be recognized belongs to the image of the haze scene or not according to the proportion of the haze pixels in the image to be recognized.
In this embodiment, the proportion of haze pixels among all pixels of the image to be recognized is counted and compared against Thr, a preset proportion threshold. When the proportion of haze pixels is greater than Thr, the image to be recognized is determined to be an image of a haze scene; otherwise, it is determined to be an image of a non-haze scene.
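Steps S320 and S330 then reduce to a two-level threshold test (a sketch; `p_t` and `thr` correspond to $p_T$ and Thr, with illustrative default values):

```python
import numpy as np

def is_haze_image(p: np.ndarray, p_t: float = 0.5, thr: float = 0.6) -> bool:
    """Decide the haze scene from the per-pixel fog statistic map p:
    a pixel is a haze pixel if p > p_t (S320); the image belongs to a
    haze scene if the haze-pixel proportion exceeds thr (S330)."""
    haze_ratio = float((p > p_t).mean())
    return haze_ratio > thr
```
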
The embodiment of the invention provides an image haze scene recognition method: according to the color saturation and brightness characteristics of the image to be recognized, fog concentration statistics are computed from its color saturation and local brightness information, and whether the image to be recognized belongs to a haze scene is then determined from the proportion of pixels whose fog concentration statistic meets the threshold. This in turn improves the subsequent image enhancement effect for the image to be recognized.
Fig. 4 is a flowchart of another image processing method provided in an embodiment of the present invention, and the embodiment of the present invention further optimizes the process of "performing image enhancement on the target image according to the acquisition scene of the target image" in the foregoing embodiment on the basis of the foregoing embodiment, and the present embodiment may be combined with various alternatives in one or more embodiments. As shown in fig. 4, the image processing method provided in this embodiment may include the following steps S410 to S430:
s410, performing preset image degradation scene recognition based on the target image and the pixel negation image of the target image respectively to determine the acquisition scene of the target image.
And S420, if the acquisition scene of the target image belongs to an image degradation scene and the image degradation scene associated with the image enhancement stage is different, performing image enhancement on the pixel negation image of the target image.
In this embodiment, although the acquisition scene of the target image has been determined, a suitable enhancement method must still be applied in the image enhancement stage. For example, images of haze scenes can use digital defogging algorithms such as the dark channel prior, while images of low-illumination or wide dynamic scenes can use algorithms such as Retinex or image layering. In the image enhancement stage it is therefore usually difficult to process images of different image degradation scenes with the same enhancement algorithm; that is, each enhancement algorithm has poor scene adaptability, and enhancement modes for all the various scenes would otherwise have to be provided in the enhancement stage to meet the requirements of images from different scenes.
Based on the above analysis, when the acquisition scene of the target image is determined to belong to an image degradation scene, it can further be determined whether that image degradation scene is the same as the one associated with the image enhancement stage. An image acquired in a haze scene is similar to the pixel-inverted version of an image acquired in a low-illumination or wide-dynamic scene; exploiting this similarity, the target image can be converted from one image degradation scene to the other by pixel inversion so as to match the image degradation scene associated with the image enhancement stage, and the enhancement algorithm of that scene can then be applied to the target image in the image enhancement stage.
In this embodiment, if the acquisition scene of the target image belongs to an image degradation scene and is the same as the scene associated with the image enhancement stage, image enhancement is performed directly on the target image, and the enhanced image is obtained immediately. If the two scenes differ, image enhancement is performed on the pixel-inverted image of the target image, yielding an enhanced image of the inverted image. If the acquisition scene of the target image is a normal scene, no enhancement is performed on the target image. Optionally, after the acquisition scene of the target image is determined, whether to invert the target image may be decided by the following logic:
(Formula image in the original patent: decision logic for matching the acquisition scene to the enhancement stage and inverting the image when the two scenes differ.)
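The decision logic above can be sketched as follows. This is a minimal illustration: the string scene labels, the `invert` helper and the caller-supplied `enhance` function are assumptions for the sketch, not names taken from the patent.

```python
HAZE, LOW_LIGHT, NORMAL = "haze", "low_light", "normal"

def invert(image):
    """Pixel-wise negation of an 8-bit image given as nested lists."""
    return [[255 - p for p in row] for row in image]

def enhance_for_stage(image_scene, stage_scene, image, enhance):
    """Route the image through an enhancement stage configured for stage_scene.

    Matching degradation scene: enhance directly.  Mismatched degradation
    scene: invert, enhance, invert back.  Normal scene: leave untouched."""
    if image_scene == NORMAL:
        return image
    if image_scene == stage_scene:
        return enhance(image)
    return invert(enhance(invert(image)))
```

Because pixel inversion is an involution, an identity enhancer maps the image back to itself in the mismatched branch, which makes the round trip easy to verify.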
S430, inverting the enhanced image of the pixel-inverted image of the target image to obtain the enhanced image of the target image.
In this embodiment, after the enhanced image of the pixel-inverted image is obtained, it only needs to be inverted once more; the inverted result can then be used as the enhanced image of the target image.
With this scheme, the image degradation scene of the target image can be matched to the enhancement stage through the pixel inversion operation, so that images from different degradation scenes can all be enhanced with the enhancement mode of a single degradation scene. This avoids having to enhance images from different degradation scenes with separate scene-specific enhancement modes, and improves the scene adaptability of the image enhancement stage.
Fig. 5 is a flowchart of image enhancement of an image to be enhanced according to an embodiment of the present invention, further optimized on the basis of the above embodiments; it may be combined with the alternatives in one or more of those embodiments. The description takes the image to be enhanced to be the target image or the pixel-inverted image of the target image as an example. As shown in fig. 5, the image enhancement method provided in this embodiment may include the following steps S510 to S540:
S510, layering the initial brightness component of the image to be enhanced to obtain a Base layer image and a detail layer image of the initial brightness component.
In this embodiment, the image to be enhanced is the target image or the pixel-inverted image of the target image mentioned in the foregoing embodiments. Fig. 6 is a schematic diagram of image enhancement of an image to be enhanced according to an embodiment of the present invention. Referring to fig. 6, the initial luminance component of the image to be enhanced may be determined with the conventional luminance formula, for example Y = 0.299 × R + 0.587 × G + 0.114 × B, where R, G and B are the three color channels of the image to be processed.
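A small illustration of the luminance computation, using the conventional BT.601 weights (0.299, 0.587, 0.114), which sum to 1 so that a pure white pixel keeps full brightness:

```python
def luminance(r, g, b):
    # Conventional BT.601 luma weights; note the printed 0.517 for G in the
    # original patent text is read here as the standard 0.587, so the
    # three weights sum to 1.
    return 0.299 * r + 0.587 * g + 0.114 * b
```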
In this embodiment, referring to fig. 6, a traditional filtering algorithm such as linear filtering (mean filtering, Gaussian filtering, etc.) or nonlinear filtering (edge-preserving filters such as the guided filter or NLM) may be used to layer the initial luminance component of the image to be enhanced and preliminarily separate the Base layer and the detail layer. After the Base layer image S is obtained by separation, the detail layer image D is D = I1 − S, where I1 denotes the initial luminance component. Optionally, in order to better separate edge details from noise, a total variation model, which is more robust to noise, may be used to layer the initial luminance component of the image to be enhanced. The specific separation formula is as follows:
min_S ||I1 − S||_2^2 + λ·||∇S||_1
where λ is a preset regularization parameter and S is the smoothed Base layer image; the optimal solution of the above minimization can be obtained through the FTVd algorithm.
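A hedged sketch of the layering step, using a naive mean filter as a stand-in for the total-variation model (the FTVd solver itself is not reproduced here); the additive split D = I1 − S holds regardless of which smoother is chosen:

```python
import numpy as np

def mean_filter(img, k=3):
    """Naive k x k mean filter with edge replication; a simple stand-in
    for the total-variation smoothing described in the text."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def split_layers(y):
    """Layer the initial luminance I1 into Base (S) and detail (D = I1 - S)."""
    y = np.asarray(y, dtype=float)
    base = mean_filter(y)
    return base, y - base
```

Whatever smoother is substituted, the two layers always recombine exactly to the initial luminance, which is the property the later reconstruction relies on.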
S520, carrying out contrast enhancement on the Base layer image to obtain the enhanced Base layer image.
In the present embodiment, referring to fig. 6, after the luminance component of the image to be enhanced is separated, a Base layer image and a detail layer image of the initial luminance component are obtained. The Base layer image from the preliminary separation contains most of the low-frequency information and the large edges. Contrast enhancement may be performed on the Base layer image of the initial luminance component to obtain a contrast-enhanced Base layer image S1. In this way, contrast enhancement is applied to the Base layer alone, which avoids introducing noise during contrast enhancement. Optionally, a global algorithm (gamma correction, histogram equalization, etc.) or a local algorithm (retinex, the dark channel algorithm, etc.) may be used to perform contrast enhancement on the Base layer image.
Taking the retinex algorithm as an example, the steps are as follows. Convert S to the log domain: s = log(S). Perform Gaussian filtering on s to compute the illumination image:

L = w ⊗ s

where w is a preset Gaussian filtering template and ⊗ denotes convolution. The reflectance image is r = s − L; linearly mapping r to [0, 255] yields the enhanced image S1.
S530, extracting the texture of the detail layer image to obtain a target texture image; and enhancing the target texture image to obtain the enhanced target texture image.
In the present embodiment, referring to fig. 6, the detail layer image D obtained through the preliminary separation contains both texture details and noise. To further separate texture from noise in D, the texture details in D are extracted as the target texture image. Once the noise has been removed from the detail layer image to obtain a noise-free target texture image, the target texture image can be enhanced with the image enhancement algorithm of the image degradation scene preset for the image enhancement stage.
In this embodiment, optionally, before the target texture image T is enhanced, the Base layer gain used when contrast enhancement is performed on the Base layer image may be determined as

gain = S1 / S

and the target texture image is then enhanced according to this Base layer gain; for example, the enhanced target texture image is

T1 = gain × T
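Under the assumption that the Base layer gain S1 / S is applied element-wise (the original formula images are garbled, so this form is a reconstruction), the texture enhancement step can be sketched as:

```python
import numpy as np

def enhance_texture(base, base_enhanced, texture, eps=1e-6):
    """Scale the target texture image by the Base-layer gain S1 / S.

    The element-wise gain form is an assumption consistent with the text,
    not a verbatim reproduction of the patent's formula; eps guards
    against division by zero."""
    s = np.asarray(base, dtype=float)
    s1 = np.asarray(base_enhanced, dtype=float)
    gain = (s1 + eps) / (s + eps)
    return gain * np.asarray(texture, dtype=float)
```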
In this way, image enhancement is not applied directly to the detail layer of the image to be enhanced; instead, the de-noised texture details obtained by separating noise from the detail layer are enhanced on their own. Noise is thus effectively suppressed while the detail information of the image to be enhanced is strengthened, which improves the image enhancement effect.
In an optional manner of this embodiment, performing texture extraction on the detail layer image to obtain the target texture image may include steps B1-B3:
and step B1, separating the detail layer image to respectively obtain a preliminary texture image and a preliminary noise image.
In this embodiment, referring to fig. 6, an edge-preserving filtering algorithm may be used to decompose the detail layer image D obtained by the preliminary separation, yielding a preliminary texture image T and a preliminary noise image, so that the noise in the detail layer image can be removed. Algorithms such as guided filtering or bilateral filtering may be used as the edge-preserving filter.
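A deliberately crude stand-in for this decomposition, assuming a simple amplitude threshold instead of a real guided or bilateral filter (the threshold tau is an illustrative assumption); it preserves the defining identity N = D − T:

```python
import numpy as np

def split_texture_noise(detail, tau=2.0):
    """Split the detail layer into texture and noise by amplitude:
    large-amplitude detail is kept as texture T, the small remainder
    is treated as the preliminary noise image N = D - T."""
    d = np.asarray(detail, dtype=float)
    texture = np.where(np.abs(d) >= tau, d, 0.0)
    return texture, d - texture
```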
And step B2, determining the probability that each position point in the preliminary noise image belongs to the texture feature point, and extracting the missing texture feature from the preliminary noise image according to the probability that each position point belongs to the texture feature point.
In the present embodiment, referring to fig. 6, the preliminary noise image N is N = D − T. Since N still contains some missed texture details, these missing texture features need to be extracted from it. Before the missing texture features are extracted from the preliminary noise image, the probability that each position point in the preliminary noise image belongs to a texture feature point can be determined, so as to pick out from the preliminary noise image the details that belong to the missing texture.
In this embodiment, optionally, determining the probability that each position point in the preliminary noise image belongs to the texture feature point may include: extracting a detail edge image of the detail layer image and a texture edge image of the target texture image; and for any position point in the preliminary noise image, determining the probability that each position point belongs to the texture feature point according to the strength ratio of the position point between the first edge strength of the detail edge image and the second edge strength of the texture edge image.
And step B3, obtaining a target texture image according to the preliminary texture image and the missing texture features.
For example, a preset high-frequency filter operator (for example the Canny, Roberts or Laplacian operator) can be used to extract the detail edge image of the detail layer image D and the texture edge image of the preliminary texture image T, denoted GD and GT respectively. For a point p at an arbitrary position in the preliminary noise image, GD(p) and GT(p) are close in value when p lies in a strong texture region, while GD(p) is larger than GT(p) when p lies in a strong noise region. The probability that a position point p in the preliminary noise image belongs to texture detail can therefore be defined as:

prob(p) = GT(p) / GD(p)

Finally, according to the probability that each position point belongs to a texture feature point, the target texture image is obtained as:

T*(p) = T(p) + prob(p) × N(p)
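Reading the probability as the edge-strength ratio GT/GD clipped to [0, 1] (the original formula images are garbled, so this form is a reconstruction from the surrounding text), the recovery of missing texture can be sketched as:

```python
import numpy as np

def recover_texture(texture, noise, g_detail, g_texture, eps=1e-6):
    """Fold missed texture back out of the noise layer.

    prob(p) = GT(p) / GD(p), clipped to [0, 1]; the target texture image
    is then T + prob * N.  eps guards against division by zero."""
    g_d = np.asarray(g_detail, dtype=float)
    g_t = np.asarray(g_texture, dtype=float)
    prob = np.clip(g_t / (g_d + eps), 0.0, 1.0)
    return np.asarray(texture, dtype=float) + prob * np.asarray(noise, dtype=float)
```

In a strong texture region (GT close to GD) nearly all of the local noise-layer content is folded back into the texture; in a strong noise region (GT much smaller than GD) almost none is.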
the method has the advantages that sufficient textures can be reserved as far as possible under the condition that noise in the images of the detail layers is removed as far as possible, and loss of texture details is avoided, so that the finally enhanced images are also subjected to feature loss, and further the images cannot be normally used.
S540, determining an enhanced image of the image to be enhanced according to the enhanced Base layer image and the enhanced target texture image.
In an alternative manner of this embodiment, determining an enhanced image of the image to be enhanced according to the enhanced Base layer image and the enhanced target texture image may include steps C1-C2:
and step C1, overlapping the enhanced Base layer image and the enhanced target texture image to obtain an enhanced brightness component.
And step C2, obtaining an enhanced image of the image to be enhanced by reconstructing the color image according to the ratio of the enhanced brightness component to the initial brightness component.
In this embodiment, the color image is reconstructed according to the formula

I1 = I × (Y1 / Y)

to obtain the enhanced image I1 of the image to be enhanced, where Y1 denotes the enhanced luminance component, Y denotes the initial luminance component, and I denotes the image to be enhanced.
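A minimal sketch of the color reconstruction, assuming the luminance ratio is applied to each channel (the small eps guard is an addition, not part of the patent formula):

```python
import numpy as np

def reconstruct_color(image, y_init, y_enh, eps=1e-6):
    """Scale every color channel by the luminance ratio Y1 / Y to rebuild
    the enhanced color image: I1 = I * Y1 / Y."""
    ratio = (np.asarray(y_enh, dtype=float) + eps) / (np.asarray(y_init, dtype=float) + eps)
    return np.asarray(image, dtype=float) * ratio[..., None]
```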
The embodiment of the invention provides a method for enhancing an image to be enhanced: a total variation model first separates the Base layer from the detail layer, texture and noise are then further separated within the detail layer, and only the Base layer and the texture are enhanced. By treating edges, texture and noise differently, the method avoids amplifying noise while raising contrast, improving the contrast and visibility of the image without synchronously amplifying noise or losing detail. Moreover, compared with image enhancement based on superpixel segmentation, this method has lower complexity and is easy to implement in real time in hardware or software.
Fig. 7 is a block diagram of an image processing apparatus provided in the embodiment of the present invention. The embodiment can be applied to the situation of image enhancement of images acquired under different scenes. The device can be implemented in software and/or hardware and integrated on any electronic equipment with network communication function. As shown in fig. 7, the image processing apparatus provided in the present embodiment may include: a degraded scene recognition module 710 and an image enhancement processing module 720. Wherein:
a degraded scene recognition module 710, configured to perform preset image degraded scene recognition based on a target image and a pixel negation image of the target image, respectively, so as to determine an acquisition scene of the target image;
and the image enhancement processing module 720 is configured to perform image enhancement on the target image according to the acquisition scene of the target image, so as to obtain an enhanced image of the target image.
On the basis of the foregoing embodiment, optionally, the degradation scene recognition module 710 includes:
the degraded scene recognition unit is used for carrying out preset image degraded scene recognition on the target image;
the degraded scene recognition unit is further used for recognizing the preset image degraded scene of the pixel negation image of the target image if the target image is determined not to belong to the image of the preset image degraded scene;
and the acquisition scene determining unit is used for determining the acquisition scene of the target image according to the identification result of the pixel negation image.
On the basis of the foregoing embodiment, optionally, the acquisition scenario determination unit includes:
and if the pixel negation image is determined to belong to the image of the preset image degradation scene according to the recognition result of the pixel negation image, determining that the acquisition scene of the target image is another image degradation scene.
On the basis of the above embodiment, optionally, when the preset image degradation scene is a haze scene, the another image degradation scene is a low-illumination or wide dynamic scene; and when the preset image degradation scene is a low-illumination or wide dynamic scene, the other image degradation scene is a haze scene.
On the basis of the above embodiment, optionally, the preset image degradation scene is a haze scene, and the image to be recognized is the target image or a pixel inversion image of the target image;
accordingly, the degraded scene identifying unit includes:
the haze concentration statistical subunit is used for determining a haze concentration statistical value of each pixel in the image to be recognized according to the brightness information and the color saturation information of the image to be recognized in a preset color gamut space; the image to be recognized is the target image or a pixel inversion image of the target image;
a haze pixel determining subunit, configured to use a pixel in which the fog concentration statistic value in the image to be recognized belongs to a preset statistic value interval range as a haze pixel in the image to be recognized;
and the degradation scene determining subunit is used for determining whether the image to be identified belongs to the image of the haze scene according to the proportion of the haze pixels in the image to be identified.
On the basis of the foregoing embodiment, optionally, the haze concentration statistics subunit includes:
determining contrast information of the image to be recognized in a preset color gamut space according to the brightness information of the image to be recognized in the preset color gamut space;
fog concentration estimation is carried out respectively based on the color saturation information, the brightness information and the contrast information, and a first fog concentration estimation value, a second fog concentration estimation value and a third fog concentration estimation value of each pixel in the image to be identified are determined;
and taking the product of the first fog concentration estimation value, the second fog concentration estimation value and the third fog concentration estimation value of each pixel as a fog concentration statistic value of each pixel in the image to be identified.
On the basis of the foregoing embodiment, optionally, the image enhancement processing module 720 includes:
the image enhancement processing unit is used for performing image enhancement on the pixel-inverted image of the target image if the acquisition scene of the target image belongs to an image degradation scene that differs from the image degradation scene associated with the image enhancement stage;
and the image negation processing unit is used for negating the enhanced image of the pixel negation image to obtain an enhanced image of the target image.
On the basis of the foregoing embodiment, optionally, the image enhancement processing unit is further configured to perform image enhancement directly on the target image if the acquisition scene of the target image belongs to an image degradation scene that is the same as the image degradation scene associated with the image enhancement stage.
On the basis of the above embodiment, optionally, the image to be enhanced is a pixel inversion map of the target image or the target image;
accordingly, the image enhancement processing unit includes:
the image layering subunit is used for layering the initial brightness component of the image to be enhanced to obtain a Base layer image and a detail layer image of the initial brightness component;
the Base layer enhancer unit is used for enhancing the contrast of the Base layer image to obtain an enhanced Base layer image;
the texture enhancer unit is used for extracting the texture of the detail layer image to obtain a target texture image; enhancing the target texture image to obtain an enhanced target texture image;
and the image enhancement processing subunit is used for determining an enhanced image of the image to be enhanced according to the enhanced Base layer image and the enhanced target texture image.
On the basis of the above embodiment, optionally, the texture enhancer unit includes:
separating the detail layer image to respectively obtain a preliminary texture image and a preliminary noise image;
determining the probability of each position point in the preliminary noise image belonging to the texture feature point, and extracting the missing texture feature from the preliminary noise image according to the probability of each position point belonging to the texture feature point;
and obtaining the target texture image according to the preliminary texture image and the missing texture features.
On the basis of the foregoing embodiment, optionally, the texture enhancer unit specifically includes:
extracting a detail edge image of the detail layer image and a texture edge image of the target texture image;
and for any position point in the preliminary noise image, determining the probability that each position point belongs to the texture feature point according to the intensity ratio of the position point between the first edge intensity of the detail edge image and the second edge intensity of the texture edge image.
On the basis of the foregoing embodiment, optionally, the image enhancement processing subunit includes:
superposing the enhanced Base layer image and the enhanced target texture image to obtain an enhanced brightness component;
and obtaining an enhanced image of the image to be enhanced by reconstructing the color image according to the ratio of the enhanced brightness component to the initial brightness component.
The image processing apparatus provided in the embodiment of the present invention may execute the image processing method provided in any embodiment of the present invention, and has corresponding functions and beneficial effects for executing the image processing method, and the detailed process refers to the related operations of the image processing method in the foregoing embodiments.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. As shown in fig. 8, the electronic device provided in the embodiment of the present invention includes: one or more processors 810 and storage 820; the processor 810 in the electronic device may be one or more, and fig. 8 illustrates one processor 810 as an example; storage 820 is used to store one or more programs; the one or more programs are executed by the one or more processors 810, such that the one or more processors 810 implement the image processing method according to any of the embodiments of the present invention.
The electronic device may further include: an input device 830 and an output device 840.
The processor 810, the storage device 820, the input device 830 and the output device 840 in the electronic apparatus may be connected by a bus or other means, and fig. 8 illustrates an example of connection by a bus.
The storage device 820 in the electronic device is used as a computer readable storage medium for storing one or more programs, which may be software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the image processing method provided in the embodiment of the present invention. The processor 810 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the storage 820, that is, implements the image processing method provided in the above-described method embodiment.
The storage device 820 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, storage 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 820 may further include memory located remotely from processor 810, which may be connected to devices over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 830 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. The output device 840 may include a display device such as a display screen.
And, when the one or more programs included in the electronic device are executed by the one or more processors 810, the programs perform the following operations:
performing preset image degradation scene recognition based on a target image and a pixel negation image of the target image respectively to determine an acquisition scene of the target image;
and according to the acquisition scene of the target image, carrying out image enhancement on the target image to obtain an enhanced image of the target image.
Of course, it will be understood by those skilled in the art that when one or more programs included in the electronic device are executed by the one or more processors 810, the programs may also perform operations associated with the image processing method provided in any embodiment of the present invention.
An embodiment of the present invention provides a computer-readable medium having stored thereon a computer program for executing an image processing method when executed by a processor, the method including:
performing preset image degradation scene recognition based on a target image and a pixel negation image of the target image respectively to determine an acquisition scene of the target image;
according to the acquisition scene of the target image, carrying out image enhancement on the target image to obtain an enhanced image of the target image.
Optionally, the program, when executed by a processor, may be further configured to perform an image processing method provided in any of the embodiments of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (15)

1. An image processing method, characterized in that the method comprises:
performing preset image degradation scene recognition based on a target image and a pixel negation image of the target image respectively to determine an acquisition scene of the target image;
and according to the acquisition scene of the target image, carrying out image enhancement on the target image to obtain an enhanced image of the target image.
2. The method according to claim 1, wherein performing pre-set image degradation scene recognition based on a target image and a pixel negation image of the target image, respectively, comprises:
performing preset image degradation scene recognition on a target image;
if the target image is determined not to belong to the image of the preset image degradation scene, performing preset image degradation scene recognition on the pixel negation image of the target image;
and determining the acquisition scene of the target image according to the identification result of the pixel inverted image.
3. The method of claim 2, wherein determining the acquisition scene of the target image according to the recognition result of the inverted pixel image comprises:
and if the pixel negation image is determined to belong to the image of the preset image degradation scene according to the recognition result of the pixel negation image, determining that the acquisition scene of the target image is another image degradation scene.
4. The method according to claim 3, wherein when the preset image degradation scene is a haze scene, the other image degradation scene is a low-illumination or wide-dynamic-range scene; and when the preset image degradation scene is a low-illumination or wide-dynamic-range scene, the other image degradation scene is a haze scene.
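As an illustrative sketch (not the patented implementation): the pixel-inverted image referred to in claims 1-4 is simply the per-pixel complement of an 8-bit image, and its usefulness rests on the common observation that an inverted low-illumination image statistically resembles a hazy one, so a single scene detector can be reused on both forms.

```python
import numpy as np

def pixel_invert(image: np.ndarray) -> np.ndarray:
    """Per-pixel complement of an 8-bit image: inverted = 255 - original."""
    return 255 - image

# A dark (low-illumination) frame inverts into a bright, low-contrast frame
# that looks statistically like haze, so a haze detector can be run on it.
dark = np.full((4, 4, 3), 20, dtype=np.uint8)
bright = pixel_invert(dark)
assert bright.mean() == 235.0
# Inversion is an involution: applying it twice recovers the original.
assert np.array_equal(pixel_invert(bright), dark)
```

Because inversion is its own inverse, the recognition result on the inverted image maps directly back to a statement about the original acquisition scene, as claim 3 uses it.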
5. The method according to claim 2, wherein the preset image degradation scene is a haze scene, and the image to be recognized is the target image or the pixel-inverted image of the target image;
correspondingly, performing preset image degradation scene recognition on the image to be recognized comprises:
determining a fog concentration statistic for each pixel in the image to be recognized according to brightness information and color saturation information of the image to be recognized in a preset color gamut space;
taking the pixels whose fog concentration statistic falls within a preset statistic interval as haze pixels of the image to be recognized;
and determining whether the image to be recognized belongs to the haze scene according to the proportion of haze pixels in the image to be recognized.
6. The method according to claim 5, wherein determining the fog concentration statistic for each pixel in the image to be recognized according to the brightness information and the color saturation information in the preset color gamut space comprises:
determining contrast information of the image to be recognized in the preset color gamut space according to the brightness information;
performing fog concentration estimation based on the color saturation information, the brightness information and the contrast information, respectively, to determine a first, a second and a third fog concentration estimate for each pixel in the image to be recognized;
and taking the product of the first, second and third fog concentration estimates of each pixel as the fog concentration statistic of that pixel.
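The three-way estimate of claims 5 and 6 can be sketched as follows. This is one plausible reading, not the patented formula: the exact estimators, the HSV-style gamut, the 3x3 contrast window, and the 0.6 / 10 % thresholds below are all assumptions. The intuition is that hazy pixels tend to have low saturation, high brightness and low local contrast, so each cue is mapped to [0, 1] and their product serves as the per-pixel fog concentration statistic.

```python
import numpy as np

def fog_statistic(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel fog statistic = product of three [0,1] estimates:
    low saturation, high brightness and low local contrast all raise it."""
    rgb = rgb.astype(np.float64) / 255.0
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    brightness = mx                              # HSV "value" channel
    # crude 3x3 local contrast via shifted min/max windows
    pad = np.pad(brightness, 1, mode="edge")
    shifts = [pad[i:i + brightness.shape[0], j:j + brightness.shape[1]]
              for i in range(3) for j in range(3)]
    contrast = np.max(shifts, axis=0) - np.min(shifts, axis=0)
    est1 = 1.0 - saturation        # hazy -> desaturated
    est2 = brightness              # hazy -> bright
    est3 = 1.0 - contrast          # hazy -> flat
    return est1 * est2 * est3

def is_hazy(rgb: np.ndarray, lo: float = 0.6, hi: float = 1.0,
            ratio: float = 0.1) -> bool:
    """Claim 5's decision: flag the image when the share of pixels whose
    statistic falls in [lo, hi] exceeds `ratio` (thresholds are assumed)."""
    stat = fog_statistic(rgb)
    return float(np.mean((stat >= lo) & (stat <= hi))) > ratio

# A flat, bright, grey image scores high; a saturated one scores low.
hazy = np.full((8, 8, 3), 220, dtype=np.uint8)
clear = np.zeros((8, 8, 3), dtype=np.uint8); clear[..., 0] = 200
assert is_hazy(hazy) and not is_hazy(clear)
```

Multiplying the estimates (rather than averaging) means a pixel is only counted as haze when all three cues agree, which keeps isolated bright or flat regions from triggering the detector.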
7. The method of claim 1, wherein performing image enhancement on the target image according to the acquisition scene of the target image comprises:
if the acquisition scene of the target image is an image degradation scene different from the image degradation scene associated with the image enhancement stage, performing image enhancement on the pixel-inverted image of the target image;
and inverting the enhanced pixel-inverted image to obtain the enhanced image of the target image.
8. The method of claim 7, further comprising:
if the acquisition scene of the target image is the same image degradation scene as the one associated with the image enhancement stage, directly performing image enhancement on the target image.
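Claims 7 and 8 together describe a simple dispatch, sketched below under assumptions: `enhance` here is a stand-in gamma adjustment, not the patented enhancement stage, and the scene labels are illustrative strings. The point of the structure is that one enhancement pipeline, built for a single degradation scene, also serves the complementary scene by enhancing the inverted image and inverting the result back.

```python
import numpy as np

def enhance(img: np.ndarray) -> np.ndarray:
    """Placeholder enhancement stage (simple gamma adjustment); the real
    stage is the one of claim 9, associated with one degradation scene."""
    x = img.astype(np.float64) / 255.0
    return np.clip(255.0 * np.power(x, 0.7), 0, 255).astype(np.uint8)

def enhance_for_scene(img: np.ndarray, scene: str,
                      stage_scene: str = "haze") -> np.ndarray:
    if scene == stage_scene:
        return enhance(img)                    # claim 8: direct enhancement
    inverted = 255 - img                       # claim 7: invert ...
    return 255 - enhance(inverted)             # ... enhance, invert back

dark = np.full((4, 4), 30, dtype=np.uint8)
out = enhance_for_scene(dark, scene="low-illumination", stage_scene="haze")
assert out.shape == dark.shape and out.dtype == np.uint8
```

In the patent's setting the stand-in `enhance` would be, for example, a dehazing stage; running it on the inverted image is what turns it into a low-illumination enhancer.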
9. The method according to claim 7 or 8, characterized in that the image to be enhanced is the target image or the pixel-inverted image of the target image;
correspondingly, performing image enhancement on the image to be enhanced comprises:
decomposing an initial luminance component of the image to be enhanced into a base layer image and a detail layer image;
performing contrast enhancement on the base layer image to obtain an enhanced base layer image;
extracting texture from the detail layer image to obtain a target texture image, and enhancing the target texture image to obtain an enhanced target texture image;
and determining the enhanced image of the image to be enhanced according to the enhanced base layer image and the enhanced target texture image.
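A minimal sketch of claim 9's layering, with assumed components: a box blur stands in for whatever filter produces the base layer, contrast enhancement is a simple stretch about the mean, and the texture branch of claims 10-11 is reduced to scaling the detail layer. Only the structure (decompose, enhance each layer, recombine) mirrors the claim.

```python
import numpy as np

def box_blur(lum: np.ndarray, k: int = 3) -> np.ndarray:
    """Separable box filter; a stand-in for the base-layer filter."""
    pad = k // 2
    p = np.pad(lum, pad, mode="edge")
    out = np.zeros_like(lum, dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += p[i:i + lum.shape[0], j:j + lum.shape[1]]
    return out / (k * k)

def enhance_luminance(lum: np.ndarray,
                      contrast_gain: float = 1.2,
                      texture_gain: float = 1.5) -> np.ndarray:
    lum = lum.astype(np.float64)
    base = box_blur(lum)                 # base layer (low frequencies)
    detail = lum - base                  # detail layer (residual)
    base_enh = np.clip(base.mean() + contrast_gain * (base - base.mean()),
                       0, 255)           # contrast-enhanced base layer
    detail_enh = texture_gain * detail   # "enhanced texture" (simplified)
    return np.clip(base_enh + detail_enh, 0, 255)

lum = np.tile(np.linspace(40, 200, 8), (8, 1))
out = enhance_luminance(lum)
assert out.shape == lum.shape
# Stretching the base and amplifying detail widens the luminance spread.
assert out.std() >= lum.std() * 0.99
```

Separating the layers lets contrast and texture be boosted independently, which is the reason the claim enhances them on different branches before recombining.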
10. The method of claim 9, wherein extracting texture from the detail layer image to obtain the target texture image comprises:
separating the detail layer image into a preliminary texture image and a preliminary noise image;
determining, for each position point in the preliminary noise image, the probability that the point belongs to a texture feature point, and extracting missing texture features from the preliminary noise image according to those probabilities;
and obtaining the target texture image from the preliminary texture image and the missing texture features.
11. The method of claim 10, wherein determining the probability that each position point in the preliminary noise image belongs to a texture feature point comprises:
extracting a detail edge image from the detail layer image and a texture edge image from the target texture image;
and, for each position point in the preliminary noise image, determining the probability that the point belongs to a texture feature point according to the ratio between the point's first edge intensity in the detail edge image and its second edge intensity in the texture edge image.
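Claim 11's probability can be read as a normalized edge-intensity ratio. In this sketch the edge extractor is a plain gradient magnitude and the squashing of the ratio into [0, 1) is an assumption; the claim itself fixes neither.

```python
import numpy as np

def edge_intensity(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map (stand-in for the claim's extractor)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def texture_probability(detail_layer: np.ndarray,
                        target_texture: np.ndarray) -> np.ndarray:
    """Per-point probability that a residual point is texture: high where
    the detail-layer edge ('first edge intensity') is strong relative to
    the texture-image edge ('second edge intensity')."""
    first = edge_intensity(detail_layer)
    second = edge_intensity(target_texture)
    ratio = first / (second + 1e-6)
    return ratio / (1.0 + ratio)         # squash the ratio into [0, 1)

detail = np.tile(np.linspace(0, 70, 8), (8, 1))
texture = np.zeros((8, 8))
p = texture_probability(detail, texture)
assert p.min() >= 0.0 and p.max() < 1.0
```

Points with a high probability are the "missing texture features" that claim 10 pulls back out of the preliminary noise image.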
12. The method of claim 9, wherein determining the enhanced image of the image to be enhanced according to the enhanced base layer image and the enhanced target texture image comprises:
superimposing the enhanced base layer image and the enhanced target texture image to obtain an enhanced luminance component;
and reconstructing a color image according to the ratio of the enhanced luminance component to the initial luminance component, to obtain the enhanced image of the image to be enhanced.
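Claim 12's color reconstruction can be sketched directly: scale every color channel by the per-pixel gain between the enhanced and the initial luminance, so chromaticity is preserved while brightness follows the enhanced component. The clipping and the guard against division by zero below are assumptions.

```python
import numpy as np

def reconstruct_color(rgb: np.ndarray, lum_init: np.ndarray,
                      lum_enh: np.ndarray) -> np.ndarray:
    """Scale each channel by the per-pixel gain lum_enh / lum_init."""
    gain = lum_enh / np.maximum(lum_init, 1e-6)
    out = rgb.astype(np.float64) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0], rgb[..., 1] = 80, 40           # dark orange-ish pixels
lum_init = np.full((2, 2), 60.0)
lum_enh = np.full((2, 2), 120.0)            # luminance doubled
out = reconstruct_color(rgb, lum_init, lum_enh)
assert np.all(out[..., 0] == 160) and np.all(out[..., 1] == 80)
# Channel ratios (chromaticity) are preserved by a per-pixel gain.
assert np.all(out[..., 2] == 0)
```

Applying one multiplicative gain to all channels avoids the hue shifts that per-channel enhancement would introduce.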
13. An image processing apparatus, characterized in that the apparatus comprises:
a degraded-scene recognition module, configured to perform preset image degradation scene recognition on a target image and on a pixel-inverted image of the target image, respectively, to determine an acquisition scene of the target image;
and an image enhancement processing module, configured to perform image enhancement on the target image according to the acquisition scene of the target image, to obtain an enhanced image of the target image.
14. An electronic device, comprising:
one or more processing devices; and
a storage device storing one or more programs which, when executed by the one or more processing devices, cause the one or more processing devices to implement the image processing method of any one of claims 1-12.
15. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processing device, implements the image processing method of any one of claims 1 to 12.
CN202010403137.7A 2020-05-13 2020-05-13 Image processing method, device, equipment and storage medium Pending CN113674158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010403137.7A CN113674158A (en) 2020-05-13 2020-05-13 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113674158A 2021-11-19

Family

ID=78537004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010403137.7A Pending CN113674158A (en) 2020-05-13 2020-05-13 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113674158A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409153A (en) * 2023-12-15 2024-01-16 深圳大学 Three-dimensional target transmission imaging method in turbid medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177424A (en) * 2012-12-07 2013-06-26 西安电子科技大学 Low-luminance image reinforcing and denoising method
CN106663326A (en) * 2014-06-12 2017-05-10 Eizo株式会社 Image processing system and computer-readable recording medium
WO2018099136A1 (en) * 2016-11-29 2018-06-07 深圳市中兴微电子技术有限公司 Method and device for denoising image with low illumination, and storage medium
CN110113510A (en) * 2019-05-27 2019-08-09 杭州国翌科技有限公司 A kind of real time video image Enhancement Method and high speed camera system
CN110634112A (en) * 2019-10-15 2019-12-31 中国矿业大学(北京) Method for enhancing noise-containing image under mine by double-domain decomposition
CN110807742A (en) * 2019-11-21 2020-02-18 西安工业大学 Low-light-level image enhancement method based on integrated network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination