CN116485679A - Low-illumination enhancement processing method, device, equipment and storage medium - Google Patents

Low-illumination enhancement processing method, device, equipment and storage medium

Info

Publication number
CN116485679A
Authority
CN
China
Prior art keywords
image
processed
pixel
value
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310480943.8A
Other languages
Chinese (zh)
Inventor
李福海
李凯
刘应斌
关旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jitter Technology Shenzhen Co ltd
Shenzhen Instant Construction Technology Co ltd
Original Assignee
Jitter Technology Shenzhen Co ltd
Shenzhen Instant Construction Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jitter Technology Shenzhen Co ltd, Shenzhen Instant Construction Technology Co ltd filed Critical Jitter Technology Shenzhen Co ltd
Priority to CN202310480943.8A priority Critical patent/CN116485679A/en
Publication of CN116485679A publication Critical patent/CN116485679A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application is applicable to the field of image processing, and provides a low-illumination enhancement processing method, a device, equipment and a storage medium, which comprise the following steps: acquiring an image to be processed and source information of the image to be processed; determining a first brightness enhancement factor according to the source information; calculating image brightness information corresponding to the image to be processed; determining a second brightness enhancement coefficient according to the image brightness information; determining a pixel traversal model corresponding to the image to be processed; determining a plurality of initial pixels corresponding to the image to be processed to obtain an initial pixel set; invoking the pixel traversing model to traverse the initial pixel set to obtain a target pixel set corresponding to the initial pixel set; and adjusting the initial pixel set and the target pixel set according to the first brightness enhancement coefficient and the second brightness enhancement coefficient to obtain a low-illumination enhanced image. The method and the device can improve the effect and efficiency of low-illumination enhancement.

Description

Low-illumination enhancement processing method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a low-illumination enhancement processing method, a device, equipment and a storage medium.
Background
The electronic device performs operations such as object detection, classification, recognition, scene understanding and 3D reconstruction on the captured image, so that the image can be applied to actual scenes such as automatic driving, video monitoring, and virtual/augmented reality. Image quality has a great influence on these subsequent applications, and illumination changes in the scene often directly affect the quality of the acquired image. Images are frequently captured in low-light environments, and pictures taken in such environments often suffer severe degradation, such as poor visibility, low contrast and unexpected noise. Weak light undoubtedly reduces the performance of most vision-based algorithms; therefore, it is necessary to perform low-illumination enhancement processing on the image to improve the image quality.
However, the low-illuminance enhancement processing of an image is affected by many factors, and the effect and efficiency of the low-illuminance enhancement processing cannot be ensured.
Summary of the application
The embodiment of the application provides a low-illumination enhancement processing method, a low-illumination enhancement processing device, electronic equipment and a storage medium, so as to solve the problems of poor effect and efficiency of low-illumination enhancement processing.
An embodiment of the present application provides a low-illuminance enhancement processing method, where the method includes: acquiring an image to be processed and source information of the image to be processed; determining a first brightness enhancement factor according to the source information; calculating image brightness information corresponding to the image to be processed; determining a second brightness enhancement coefficient according to the image brightness information; determining a pixel traversal model corresponding to the image to be processed; determining a plurality of initial pixels corresponding to the image to be processed to obtain an initial pixel set; invoking the pixel traversing model to traverse the initial pixel set to obtain a target pixel set corresponding to the initial pixel set; and adjusting the initial pixel set and the target pixel set according to the first brightness enhancement coefficient and the second brightness enhancement coefficient to obtain a low-illumination enhanced image.
Further, in the above method provided in the embodiment of the present application, the determining the first luminance enhancement coefficient according to the source information includes: when the source information is non-video, determining that the first brightness enhancement coefficient is a preset value; when the source information is video, determining a video frame rate of the video corresponding to the image to be processed and a preset interval difference value corresponding to the first brightness enhancement coefficient, calculating a ratio of the interval difference value to the video frame rate, and determining the first brightness enhancement coefficient corresponding to each frame of the video according to the ratio.
Further, in the above method provided in the embodiment of the present application, before the calculating the image brightness information corresponding to the image to be processed, the method further includes: determining an image format of the image to be processed; when the image format is an RGB data format, the RGB data format is converted into a YUV data format.
Further, in the above method provided in the embodiment of the present application, the calculating the image brightness information corresponding to the image to be processed includes: determining a Y-channel image corresponding to the image to be processed; determining preset dark pixels and global image pixels corresponding to the Y channel image; calculating a pixel proportion value of a first pixel number corresponding to the preset dark part pixel and a second pixel number corresponding to the global image pixel; splitting the Y channel image according to a preset proportion to obtain a plurality of image blocks; selecting a preset number of sampling points from the plurality of image blocks, and determining a sampling interval according to the sampling points; calculating a pixel mean value corresponding to the sampling interval; and weighting the pixel proportion value and the pixel mean value to obtain the image brightness information corresponding to the image to be processed.
Further, in the above method provided by the embodiment of the present application, the determining a pixel traversal model corresponding to the image to be processed includes: calculating a priori value of a target bright channel corresponding to the image to be processed; calculating the target atmospheric light intensity corresponding to the image to be processed; processing the prior value of the target bright channel and the target atmospheric light intensity according to a preset atmospheric light transmittance formula to obtain the atmospheric light transmittance; performing gamma conversion on the atmospheric light transmittance to obtain an atmospheric light transmittance enhancement value; and generating the pixel traversing model according to the target atmospheric light intensity and the atmospheric light transmittance enhancement value.
Further, in the above method provided by the embodiment of the present application, the calculating the target bright channel prior value corresponding to the image to be processed includes: determining the brightness value of each pixel corresponding to the image to be processed to obtain a first brightness value set; selecting a plurality of local maximum values from the first brightness value set according to a first preset sliding window algorithm; sorting the local maximum values in descending order, and selecting the local maximum values ranked within a first preset proportion; and calculating a first average value of the selected local maximum values, and taking the first average value as the target bright channel prior value.
Further, in the above method provided in the embodiment of the present application, the calculating the target atmospheric light intensity corresponding to the image to be processed includes: determining the brightness value of each pixel corresponding to the image to be processed to obtain a second brightness value set; selecting a plurality of local minimum values from the second brightness value set according to a second preset sliding window algorithm; sorting the local minimum values in descending order, and selecting the local minimum values ranked within a second preset proportion; and calculating a second average value of the selected local minimum values, and taking the second average value as the target atmospheric light intensity.
The second aspect of the embodiments of the present application further provides a low-illuminance enhancement processing apparatus, including: the source acquisition module is used for acquiring an image to be processed and source information of the image to be processed; a first coefficient determining module, configured to determine a first luminance enhancement coefficient according to the source information; the brightness calculation module is used for calculating image brightness information corresponding to the image to be processed; a second coefficient determining module, configured to determine a second luminance enhancement coefficient according to the image brightness information; the traversal model determining module is used for determining a pixel traversal model corresponding to the image to be processed; an initial pixel determining module, configured to determine a plurality of initial pixels corresponding to the image to be processed, to obtain an initial pixel set; the target pixel determining module is used for invoking the pixel traversal model to traverse the initial pixel set to obtain a target pixel set corresponding to the initial pixel set; and the pixel adjusting module is used for adjusting the initial pixel set and the target pixel set according to the first brightness enhancement coefficient and the second brightness enhancement coefficient to obtain a low-illumination enhanced image.
A third aspect of the embodiments of the present application further provides an electronic device, where the electronic device includes a processor, and the processor is configured to implement a low-illuminance enhancement processing method according to any one of the foregoing when executing a computer program stored in a memory.
The fourth aspect of the embodiments of the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a controller to implement a low-illuminance enhancement processing method according to any one of the above.
In the implementation of the present application, the first brightness enhancement coefficient is determined according to the source information of the image to be processed; by setting the first brightness enhancement coefficient, when the source information is video, phenomena such as brightness jumps between video frames can be avoided, and the low-illumination enhancement effect is ensured. The pixel traversal model corresponding to the image to be processed is determined and used to perform the low-illumination enhancement processing, which avoids the complex computation and the paired real low-illumination training data required when a deep learning method is used for low-illumination enhancement, thereby improving the low-illumination enhancement efficiency. In addition, the second brightness enhancement coefficient is determined according to the image brightness information of the image to be processed, so that the upper bound of brightness enhancement can be adaptively controlled according to the current illumination condition of the image, and the low-illumination enhancement effect is ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a low-illuminance enhancement processing method provided in an embodiment of the present application;
fig. 2 is a flow chart of a low-illuminance enhancement processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the determining flow of the first luminance enhancement coefficient according to an embodiment of the present application;
FIG. 4 is a flowchart for determining image brightness information according to an embodiment of the present application;
FIG. 5A is a statistical histogram of pixels provided by an embodiment of the present application;
FIG. 5B is a schematic diagram of region fixed-point sampling provided by an embodiment of the present application;
FIG. 6 is a flowchart of determining a pixel traversal model provided by an embodiment of the present application;
FIG. 7 is a flowchart for determining a target bright channel prior value according to an embodiment of the present application;
FIG. 8 is a flow chart of determining a target atmospheric light intensity provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a low-illuminance enhancement processing apparatus according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The described embodiments are some, but not all, of the embodiments of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
When an image captured in an environment with insufficient light (also called a low-illumination image) is subjected to enhancement processing, the electronic device is affected by many factors, and the effect and efficiency of the image enhancement processing cannot be ensured. For example, existing low-illumination image enhancement mostly adopts deep learning; however, the huge computation and space complexity of the neural network result in poor enhancement efficiency. Moreover, it is often difficult to acquire low-illumination images of actual scenes for deep learning, so artificially synthesized low-illumination images have to be used as training data, which leads to poor generalization of the model to actual low-illumination scenes; consequently, the effect of the image enhancement processing cannot be ensured.
Based on the above-mentioned problems, the embodiments of the present application provide a low-illuminance enhancement processing method, which improves the effect and efficiency of the low-illuminance enhancement processing.
A block diagram of a low-illuminance enhancement processing method provided in an embodiment of the present application is described with reference to fig. 1. As shown in fig. 1, the electronic device 3 comprises a memory 31, at least one processor 32, at least one communication bus 33 and a transceiver 34. It will be appreciated that the configuration of the electronic device shown in fig. 1 is not limiting of the embodiments of the present application, and that either a bus-type configuration or a star-type configuration may be used, and that the electronic device 3 may also include more or less other hardware or software than shown, or a different arrangement of components.
In some embodiments, the electronic device 3 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 3 may also comprise a client device, including but not limited to any electronic product that can interact with a client by means of a keyboard, mouse, remote control, touch pad or voice control device, such as a personal computer, tablet, smart phone, digital camera, etc.
It should be noted that the electronic device 3 is only used as an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of the present application and are incorporated herein by reference.
In some embodiments, at least one communication bus 33 is provided to enable connected communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the electronic device 3 may also include a power source (e.g., a battery) for powering the various components, preferably the power source is logically connected to the at least one processor 32 via a power management device that performs functions such as managing charge, discharge, and power consumption. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 3 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
Fig. 2 is a flowchart of a low-illuminance enhancement processing method according to an embodiment of the present application. As shown in fig. 2, the low-illuminance enhancement processing method is performed by the electronic device and may include the following steps; the order of the steps in the flowchart may be changed according to different needs, and some steps may be omitted.
S11, obtaining an image to be processed and source information of the image to be processed.
In at least one embodiment of the present application, the image to be processed refers to an image that needs to be subjected to low-illumination enhancement processing. The source information may include both video and non-video sources. The image to be processed may be a separate image or may be a video frame image in a video. When the image to be processed is a single image, the source information refers to non-video; when the image to be processed is a video frame image of a certain frame in the video, the source information refers to the video.
S12, determining a first brightness enhancement coefficient according to the source information.
In at least one embodiment of the present application, the first luminance enhancement coefficient is used to indicate the luminance enhancement slope of the video frames within a video, i.e., the first luminance enhancement coefficient is associated with the video. It can be appreciated that when the source information is non-video, the first luminance enhancement coefficient is a preset value, and the preset value may be 1; when the source information is video, the video frame rate needs to be determined, and the first brightness enhancement coefficient is set according to the video frame rate, so that phenomena such as flickering and frame skipping caused by excessive brightness change between adjacent video frames within the same second are avoided.
The determination process of the first luminance enhancement coefficient may be described in detail below with respect to fig. 3.
S13, calculating image brightness information corresponding to the image to be processed.
In at least one embodiment of the present application, the image brightness information refers to global illumination information of the image to be processed. In one embodiment, the image brightness information may be divided into five brightness levels from dark to bright, i.e. extremely dark, dark, normal, bright and extremely bright.
In an embodiment, the image format of the image to be processed may be an RGB data format or a YUV data format. The image to be processed in RGB data format includes an R channel, a G channel and a B channel corresponding to the red, green and blue components, and the color of the image is determined by the values of the three components. The image to be processed in YUV data format comprises a Y channel, a U channel and a V channel, wherein the Y channel represents the brightness of the image, and the U channel and the V channel represent the chromaticity of the image.
In an embodiment, before calculating the image brightness information corresponding to the image to be processed, the electronic device further performs the following: determining an image format of the image to be processed; when the image format is an RGB data format, converting the RGB data format into a YUV data format. The conversion of the RGB data format into the YUV data format is known in the art and will not be described herein. By performing the low-illumination enhancement processing on the Y-channel image, that is, by determining the image brightness information of the Y-channel image instead of the image brightness information of the RGB three-channel image, the efficiency of the low-illumination enhancement processing can be remarkably improved.
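As an illustration of the format-conversion step, the following Python sketch extracts the Y channel from an RGB image using the common BT.601 weights; the specification only requires that RGB be converted to YUV, so the exact coefficients and the function name here are assumptions.

    import numpy as np

    def rgb_to_y_channel(rgb_image: np.ndarray) -> np.ndarray:
        """Extract the Y (luma) channel from an H x W x 3, 8-bit RGB image.

        Assumes BT.601 weights; any standard RGB-to-YUV conversion could be
        substituted, since the embodiment does not mandate a specific one."""
        rgb = rgb_image.astype(np.float32)
        # Y = 0.299 R + 0.587 G + 0.114 B (BT.601)
        y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        return np.clip(y, 0, 255).astype(np.uint8)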
The image brightness information determining process may be described in detail below with respect to fig. 4.
S14, determining a second brightness enhancement coefficient according to the image brightness information.
In at least one embodiment of the present application, the second luminance enhancement coefficient is used to indicate the upper bound of luminance enhancement of the image to be processed; a mapping relationship exists between the image luminance information and the second luminance enhancement coefficient, and the second luminance enhancement coefficient corresponding to the image luminance information can be obtained by querying the mapping relationship. Illustratively, the value range of the second luminance enhancement coefficient is 0-1: when the image luminance information is extremely dark, the second luminance enhancement coefficient may be 1; when the image luminance information is dark, the second luminance enhancement coefficient may be 0.8; when the image luminance information is normal, the second luminance enhancement coefficient may be 0.5; when the image luminance information is bright, the second luminance enhancement coefficient may be 0.2; and when the image luminance information is extremely bright, the second luminance enhancement coefficient may be 0.
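For illustration, a minimal Python sketch of the mapping relationship described above; the level names and the dictionary lookup are assumptions made here, while the coefficient values are the examples given in this embodiment.

    # Example mapping from image brightness level to the second brightness
    # enhancement coefficient, using the example values of this embodiment.
    SECOND_COEFF_BY_LEVEL = {
        "extremely_dark": 1.0,
        "dark": 0.8,
        "normal": 0.5,
        "bright": 0.2,
        "extremely_bright": 0.0,
    }

    def second_brightness_coefficient(brightness_level: str) -> float:
        # Query the mapping relationship to obtain the second coefficient.
        return SECOND_COEFF_BY_LEVEL[brightness_level]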
In an embodiment, the value setting of the second brightness enhancement coefficient may be fine-tuned according to the image condition of the image to be processed, which may include, but is not limited to, image quality. Before determining the second luminance enhancement coefficient from the image luminance information, the method further comprises: determining the image quality of the image to be processed; when the image quality meets the preset image quality requirement, determining the second brightness enhancement coefficient as a set value; and when the image quality does not meet the preset image quality requirement, determining the second brightness enhancement coefficient according to the image brightness information. The preset image quality requirement may refer to the image containing excessive noise in an extremely dark state.
It can be understood that when the image quality meets the preset image quality requirement, the image brightness information of the image to be processed is extremely dark and excessive noise exists in the image to be processed. When the image quality does not meet the preset image quality requirement, the image brightness information of the image to be processed may be extremely dark but contain little or no noise, or the image brightness information of the image to be processed is not in an extremely dark state.
Illustratively, when the image to be processed contains excessive noise in an extremely dark state, forcibly enhancing it may amplify the noise and cause serious blocking artifacts. For such a case, the set value of the second luminance enhancement coefficient is a value smaller than 0.1; that is, the brightness of the image is adjusted moderately without amplifying the noise, so that a better visual effect is obtained.
According to the method and the device for enhancing the brightness of the image, the second brightness enhancement coefficient is determined according to the image brightness information of the image to be processed, the brightness enhancement upper bound can be adaptively controlled according to the current illumination condition of the image, and the effect of enhancing the low illumination is guaranteed.
S15, determining a pixel traversal model corresponding to the image to be processed.
In at least one embodiment of the present application, the pixel traversal model is a mathematical model for performing an initial low-illumination enhancement transformation on each pixel in an image to be processed, and the brightness value corresponding to each initial pixel of the image to be processed is processed by calling the pixel traversal model, so that the brightness value corresponding to the target pixel after the initial low-illumination enhancement transformation is obtained. And then, the first brightness enhancement coefficient and the second brightness enhancement coefficient are called to adjust the brightness value of each target pixel, so that a final low-illumination enhanced photo is obtained. In an embodiment, the plurality of images to be processed may share one pixel traversal model, or each image to be processed may have a corresponding pixel traversal model, which is not limited herein.
In an embodiment, the model parameters corresponding to the pixel traversal model may include a bright channel prior value, an atmospheric light intensity, and an atmospheric light transmittance of the image to be processed, and the pixel traversal model may be obtained by calculating the bright channel prior value, the atmospheric light intensity, and the atmospheric light transmittance corresponding to the image to be processed, and combining the bright channel prior value, the atmospheric light intensity, and the atmospheric light transmittance.
The determination procedure for the pixel traversal model may be referred to in detail below with respect to fig. 6.
In at least one embodiment of the present application, a pixel traversal model (also referred to herein as a static pixel traversal model) may be set for various low-illuminance images to be processed, or a corresponding pixel traversal model (also referred to herein as a dynamic pixel traversal model) may be set for each low-illuminance image to be processed, which is not limited herein.
In an embodiment, when a single pixel traversal model is used, the target bright channel prior value and the target atmospheric light intensity are determined by means of deep learning; the target bright channel prior value and the target atmospheric light intensity are then processed according to a preset atmospheric light transmittance formula to obtain the atmospheric light transmittance, gamma transformation is performed on the atmospheric light transmittance to obtain the atmospheric light transmittance enhancement value, and the pixel traversal model is generated according to the target atmospheric light intensity and the atmospheric light transmittance enhancement value. The initialized neural network model may be a convolutional neural network model; the low-illuminance image data includes low-illuminance images and the bright channel prior value and atmospheric light intensity corresponding to each image. The low-illuminance image data is used as model training data, with the low-illuminance image as the input vector and the bright channel prior value and atmospheric light intensity as the output vector, to train the neural network model, so that a pixel traversal model applicable to multiple low-illuminance images to be processed can be obtained.
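A minimal PyTorch-style sketch of how such a convolutional network could be trained to regress the bright channel prior value and the atmospheric light intensity from a low-illumination image is shown below; the layer layout, loss function and training loop are illustrative assumptions and not the configuration claimed by this application.

    import torch
    import torch.nn as nn

    class PriorRegressor(nn.Module):
        """Small CNN mapping a low-illumination image to two scalars:
        the bright channel prior value and the atmospheric light intensity."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)  # [bright channel prior, atmospheric light]

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def train_step(model, optimizer, images, targets):
        # images: N x 3 x H x W low-illumination images in [0, 1]
        # targets: N x 2 tensor of (bright channel prior, atmospheric light)
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(images), targets)
        loss.backward()
        optimizer.step()
        return loss.item()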
In other embodiments, when there are a plurality of pixel traversal models, a pixel traversal model corresponding to each image to be processed is calculated. The target bright channel prior value and target atmospheric light intensity corresponding to each image to be processed are determined; the target bright channel prior value and the target atmospheric light intensity are then processed according to the preset atmospheric light transmittance formula to obtain the atmospheric light transmittance, gamma transformation is performed on the atmospheric light transmittance to obtain the atmospheric light transmittance enhancement value, and the pixel traversal model is generated according to the target atmospheric light intensity and the atmospheric light transmittance enhancement value.
In other embodiments, determining a corresponding pixel traversal model for every video frame of a high-frame-rate video is computationally intensive, resulting in lower image enhancement efficiency. In this regard, the present application may also establish a long-range dependence period and determine one pixel traversal model for the video frames within that period, so as to reduce the amount of computation. The long-range dependence period may be set according to actual demands, for example, 5 seconds or 10 seconds. Taking a long-range dependence period of 5 seconds as an example, the 5-second video contains 50 video frames; the pixel traversal models corresponding to several frames with the earliest time stamps (for example, the first 3 frames) are acquired, namely pixel traversal model A, pixel traversal model B and pixel traversal model C, and mean processing is performed on them (the sum of the pixel traversal models A, B and C is calculated first, and the quotient of the sum and the number of models, 3, is taken as the mean result); the obtained pixel traversal model (i.e., the mean result) is then used as the pixel traversal model for the remaining 47 video frames, as sketched below. By determining the pixel traversal model corresponding to the image to be processed and performing the low-illumination enhancement processing with the pixel traversal model, the complex computation and the paired real low-illumination training data required when a deep learning method is used for low-illumination enhancement can be avoided, and the low-illumination enhancement efficiency is improved.
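The long-range dependence idea can be sketched as follows, assuming for illustration that each pixel traversal model is characterized by two parameters (the target atmospheric light intensity and the atmospheric light transmittance enhancement value); in practice the transmittance term may be a per-pixel map rather than a scalar.

    from dataclasses import dataclass

    @dataclass
    class TraversalModel:
        atmospheric_light: float       # target atmospheric light intensity
        transmittance_enhanced: float  # atmospheric light transmittance enhancement value

    def average_traversal_models(models):
        """Mean the parameters of the traversal models of the earliest frames;
        the result is reused for the remaining frames of the dependence period."""
        n = len(models)
        return TraversalModel(
            atmospheric_light=sum(m.atmospheric_light for m in models) / n,
            transmittance_enhanced=sum(m.transmittance_enhanced for m in models) / n,
        )

    # e.g. models A, B and C from the first 3 frames of a 5-second period:
    # shared_model = average_traversal_models([model_a, model_b, model_c])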
S16, determining a plurality of initial pixels corresponding to the image to be processed, and obtaining an initial pixel set.
In at least one embodiment of the present application, the image to be processed includes a plurality of initial pixels, each of the initial pixels includes a corresponding luminance value, and the plurality of initial pixels form an initial pixel set.
S17, a pixel traversing model is called to traverse the initial pixel set, and a target pixel set corresponding to the initial pixel set is obtained.
In at least one embodiment of the present application, a luminance value corresponding to each initial pixel in an initial pixel set is input to a pixel traversal model to perform initial low-illuminance enhancement processing, so as to obtain a luminance value corresponding to a corresponding target pixel, and luminance values corresponding to a plurality of target pixels are combined, so as to obtain a target pixel set.
And S18, adjusting the initial pixel set and the target pixel set according to the first brightness enhancement coefficient and the second brightness enhancement coefficient to obtain a low-illumination enhanced image.
In at least one embodiment of the present application, a mathematical relationship between a first luminance enhancement coefficient, a second luminance enhancement coefficient, a target pixel within a target pixel set, and an initial pixel within an initial pixel set is determined, and a final low-luminance enhanced image is obtained according to the mathematical relationship.
In one embodiment, where J(x) denotes the target pixel set and I(x) denotes the initial pixel set (i.e., the image to be processed), the mathematical relationship weights J(x) against I(x) using the second luminance enhancement coefficient and the first luminance enhancement coefficient. The mathematical relationship may be adjusted according to the image enhancement effect; one possible form is sketched below.
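A minimal sketch of this adjustment step is given below; because the relationship above is described only schematically, the specific blend used here (weighting J(x) against I(x) by the product of the two coefficients) is an assumption for illustration and may be adjusted according to the image enhancement effect, as noted above.

    import numpy as np

    def blend_low_light_enhancement(initial_pixels: np.ndarray,
                                    target_pixels: np.ndarray,
                                    first_coeff: float,
                                    second_coeff: float) -> np.ndarray:
        """Combine the initial pixel set I(x) and the target pixel set J(x)
        using the first and second brightness enhancement coefficients.

        The weight w = first_coeff * second_coeff is one plausible reading of
        the relationship described in this embodiment, not the only one."""
        w = float(np.clip(first_coeff * second_coeff, 0.0, 1.0))
        enhanced = w * target_pixels + (1.0 - w) * initial_pixels
        return np.clip(enhanced, 0, 255).astype(np.uint8)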
According to the low-illumination enhancement processing method provided by the embodiment of the present application, the first brightness enhancement coefficient is determined according to the source information of the image to be processed; when the source information is video, setting the first brightness enhancement coefficient avoids phenomena such as brightness jumps between video frames, ensuring the low-illumination enhancement effect. The pixel traversal model corresponding to the image to be processed is determined and used to perform the low-illumination enhancement processing, which avoids the complex computation and the paired real low-illumination training data required when a deep learning method is used, improving the low-illumination enhancement efficiency. In addition, the second brightness enhancement coefficient is determined according to the image brightness information of the image to be processed, so that the upper bound of brightness enhancement can be adaptively controlled according to the current illumination condition of the image, ensuring the low-illumination enhancement effect.
The following describes a process for determining the first luminance enhancement coefficient according to the embodiment of the present application with reference to fig. 3, where in an embodiment, the determining, by the electronic device, the first luminance enhancement coefficient according to the source information includes:
s121, when the source information is non-video, determining that the first brightness enhancement coefficient is a preset value.
In an embodiment, the preset value may be set according to actual requirements, for example, the preset value is 1.
And S122, when the source information is video, determining the video frame rate of the video corresponding to the image to be processed and the interval difference value corresponding to the preset first brightness enhancement coefficient, calculating the ratio of the interval difference value to the video frame rate, and determining the first brightness enhancement coefficient corresponding to each frame of the image to be processed of the video according to the ratio.
For example, when the video frame rate is 15 frames/second, the interval corresponding to the preset first luminance enhancement coefficient is determined to be [0.5,1], and it is understood that the interval may be dynamically set according to the video frame image (i.e., the image to be processed), which is not limited herein. In order to avoid the phenomena of flicker, frame skip and the like among video frames, calculating an interval difference value corresponding to the first brightness enhancement coefficient to be 0.5, and calculating the ratio of the interval difference value to the video frame rate to be 0.5/15=0.033. At this time, there are 15 frames of images to be processed per second, and for the first frame of images to be processed per second, the first luminance enhancement coefficient is 0.5; for the second frame of the image to be processed, the first luminance enhancement coefficient is 0.5+0.033=0.533; for the third frame of the image to be processed, the first luminance enhancement coefficient is 0.533+0.033=0.566; and so on until a first brightness enhancement coefficient corresponding to the fifteenth frame of the image to be processed is determined.
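A minimal sketch of this per-frame ramp, reproducing the 15 frame/s example above (function and variable names are illustrative):

    def first_brightness_coefficients(frame_rate: int,
                                      interval=(0.5, 1.0)) -> list:
        """One first brightness enhancement coefficient per frame within one
        second of video, ramping linearly across the preset interval."""
        low, high = interval
        step = (high - low) / frame_rate      # e.g. 0.5 / 15 ~= 0.033
        return [low + i * step for i in range(frame_rate)]

    # For a non-video image the coefficient is simply the preset value, e.g. 1.0.
    # first_brightness_coefficients(15) -> [0.5, 0.533, 0.566, ...] (approximately)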
According to the embodiment of the application, the brightness enhancement slope between the video frames is controlled by setting the first brightness enhancement coefficient, so that the phenomenon of jumping and the like between the video frames can be avoided when the source information is video, and the effect of low-illumination enhancement is ensured.
The following describes an image brightness information determining process provided in the embodiment of the present application with reference to fig. 4, where in an embodiment, the electronic device calculates image brightness information corresponding to an image to be processed, including:
s133, determining a Y-channel image corresponding to the image to be processed.
In an embodiment, the image brightness information corresponding to the image to be processed can be determined by performing pixel statistics on the Y-channel image corresponding to the image to be processed in a histogram manner. Illustratively, the luminance value of the pixel of the image to be processed is set at the interval [0,255], wherein the pixel luminance closer to 255 is higher and the pixel luminance closer to 0 is lower.
S134, determining preset dark part pixels and global image pixels corresponding to the Y channel image.
In an embodiment, a preset dark pixel and a global image pixel are determined in the image to be processed, wherein the preset dark pixel refers to a pixel in a preset specific brightness value range, for example, the preset dark pixel may refer to a pixel in a [0,45] brightness value range; global image pixels refer to all pixels in the entire image to be processed, e.g., global image pixels may refer to pixels within a [0,255] luminance value range.
S135, calculating a pixel proportion value of a first pixel number corresponding to the preset dark part pixel and a second pixel number corresponding to the global image pixel.
In an embodiment, the brightness level of the image to be processed, that is, the image brightness information, can be determined by counting the pixel proportion value of the preset dark pixels in the global image pixels in a histogram pixel counting manner. In one embodiment, a first number of pixels (also referred to as preset dark pixels) within the luminance value range of [0,45] and a second number of pixels of the global image pixel are calculated, and a ratio of the first number of pixels to the second number of pixels is calculated as a pixel ratio value.
Referring to fig. 5A, the horizontal axis represents the luminance value of a corresponding pixel of a certain image to be processed, the luminance value range is 0-255, and the vertical axis represents the number of pixels in the image to be processed. By referring to the histogram, the pixel proportion value of the preset dark portion pixel in the global image pixel can be clearly seen.
S136, splitting the Y-channel image according to a preset proportion to obtain a plurality of image blocks.
In an embodiment, the preset ratio is a split ratio set according to a pixel size of the image to be processed. Splitting the Y channel image according to a preset proportion to obtain a plurality of image blocks with the same shape and area.
S137, selecting a preset number of sampling points from the plurality of image blocks, and determining a sampling interval according to the sampling points.
The manner of determining the image brightness information by histogram pixel statistics may be interfered with by the color of some large objects. For example, the actual overall brightness of the image to be processed may be normal, but because the image contains a cup of cola, which is black, the histogram is easily skewed by the black cola, so the overall brightness of the image to be processed is judged to be dark, and the accuracy of determining the image brightness information cannot be ensured. The present application therefore also uses region fixed-point sampling, on top of the histogram pixel statistics, to assist in determining the image brightness information of the image to be processed.
In an embodiment, the preset number is a value determined according to the pixel size of the image to be processed: the larger the image, the larger the preset number; the smaller the image, the smaller the preset number. In an embodiment, a correspondence between the pixel size of the image to be processed and the preset number may be predetermined, and the number of sampling points corresponding to the image to be processed may be obtained by querying the correspondence. The preset number may be, for example, 6. The region formed by the image blocks enclosed by the 6 sampling points (including the sampling points themselves) is taken as the sampling interval.
Referring to fig. 5B, the image to be processed is split into 5×7 equally sized image blocks, and the circled image blocks in the figure are the selected sampling points, of which there are 6. The region formed by the image blocks enclosed by the 6 sampling points (including the sampling points themselves) is taken as the sampling interval; the sampling interval in the figure is a region formed by 3×5 image blocks.
S138, calculating a pixel mean value corresponding to the sampling interval.
In one embodiment, first, the sum of the brightness values of all pixels in the sampling interval is calculated (the brightness values lie in the interval [0,255]); then, the number of pixels in the sampling interval is determined; finally, the ratio of the brightness sum to the number of pixels is calculated as the pixel mean corresponding to the sampling interval.
And S139, weighting the pixel proportion value and the pixel mean value to obtain the image brightness information corresponding to the image to be processed.
In an embodiment, the pixel proportion value calculated by histogram statistics and the pixel mean value calculated by region fixed-point sampling are weighted to obtain the image brightness information of the image to be processed. The weighting coefficients for the pixel proportion value and the pixel mean value can be set according to practical requirements, and the sum of the weighting coefficients is 1; for example, the weighting coefficient of the pixel proportion value is 0.5 and the weighting coefficient of the pixel mean value is 0.5. The pixel proportion value and the pixel mean value are weighted, and the weighted result is taken as the image brightness information corresponding to the image to be processed.
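The combination of the two statistics can be sketched as follows; the dark-pixel threshold, grid split and 0.5/0.5 weights follow the examples above, while the sampling-block coordinates, the normalization of the sampled mean to [0, 1] and the way the weighted score is mapped to the five brightness levels are assumptions left to the implementation.

    import numpy as np

    def image_brightness_info(y_channel: np.ndarray,
                              dark_threshold: int = 45,
                              grid=(5, 7),
                              sample_blocks=((1, 1), (1, 3), (2, 2), (3, 1), (3, 3), (2, 4)),
                              weights=(0.5, 0.5)) -> float:
        """Weighted combination of (a) the preset dark-pixel proportion and
        (b) the mean of the region fixed-point sampling interval (S133-S139)."""
        # (a) histogram statistic: proportion of preset dark pixels in [0, 1]
        dark_ratio = np.count_nonzero(y_channel <= dark_threshold) / y_channel.size

        # (b) region fixed-point sampling: split into grid blocks and take the
        # bounding region of the chosen sampling blocks as the sampling interval
        h, w = y_channel.shape
        bh, bw = h // grid[0], w // grid[1]
        rows = [r for r, _ in sample_blocks]
        cols = [c for _, c in sample_blocks]
        region = y_channel[min(rows) * bh:(max(rows) + 1) * bh,
                           min(cols) * bw:(max(cols) + 1) * bw]
        sampled_mean = region.mean() / 255.0  # normalized; the text uses [0, 255]

        # Note: dark_ratio grows as the image gets darker while sampled_mean grows
        # as it gets brighter; the weighted score must be interpreted accordingly.
        return weights[0] * dark_ratio + weights[1] * sampled_mean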
According to the method and the device, the image brightness information of the image to be processed is determined by combining the histogram pixel statistics mode and the area fixed-point sampling mode, the problem of color interference of partial large objects can be avoided, the accuracy of image brightness information determination is improved, and the effect of low-illumination enhancement processing is further guaranteed.
The following describes a pixel traversal model determining process provided in the embodiment of the present application with reference to fig. 6, where in an embodiment, the electronic device determines a pixel traversal model corresponding to an image to be processed, including:
s151, calculating a target bright channel prior value corresponding to the image to be processed.
In an embodiment, the brightness value of the dark channel image corresponding to the normal image is close to 0, and the brightness value of the dark channel image corresponding to the foggy image interfered by the fog is significantly higher than 0. Accordingly, the brightness value of the inverse dark channel (also called the bright channel) of the low-illumination image is close to the maximum value of the color range, which is called the bright channel prior value in the application.
The process of determining the target bright channel prior value may be described in detail below with respect to fig. 7.
S152, calculating the target atmospheric light intensity corresponding to the image to be processed.
The determination procedure for the target atmospheric light intensity may be referred to in detail below with respect to fig. 8.
And S153, processing the prior value of the target bright channel and the target atmospheric light intensity according to a preset atmospheric light transmittance formula to obtain the atmospheric light transmittance.
In one embodiment, the atmospheric light transmittance can be calculated from Equation 1, where t(x) denotes the atmospheric light transmittance, ω is a preset dark parameter in the interval [0,1], J_bright denotes the target bright channel prior value, A_bright denotes the target atmospheric light intensity, and the local window used in the formula is determined by the preset sliding window algorithm, which is not limited herein.
And S154, performing gamma conversion on the atmospheric light transmittance to obtain an atmospheric light transmittance enhancement value.
In an embodiment, the atmospheric light transmittance enhancement value may be obtained by performing gamma transformation on the atmospheric light transmittance by using a preset gamma function.
And S155, generating a pixel traversing model according to the target atmospheric light intensity and the atmospheric light transmittance enhancement value.
In one embodiment, the pixel traversal model may be calculated from Equation 2: J(x) = (I(x) - A_bright) / max(t'(x), t_0) + A_bright, where J(x) denotes the pixel traversal model output, I(x) denotes the image to be processed, A_bright denotes the target atmospheric light intensity, t'(x) denotes the atmospheric light transmittance enhancement value, and t_0 is a preset value used to prevent the denominator from being 0.
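Steps S151 to S155 can be sketched as a whole as follows. Since Equations 1 and 2 are only referenced above, the transmittance expression used here (a bright channel analogue of the dark channel prior formula) is an assumption, the recovery expression follows the reconstruction of Equation 2 given above, and the default omega, gamma and t0 values are illustrative.

    import numpy as np

    def pixel_traversal_transform(image: np.ndarray,
                                  bright_prior: float,
                                  atmos_light: float,
                                  omega: float = 0.8,
                                  gamma: float = 0.7,
                                  t0: float = 0.1) -> np.ndarray:
        """Apply the traversal-model transform to an image normalized to [0, 1].

        bright_prior: target bright channel prior value (S151)
        atmos_light:  target atmospheric light intensity (S152)
        The transmittance formula below is an assumed bright channel analogue
        of the dark channel prior expression, not Equation 1 itself."""
        # S153: atmospheric light transmittance from the prior and atmospheric light
        t = 1.0 - omega * (1.0 - bright_prior) / max(1.0 - atmos_light, 1e-6)
        t = float(np.clip(t, 0.0, 1.0))

        # S154: gamma transform yields the transmittance enhancement value
        t_enhanced = t ** gamma

        # S155 / Equation 2 (standard recovery form, as reconstructed above)
        return (image - atmos_light) / max(t_enhanced, t0) + atmos_light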
The following describes, in conjunction with fig. 7, the bright channel prior value determining procedure provided in an embodiment of the present application. In an embodiment, the electronic device calculates the target bright channel prior value corresponding to the image to be processed as follows:
s1511, determining the brightness value of the image to be processed corresponding to each pixel, and obtaining a first brightness value set.
In an embodiment, when the image to be processed is in RGB data format, a maximum value of pixel values corresponding to the same position in the RGB three channels is determined, and the first set of luminance values is generated from the maximum value of pixel values at each position of the image to be processed. When the image to be processed is in YUV data format, a first set of luminance values is generated from pixel values at each position within the Y-channel image.
S1512, selecting a plurality of local maximum values from the first brightness value set according to a first preset sliding window algorithm.
Illustratively, the actual overall brightness of the image to be processed may be normal, but because the image contains a cup of cola, which is black, the black cola easily skews the overall brightness judgment of the image to be processed. Therefore, the application selects a plurality of local maxima from the first brightness value set through the first preset sliding window algorithm, so that the influence of extremely black objects (the cola in this example) on the overall brightness of the image to be processed can be filtered out. The first preset sliding window algorithm is prior art, and the sliding window size can be set according to actual requirements, which is not limited herein.
S1513, sorting the local maxima in descending order, and selecting the local maxima ranked within the first preset proportion.
In an embodiment, the first preset proportion may be set according to actual requirements; for example, the first preset proportion is 0.1%.
S1514, calculating a first average value of the selected local maxima, and taking the first average value as the target bright channel prior value.
In an embodiment, taking the number of selected local maxima as 3 as an example, the local maxima are respectively local maximum A, local maximum B and local maximum C; the first average value is then (A+B+C)/3, and the first average value is taken as the target bright channel prior value.
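A sketch of S1511 to S1514 follows; the window size and the 0.1% proportion follow the examples above, and the use of non-overlapping windows to obtain local maxima is a simplifying assumption.

    import numpy as np

    def target_bright_channel_prior(luma: np.ndarray,
                                    window: int = 15,
                                    top_ratio: float = 0.001) -> float:
        """Local maxima via a sliding window, keep the top `top_ratio` fraction
        (e.g. 0.1%), and return their mean as the target bright channel prior."""
        h, w = luma.shape
        local_maxima = []
        # first preset sliding window algorithm: maximum of each window
        for r in range(0, h, window):
            for c in range(0, w, window):
                local_maxima.append(luma[r:r + window, c:c + window].max())
        local_maxima = np.sort(np.asarray(local_maxima, dtype=np.float64))[::-1]
        k = max(1, int(round(top_ratio * local_maxima.size)))
        return float(local_maxima[:k].mean())  # first average value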
The following describes the flow of determining the target atmospheric light intensity provided in the embodiment of the present application with reference to fig. 8. Atmospheric light estimation often generalizes poorly because the brightest part of the image is simply taken as the atmospheric light; however, even in a low-illumination image, the brightest pixel is not necessarily the atmospheric light, and the influence of pure-white objects must also be considered. In an embodiment, the electronic device calculates the target atmospheric light intensity corresponding to the image to be processed as follows:
s1521, determining the brightness value of the image to be processed corresponding to each pixel, and obtaining a second brightness value set.
In an embodiment, when the image to be processed is in the RGB data format, a minimum value of pixel values corresponding to the same position in the RGB three channels is determined, and the second set of luminance values is generated from the minimum value of pixel values of each position of the image to be processed. When the image to be processed is in YUV data format, a second set of luminance values is generated from the pixel values at each position within the Y-channel image.
S1522, selecting a plurality of local minimum values from the second brightness value set according to a second preset sliding window algorithm.
Illustratively, the actual overall brightness of the image to be processed may be normal, but because the image contains a piece of facial tissue, which is pure white, the pure-white tissue easily skews the overall brightness judgment of the image to be processed. Therefore, the application selects a plurality of local minima from the second brightness value set through the second preset sliding window algorithm, so that the influence of extremely white objects (the facial tissue in this example) on the overall brightness of the image to be processed can be filtered out. The second preset sliding window algorithm is prior art and is not limited herein.
S1523, sorting the local minima in descending order, and selecting the local minima ranked within the second preset proportion.
In an embodiment, the second preset proportion may be set according to actual requirements; for example, the second preset proportion is 0.1%.
S1524, calculating a second average value of the selected local minima, and taking the second average value as the target atmospheric light intensity.
In an embodiment, taking the number of selected local minima as 3 as an example, the local minima are respectively local minimum D, local minimum E and local minimum F; the second average value is then (D+E+F)/3, and the second average value is taken as the target atmospheric light intensity.
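A sketch of S1521 to S1524 mirrors the bright channel computation; again the window size, the 0.1% proportion and the non-overlapping windows are assumptions for illustration.

    import numpy as np

    def target_atmospheric_light(luma: np.ndarray,
                                 window: int = 15,
                                 top_ratio: float = 0.001) -> float:
        """Local minima via a sliding window, sorted in descending order; the top
        `top_ratio` fraction is kept and its mean returned as the atmospheric light."""
        h, w = luma.shape
        local_minima = []
        # second preset sliding window algorithm: minimum of each window
        for r in range(0, h, window):
            for c in range(0, w, window):
                local_minima.append(luma[r:r + window, c:c + window].min())
        local_minima = np.sort(np.asarray(local_minima, dtype=np.float64))[::-1]
        k = max(1, int(round(top_ratio * local_minima.size)))
        return float(local_minima[:k].mean())  # second average value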
Referring to fig. 9, fig. 9 is a schematic structural diagram of a low-illumination enhancement processing apparatus according to an embodiment of the present application. In some embodiments, the low-illumination enhancement processing device 20 may include a plurality of functional modules consisting of computer program segments. The computer program of the individual program segments in the low-illumination enhancement processing device 20 may be stored in a memory of the electronic device and executed by at least one controller to perform the low-illumination enhancement processing functions (described in detail with reference to fig. 2).
In the present embodiment, the low-illumination enhancement processing device 20 may be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: a source acquisition module 201, a first coefficient determination module 202, a brightness calculation module 203, a second coefficient determination module 204, a traversal table determination module 205, an initial pixel determination module 206, a target pixel determination module 207, and a pixel adjustment module 208. A module referred to in this application is a series of computer program segments that are stored in a memory and can be executed by at least one controller to perform a fixed function. The functions of the respective modules are described in detail below.
The source acquisition module 201 may be configured to acquire an image to be processed and source information of the image to be processed.
The first coefficient determination module 202 may be configured to determine a first luminance enhancement coefficient based on the source information.
The brightness calculation module 203 may be configured to calculate image brightness information corresponding to the image to be processed.
The second coefficient determination module 204 may be configured to determine a second luminance enhancement coefficient based on the image luminance information.
The traversal table determination module 205 can be configured to determine a pixel traversal model corresponding to the image to be processed.
The initial pixel determining module 206 may be configured to determine a plurality of initial pixels corresponding to the image to be processed, to obtain an initial pixel set.
The target pixel determining module 207 may be configured to invoke the pixel traversal model to traverse the initial pixel set, so as to obtain a target pixel set corresponding to the initial pixel set.
The pixel adjustment module 208 may be configured to adjust the initial pixel set and the target pixel set according to the first luminance enhancement coefficient and the second luminance enhancement coefficient, so as to obtain a low-luminance enhanced image.
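As a structural illustration only (a minimal sketch; the dictionary keys and callables are placeholders, not names from the patent), the eight modules can be read as a single pipeline:

```python
def enhance(image, source_info, modules):
    """Chain the functional modules; `modules` maps illustrative names to
    callables standing in for modules 202-208. The `image` and `source_info`
    inputs correspond to the output of the source acquisition module 201."""
    k1 = modules["first_coefficient"](source_info)             # module 202
    k2 = modules["second_coefficient"](
        modules["image_brightness"](image))                    # modules 203 / 204
    traversal = modules["traversal_model"](image)              # module 205
    initial = modules["initial_pixels"](image)                 # module 206
    target = traversal(initial)                                # module 207
    return modules["adjust"](initial, target, k1, k2)          # module 208
```

The pipeline mirrors the step order of the method described above; the device simply packages each method step as one functional module.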
In the present embodiment, the functions of the functional modules in the low-illumination enhancement processing device 20 are the same as those of the low-illumination enhancement processing method of the above-described embodiments; the specific implementation of each module of the low-illumination enhancement processing device 20 corresponds to each step of the low-illumination enhancement processing method in the above embodiments, and the description thereof is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into modules is merely a division by logical function, and other manners of division may be adopted in practice. Integrated units implemented in the form of software functional modules may be stored in a computer-readable storage medium. The software functional modules are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a network device, or the like) or a processor to perform parts of the methods of the various embodiments of the present application.
In some embodiments, the memory 31 stores a computer program which, when executed by the at least one processor 32, performs all or part of the steps of the low-illumination enhancement processing method. The memory 31 includes Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store created data, etc.
In some embodiments, the at least one processor 32 is the control unit of the electronic device 3; it connects the various components of the entire electronic device 3 through various interfaces and lines, and performs the various functions of the electronic device 3 and processes its data by running or executing programs or modules stored in the memory 31 and invoking data stored in the memory 31. For example, the at least one processor 32, when executing the computer program stored in the memory, implements all or part of the steps of the low-illumination enhancement processing method in the embodiments of the present application, or implements all or part of the functions of the low-illumination enhancement processing device. The at least one processor 32 may be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other elements and that the singular does not exclude a plurality. Several of the units or devices recited in the specification may be embodied by one and the same item of software or hardware. The terms first, second, and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present application and not to limit it. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the present application without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. A method of low-light enhancement processing, the method comprising:
acquiring an image to be processed and source information of the image to be processed;
determining a first brightness enhancement factor according to the source information;
calculating image brightness information corresponding to the image to be processed;
determining a second brightness enhancement coefficient according to the image brightness information;
determining a pixel traversal model corresponding to the image to be processed;
determining a plurality of initial pixels corresponding to the image to be processed to obtain an initial pixel set;
invoking the pixel traversing model to traverse the initial pixel set to obtain a target pixel set corresponding to the initial pixel set;
and adjusting the initial pixel set and the target pixel set according to the first brightness enhancement coefficient and the second brightness enhancement coefficient to obtain a low-illumination enhanced image.
2. The method of claim 1, wherein said determining a first luminance enhancement factor from said source information comprises:
when the source information is non-video, determining that the first brightness enhancement coefficient is a preset value;
when the source information is video, determining a video frame rate of the video corresponding to the image to be processed and a preset interval difference value corresponding to the first brightness enhancement coefficient, calculating a ratio of the interval difference value to the video frame rate, and determining the first brightness enhancement coefficient corresponding to each frame of the video according to the ratio.
3. The method of claim 1, wherein prior to said calculating the image brightness information corresponding to the image to be processed, the method further comprises:
determining an image format of the image to be processed;
when the image format is an RGB data format, the RGB data format is converted into a YUV data format.
4. A method according to claim 3, wherein said calculating image brightness information corresponding to said image to be processed comprises:
determining a Y-channel image corresponding to the image to be processed;
determining preset dark pixels and global image pixels corresponding to the Y channel image;
calculating a pixel proportion value between a first pixel number corresponding to the preset dark pixels and a second pixel number corresponding to the global image pixels;
splitting the Y channel image according to a preset proportion to obtain a plurality of image blocks;
selecting a preset number of sampling points from the plurality of image blocks, and determining a sampling interval according to the sampling points;
calculating a pixel mean value corresponding to the sampling interval;
and weighting the pixel proportion value and the pixel mean value to obtain the image brightness information corresponding to the image to be processed.
5. The method of claim 1, wherein the determining a pixel traversal model corresponding to the image to be processed comprises:
calculating a priori value of a target bright channel corresponding to the image to be processed;
calculating the target atmospheric light intensity corresponding to the image to be processed;
processing the prior value of the target bright channel and the target atmospheric light intensity according to a preset atmospheric light transmittance formula to obtain the atmospheric light transmittance;
performing gamma conversion on the atmospheric light transmittance to obtain an atmospheric light transmittance enhancement value;
and generating the pixel traversing model according to the target atmospheric light intensity and the atmospheric light transmittance enhancement value.
6. The method of claim 5, wherein said calculating a target bright channel prior value for the image to be processed comprises:
determining the brightness value of each pixel corresponding to the image to be processed to obtain a first brightness value set;
selecting a plurality of local maximum values from the first brightness value set according to a first preset sliding window algorithm;
sorting the local maximum values in descending order, and selecting the top-ranked local maximum values accounting for a first preset duty ratio;
and calculating a first average value of the selected local maximum values of the first preset duty ratio, and taking the first average value as the target bright channel prior value.
7. The method of claim 6, wherein the calculating the target atmospheric light intensity corresponding to the image to be processed comprises:
determining the brightness value of each pixel corresponding to the image to be processed to obtain a second brightness value set;
selecting a plurality of local minimum values from the second brightness value set according to a second preset sliding window algorithm;
sorting the local minimum values in descending order, and selecting the top-ranked local minimum values accounting for a second preset duty ratio;
and calculating a second average value of the selected local minimum values of the second preset duty ratio, and taking the second average value as the target atmospheric light intensity.
8. A low-light intensity enhancement processing apparatus, the apparatus comprising:
the source acquisition module is used for acquiring an image to be processed and source information of the image to be processed;
a first coefficient determining module, configured to determine a first luminance enhancement coefficient according to the source information;
the brightness calculation module is used for calculating image brightness information corresponding to the image to be processed;
a second coefficient determining module, configured to determine a second luminance enhancement coefficient according to the image luminance information;
the traversal table determining module is used for determining a pixel traversal model corresponding to the image to be processed;
an initial pixel determining module, configured to determine a plurality of initial pixels corresponding to the image to be processed, to obtain an initial pixel set;
the target pixel determining module is used for invoking the pixel traversal model to traverse the initial pixel set to obtain a target pixel set corresponding to the initial pixel set;
and the pixel adjustment module is used for adjusting the initial pixel set and the target pixel set according to the first brightness enhancement coefficient and the second brightness enhancement coefficient to obtain a low-illumination enhanced image.
9. An electronic device comprising a processor for implementing the low-light intensity enhancement processing method according to any one of claims 1 to 7 when executing a computer program stored in a memory.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a controller, implements the low-illuminance enhancement processing method according to any one of claims 1 to 7.
CN202310480943.8A 2023-04-27 2023-04-27 Low-illumination enhancement processing method, device, equipment and storage medium Pending CN116485679A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310480943.8A CN116485679A (en) 2023-04-27 2023-04-27 Low-illumination enhancement processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310480943.8A CN116485679A (en) 2023-04-27 2023-04-27 Low-illumination enhancement processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116485679A true CN116485679A (en) 2023-07-25

Family

ID=87221259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310480943.8A Pending CN116485679A (en) 2023-04-27 2023-04-27 Low-illumination enhancement processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116485679A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237258A (en) * 2023-11-14 2023-12-15 山东捷瑞数字科技股份有限公司 Night vision image processing method, system, equipment and medium based on three-dimensional engine
CN117237258B (en) * 2023-11-14 2024-02-09 山东捷瑞数字科技股份有限公司 Night vision image processing method, system, equipment and medium based on three-dimensional engine

Similar Documents

Publication Publication Date Title
CN110336954B (en) Automatic light supplementing adjustment method, system and storage medium
US10565742B1 (en) Image processing method and apparatus
WO2016160221A1 (en) Machine learning of real-time image capture parameters
CN113518185B (en) Video conversion processing method and device, computer readable medium and electronic equipment
CN107451969A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN109361910A (en) Self-adapted white balance correction method and device
CN104182721A (en) Image processing system and image processing method capable of improving face identification rate
CN109147005A (en) It is a kind of for the adaptive colouring method of infrared image, system, storage medium, terminal
CN113299245A (en) Method and device for adjusting local backlight of display equipment, display equipment and storage medium
CN116485679A (en) Low-illumination enhancement processing method, device, equipment and storage medium
CN113395457B (en) Parameter adjusting method, device and equipment of image collector and storage medium
CN109920381B (en) Method and equipment for adjusting backlight value
CN116168652A (en) Image display method, device, electronic equipment and computer readable storage medium
CN109348207B (en) Color temperature adjusting method, image processing method and device, medium and electronic equipment
CN106454140B (en) A kind of information processing method and electronic equipment
CN106686320B (en) A kind of tone mapping method based on number density equilibrium
CN111901519B (en) Screen light supplement method and device and electronic equipment
CN111311500A (en) Method and device for carrying out color restoration on image
CN116453470B (en) Image display method, device, electronic equipment and computer readable storage medium
US20130286245A1 (en) System and method for minimizing flicker
CN115334250A (en) Image processing method and device and electronic equipment
CN111738949B (en) Image brightness adjusting method and device, electronic equipment and storage medium
CN114266803A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112488933A (en) Video detail enhancement method and device, mobile terminal and storage medium
CN111915529A (en) Video dim light enhancement method and device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination