WO2022133749A1 - Image processing method and apparatus, storage medium and electronic device - Google Patents


Info

Publication number
WO2022133749A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
images
frame
processed image
Prior art date
Application number
PCT/CN2020/138407
Other languages
French (fr)
Chinese (zh)
Inventor
罗俊 (LUO, Jun)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to CN202080107017.0A priority Critical patent/CN116457822A/en
Priority to PCT/CN2020/138407 priority patent/WO2022133749A1/en
Publication of WO2022133749A1 publication Critical patent/WO2022133749A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
  • image brightness refers to how light or dark an image appears, and is an important factor that affects people's visual experience when viewing an image.
  • if the image brightness is inappropriate, such as too high or too low, the content of the image cannot be fully presented; for example, faces and text in the image become difficult to recognize, which degrades image quality.
  • the present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium and an electronic device, so as to improve the brightness problem in an image at least to a certain extent.
  • an image processing method, comprising: acquiring an image to be processed from consecutive multi-frame images; performing luminance mapping processing on the image to be processed to generate an intermediate image; acquiring at least one frame of reference image by using the consecutive multi-frame images; and performing fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
  • an image processing apparatus, including a processor, wherein the processor is configured to execute the following program modules stored in a memory: a to-be-processed image acquisition module, configured to acquire a to-be-processed image from multiple consecutive frames of images; an intermediate image generation module, configured to perform brightness mapping processing on the to-be-processed image to generate an intermediate image; a reference image acquisition module, configured to acquire at least one frame of reference image by using the consecutive multi-frame images; and an image fusion processing module, configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the image processing method of the first aspect and possible implementations thereof.
  • an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform, by executing the executable instructions, the image processing method of the above-mentioned first aspect and possible implementations thereof.
  • brightness mapping processing of the image to be processed can improve brightness problems in the image, such as low brightness, high brightness, and uneven local brightness.
  • the fusion processing of the reference image and the intermediate image can achieve noise reduction and repair missing local information in the image, so as to obtain an optimized image whose content is clearly visible.
  • this solution can realize the optimization of any one of the frame images without external information, and has lower implementation cost and higher practicability.
  • Figure 1A shows a face image captured in a low-light environment
  • Figure 1B shows a face image captured in a backlit environment
  • FIG. 2 shows a schematic structural diagram of an electronic device in this exemplary embodiment
  • FIG. 3 shows a flowchart of an image processing method in this exemplary embodiment
  • FIG. 4 shows a flowchart of a method for acquiring an image to be processed in this exemplary embodiment
  • FIG. 5 shows a schematic diagram of converting a RAW image into a single-channel image in this exemplary embodiment
  • FIG. 6 shows a schematic diagram of a mapping curve in this exemplary embodiment
  • FIG. 7 shows a flowchart of a luminance mapping processing method in this exemplary embodiment
  • FIG. 8 shows a flowchart of a method for obtaining a reference image in this exemplary embodiment
  • FIG. 9 shows an example diagram of image processing in this exemplary embodiment
  • FIG. 10 shows an example diagram of image processing and face recognition in this exemplary embodiment
  • FIG. 11 shows a schematic structural diagram of an image processing apparatus in this exemplary embodiment
  • FIG. 12 shows a schematic diagram of an architecture in this exemplary embodiment.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments can be embodied in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
  • the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • numerous specific details are provided in order to give a thorough understanding of the embodiments of the present disclosure.
  • those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of the specific details, or other methods, components, devices, steps, etc. may be employed.
  • well-known solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
  • the ambient light conditions and the exposure parameters of the shooting equipment when taking the image will affect the brightness of the image.
  • the influence of ambient light conditions is greater.
  • the captured images may lack an important part of the information.
  • as shown in FIG. 1A, when shooting a portrait under very weak ambient light, that is, a low-light environment, the overall brightness of the captured image is very low and the face is difficult to recognize; as shown in FIG. 1B, if the light source is located behind the person, that is, a backlit environment, the brightness of the face region is low and the face is likewise difficult to identify. Therefore, there is a need to improve the brightness of images captured in such extreme environments.
  • the exemplary embodiments of the present disclosure first provide an image processing method, whose application scenarios include but are not limited to the following: in intelligent interaction scenarios on a mobile terminal, an AON (Always ON) camera continuously collects images to detect face and gesture information and realize specific interactive functions, such as automatically activating the display when a face is detected, or automatically turning a page of the user interface when a page-turning gesture is detected; however, in environments with weak lighting, backlight, or an extremely strong light source, the brightness of the face and gesture images collected by the AON camera may be too low or too high, which affects the accuracy of the above detection; with the image processing method of this exemplary embodiment, the image brightness can be improved to increase the accuracy of detecting faces, gestures, and the like.
  • Exemplary embodiments of the present disclosure also provide an electronic device for executing the above-described image processing method.
  • the electronic devices include, but are not limited to, computers, smart phones, tablet computers, game consoles, wearable devices, and the like.
  • an electronic device includes a processor and a memory.
  • the memory is used to store executable instructions of the processor, and may also store application data, such as image data, game data, etc.; the processor is configured to execute the image processing method in this exemplary embodiment by executing the executable instructions.
  • the mobile terminal 200 may specifically include: a processor 210, an internal storage area 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone jack 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, keys 294, a SIM (Subscriber Identification Module) card interface 295, and so on.
  • the processor 210 may include one or more processing units, for example, the processor 210 may include an AP (Application Processor, application processor), a modem processor, a GPU (Graphics Processing Unit, graphics processor), an ISP (Image Signal Processor, image signal processor), controller, encoder, decoder, DSP (Digital Signal Processor, digital signal processor), baseband processor and/or NPU (Neural-Network Processing Unit, neural network processor), etc.
  • the processor 210 may include one or more interfaces through which connections are formed with other components of the mobile terminal 200 .
  • Internal storage area 221 may be used to store computer executable program code, which includes instructions.
  • the internal storage area 221 may include volatile memory, such as DRAM (Dynamic Random Access Memory) and SRAM (Static Random Access Memory), and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, UFS (Universal Flash Storage), etc.
  • the processor 210 executes various functional applications and data processing of the mobile terminal 200 by executing instructions stored in the internal storage area 221 and/or instructions stored in a memory provided in the processor.
  • the external memory interface 222 can be used to connect an external memory, such as a Micro SD card, to expand the storage capacity of the mobile terminal 200.
  • the external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing music, video and other files.
  • the USB interface 230 is an interface conforming to the USB standard specification, and can be used to connect a charger to charge the mobile terminal 200, and can also be connected to an earphone or other electronic devices.
  • the charging management module 240 is used to receive charging input from the charger. While charging the battery 242, the charging management module 240 can also supply power to the device through the power management module 241; the power management module 241 can also monitor the state of the battery.
  • the wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the mobile communication module 250 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the mobile terminal 200 .
  • the wireless communication module 260 can provide applications on the mobile terminal 200 including WLAN (Wireless Local Area Networks, wireless local area network) (such as Wi-Fi (Wireless Fidelity, wireless fidelity) network), BT (Bluetooth, Bluetooth), GNSS (Global Navigation Satellite System, global navigation satellite system), FM (Frequency Modulation, frequency modulation), NFC (Near Field Communication, short-range wireless communication technology), IR (Infrared, infrared technology) and other wireless communication solutions.
  • the mobile terminal 200 may implement a display function through a GPU, a display screen 290, an AP, and the like.
  • the mobile terminal 200 can realize the shooting function through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display screen 290, the AP, and the like.
  • the camera module 291 can include various types of cameras, such as AON cameras, wide-angle cameras, high-definition cameras, etc.
  • the camera can be arranged at any position of the mobile terminal 200, for example, on the same side as the display screen 290 to form a front camera, or on the opposite side of the display screen 290 to form a rear camera.
  • the mobile terminal 200 can implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, an AP, and the like.
  • the sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyro sensor 2803, an air pressure sensor 2804, etc., to realize different sensing detection functions.
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status, the change of power, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the motor 293 can generate vibration prompts, and can also be used for touch vibration feedback and the like.
  • the keys 294 include a power-on key, a volume key, and the like.
  • the mobile terminal 200 may support one or more SIM card interfaces 295 for connecting the SIM cards to realize functions such as calling and data communication.
  • FIG. 3 shows a schematic flow of the image processing method in this exemplary embodiment, which may include:
  • Step S310 acquiring an image to be processed from multiple consecutive frames of images
  • Step S320 performing brightness mapping processing on the image to be processed to generate an intermediate image
  • Step S330 obtaining at least one frame of reference image by using the above-mentioned consecutive multi-frame images
  • Step S340 performing fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
  • on the one hand, the brightness mapping processing of the image to be processed can improve brightness problems in the image, such as low brightness, high brightness, and uneven local brightness, while the fusion processing of the reference image and the intermediate image achieves noise reduction and repairs missing local information, so as to obtain an optimized image whose content is clearly visible; this is conducive to further applications such as face recognition, gesture recognition, and target detection. Furthermore, this solution improves the robustness of image capture and processing in extreme lighting environments and reduces dependence on the performance of hardware such as cameras and image sensors; for example, images captured by an image sensor with low photosensitivity can be optimized with this solution to obtain high-quality images, which helps reduce hardware cost. On the other hand, based on the consecutive multi-frame images collected during video shooting or image preview, this solution can optimize any one of the frame images without external information, and thus has low implementation cost and high practicability.
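The four steps S310 to S340 can be sketched as a minimal pipeline. This is an illustrative sketch, not the patent's actual implementation: the gamma-style brightness mapping and the mean fusion are placeholder choices, and all function names are hypothetical.

```python
import numpy as np

def luminance_map(image):
    # Placeholder brightness mapping: a gamma lift that brightens dark images
    # (the actual mapping depends on illuminance and backlight ratio, see below).
    return np.clip((image / 255.0) ** 0.7 * 255.0, 0, 255).astype(np.uint8)

def optimize_frame(frames, i, num_refs=1):
    """Optimize frame i of a consecutive multi-frame sequence (S310-S340)."""
    to_process = frames[i]                          # S310: image to be processed
    intermediate = luminance_map(to_process)        # S320: intermediate image
    refs = frames[max(0, i - num_refs):i]           # S330: reference frame(s)
    stack = np.stack([intermediate] + [luminance_map(r) for r in refs])
    return stack.mean(axis=0).astype(np.uint8)      # S340: mean fusion (one option)
```

Averaging the mapped frames already suppresses temporally independent noise; the later sections describe how the mapping step is actually chosen.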
  • step S310 the to-be-processed image is acquired from consecutive multiple frames of images.
  • the above-mentioned continuous multi-frame images may be images continuously collected by a camera, for example, a camera shoots a video or continuously collects a preview image, and the like.
  • the image to be processed can be any one of the frame images.
  • the method in FIG. 3 may be executed in real time, taking the currently collected frame as the image to be processed, so as to process each frame of image.
  • step S310 may include
  • Step S410 obtaining the current RAW image from consecutive multiple frames of RAW images
  • Step S420 performing channel conversion processing on the current RAW image to obtain an image to be processed.
  • the RAW image refers to an image stored in a RAW format, and generally refers to an original image collected by an image sensor in a camera.
  • the image sensor collects the light signal through a Bayer filter and converts it into a digital signal to obtain a RAW image.
  • each pixel in the RAW image has only one color in RGB and is arranged in a Bayer array.
  • an AON camera may be set up on the terminal device, and the above-mentioned continuous multiple frames of RAW images may be collected and obtained
  • a currently collected RAW image is acquired, and since the color channels of each pixel point are different, channel conversion processing is performed on the current RAW image to obtain an image to be processed.
  • channel conversion processing unifies the color channels of all pixels, and includes but is not limited to the following two methods:
  • convert each pixel of the current RAW image to any one channel of RGB, for example, uniformly convert to the R channel.
  • the R channel and B channel of the current RAW image can be converted into the G channel to obtain an image with all pixels in the G channel.
  • the pixels of the R channel and the B channel can be mapped to convert to the pixel value of the G channel, for example:
  • the R, G, and B channels can be converted into grayscale values according to certain coefficient ratios, or the RAW image can be processed by demosaicing (Demosaic) to obtain an RGB image, and then the gray value (Gray) of each pixel is calculated by the following formula (2) to obtain a grayscale image.
  • each pixel needs 10 bits, of which 8 bits record the pixel value and 2 bits record the channel information.
  • a single-channel image is obtained through channel conversion processing (a grayscale image can be regarded as a special single-channel image), and there is no need to record channel information, so each pixel only needs 8 bits.
  • compared with RGB images, single-channel images have a greatly reduced data volume and save the bit width used to record channel information, thereby reducing the data volume of subsequent processing, which is conducive to real-time optimization processing.
  • the single-channel image can characterize the brightness of the image to be processed, and the information is relatively sufficient.
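As a sketch of the second conversion method, the following assumes an RGGB Bayer layout and maps R and B pixels toward G-channel values with fixed gains; the actual mapping coefficients are not specified here, so `r_gain` and `b_gain` are illustrative placeholders (in practice they might come from white-balance gains).

```python
import numpy as np

def bayer_to_single_channel(raw, r_gain=1.8, b_gain=1.5):
    """Convert an RGGB Bayer RAW mosaic into a single G-like channel image."""
    out = raw.astype(np.float32).copy()
    # RGGB layout: R at (even row, even col), B at (odd row, odd col),
    # G at the remaining positions.
    out[0::2, 0::2] *= r_gain   # scale R pixels toward G-channel values
    out[1::2, 1::2] *= b_gain   # scale B pixels toward G-channel values
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

The result needs only 8 bits per pixel, matching the bit-width saving described above.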
  • step S320 luminance mapping processing is performed on the image to be processed to generate an intermediate image.
  • the brightness mapping process refers to mapping the brightness value of each pixel in the image to be processed into a new value.
  • the brightness values of all pixels can be adjusted in the same direction, such as a uniform increase or a uniform decrease, and the brightness change values of different pixels can be the same or different; the brightness values of different pixels can also be adjusted in different directions, such as reducing the brightness of over-bright parts, increasing the brightness of over-dark parts, or adjusting the gray scale of the image.
  • a mapping relationship between luminance values before and after mapping can be pre-configured, and the mapping relationship can be related to the luminance level or luminance distribution of the image to be processed itself, and then the luminance mapping process is implemented according to the mapping relationship.
  • step S320 may include:
  • the illumination information is information representing the ambient illumination condition of the image to be processed; for example, it may include an illuminance value and a backlight ratio.
  • the illuminance value is a measure of the amount of light received per unit area of the captured image.
  • the backlight ratio is the proportion of the backlight part of the image to be processed.
  • the illumination information can reflect whether the image to be processed has illumination defects, such as overall excessive illumination, insufficient illumination, or an unbalanced local illumination distribution, so that brightness mapping processing can be performed on the image to be processed accordingly.
  • the illuminance value of the image to be processed may be determined according to the exposure parameters of the image to be processed.
  • the exposure parameter includes at least one of exposure time (Exptime), sensitivity (ISO), and aperture value (F value).
  • exposure parameters are recorded when the image to be processed is captured; for cameras on smartphones, the aperture value is usually a fixed value.
  • the exposure parameters are used to estimate the illuminance value; the more comprehensive the acquired exposure parameters, the more accurate the estimated illuminance value. For example, refer to the following formula (3):
  • Lumi = a3 * F^2 / (Exptime * ISO)    (3)
  • where Lumi represents the illuminance value, in lux; a3 is an empirical coefficient, such as 12.4, which can be adjusted according to the actual shooting scene or camera performance; F is the aperture value, that is, the ratio of the lens focal length to the effective lens diameter; Exptime is the exposure time, in seconds; and ISO is the sensitivity.
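Formula (3) translates directly into code; the sample exposure values below are only illustrative:

```python
def estimate_illuminance(exptime_s, iso, f_number, a3=12.4):
    """Estimate the illuminance value (lux) from exposure parameters,
    per formula (3): Lumi = a3 * F^2 / (Exptime * ISO).
    a3 is the empirical coefficient (12.4 by default, adjustable)."""
    return a3 * f_number ** 2 / (exptime_s * iso)
```

For example, an exposure of 1/100 s at ISO 100 with an F2.0 aperture gives about 49.6 lux under the default coefficient.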
  • step S320 may include:
  • when the illuminance value of the image to be processed meets a first preset condition, the image to be processed is subjected to global luminance mapping processing.
  • the first preset condition may include: less than the first threshold, or greater than the second threshold.
  • the first threshold is a threshold for measuring underexposure
  • the second threshold is a threshold for measuring overexposure.
  • the two thresholds can be determined based on experience, or adjusted according to the actual scene; for example, the first threshold and the second threshold may be appropriately increased during the day and appropriately decreased in the dark.
  • when the illuminance value is less than the first threshold, the to-be-processed image is subjected to global brightness up-mapping, that is, global brightening.
  • when the illuminance value is greater than the second threshold, the to-be-processed image is subjected to global brightness down-mapping, that is, global dimming.
  • the brightness values of different pixel points in the image to be processed can be mapped to higher brightness values according to a preset mapping curve.
  • the mapping curve can be a linear curve, which reflects a linear mapping relationship, or a quadratic curve or other nonlinear curve, which reflects a nonlinear mapping relationship.
  • a plurality of mapping curves may be configured, corresponding to different mapping intensities; the higher the mapping intensity, the more significant the brightening.
  • the mapping intensity, and hence which mapping curve to use, can be determined according to the illuminance value of the image to be processed or actual scene requirements; for example, the lower the illuminance value, the higher the mapping intensity and the larger the overall slope of the mapping curve.
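One concrete family of up-mapping curves is a gamma curve whose exponent is tied to the mapping intensity. This is an assumed parameterization for illustration (the text does not fix the curve family), and `pick_strength` is a hypothetical rule implementing "lower illuminance value, higher mapping intensity".

```python
import numpy as np

def global_brightness_map(image, strength):
    """Global brightness up-mapping with a gamma curve: strength 0 is the
    identity mapping; larger strength gives a steeper curve and more
    significant brightening."""
    gamma = 1.0 / (1.0 + strength)
    norm = image.astype(np.float32) / 255.0
    return np.clip(np.rint(norm ** gamma * 255.0), 0, 255).astype(np.uint8)

def pick_strength(lumi, low=10.0):
    """Hypothetical rule: the lower the illuminance value, the higher the
    mapping strength; zero at or above `low` lux."""
    return max(0.0, (low - lumi) / low)
```

A linear curve (scaling all brightness values by a constant factor) would work the same way; the gamma form simply brightens dark regions proportionally more.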
  • step S320 may further include:
  • tone-mapping processing is performed on the to-be-processed image.
  • if the illuminance value of the image to be processed does not meet the first preset condition, the global illumination of the image to be processed is relatively suitable; the backlight ratio of the image to be processed is then calculated to determine whether the local illumination is inappropriate.
  • the backlight ratio of the to-be-processed image may be determined according to its brightness histogram, which may specifically include: setting a plurality of brightness levels and counting the proportion of pixels in each level to form the brightness histogram of the image; if there are at least two brightness levels whose brightness difference reaches a set value, determining the lower brightness level as a backlight part (the set value can be an empirical value, or can be determined according to the specific scene; for example, when there is a prominent light source in the scene, the set value is generally larger, and under natural light or without a significant light source it is generally smaller); and summing the histogram proportions of all backlight parts to obtain the proportion of the backlight part in the image to be processed, that is, the backlight ratio.
  • the highest brightness value in the image to be processed can also be obtained, and the brightness threshold value can be determined according to the highest brightness value.
  • the highest brightness value can be multiplied by a fixed coefficient less than 1 to obtain the brightness threshold value;
  • the pixels below the brightness threshold are extracted and their connected regions are determined, that is, isolated pixels are filtered out; the proportion of the connected regions in the image to be processed is taken as the backlight ratio.
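A simplified version of this second estimation method (a fixed coefficient applied to the highest brightness value) might look as follows; the connected-region filtering of isolated pixels is omitted for brevity, and the value of `coeff` is an assumption:

```python
import numpy as np

def backlight_ratio(image, coeff=0.3):
    """Estimate the backlight ratio: the brightness threshold is the highest
    brightness value times a fixed coefficient (< 1), and the ratio is the
    fraction of pixels below that threshold."""
    threshold = float(image.max()) * coeff
    return float(np.mean(image < threshold))
```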
  • the second preset condition may include: the backlight ratio being greater than a third threshold. It should be noted that the above calculation of the backlight ratio estimates the possible backlight phenomenon in the image to be processed; when the backlight ratio is greater than the third threshold, the probability that a backlight phenomenon exists can be considered high.
  • the third threshold is a threshold for measuring whether a backlight phenomenon exists, and can be determined based on experience or actual scenarios. In this case, tone mapping processing (Tone Mapping) is performed on the image to be processed. Tone mapping essentially still maps brightness; unlike the above-mentioned global brightness mapping processing, it can change the brightness range or brightness-level distribution of the image, and the brightness adjustment direction of each pixel can be different.
  • in global brightness mapping processing, the brightness of all pixels is adjusted in the same direction, such as increasing or decreasing the overall brightness, to adjust the overall brightness level; in tone mapping processing, different pixels in the image can be adjusted in different directions, for example, brighter parts are mapped downward and darker parts are mapped upward, so as to adjust the brightness distribution.
  • the tone mapping process can be implemented by a mapping curve.
  • FIG. 6 shows mapping curves used in tone mapping processing, where the abscissa is the luminance value before mapping and the ordinate is the luminance value after mapping. Curves A, B, and C are mapping curves of different mapping intensities: curve C has the highest mapping intensity and curve A the lowest. It can be seen that after the brightness values are mapped by such a curve, the brightness distribution of the image to be processed is compressed into a smaller range, which alleviates the problem of locally invisible content caused by backlight. Generally, the higher the backlight ratio, or the greater the brightness difference between the backlight part and the high-brightness part, the higher the mapping intensity used.
  • the mapping curve used in tone mapping processing differs from that used in the above-mentioned global brightness mapping processing: the former is generally a nonlinear curve with one section whose slope is greater than 45 degrees (the part where brightness is mapped upward, generally the lower-brightness section) and another section whose slope is less than 45 degrees (the part where brightness is mapped downward, generally the higher-brightness section);
  • the latter can be a linear curve or a nonlinear curve whose slope is greater than 45 degrees, or less than 45 degrees, over the entire curve.
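A logarithmic curve is one simple nonlinear mapping with the described slope behavior: steeper than 45 degrees in the dark section and flatter than 45 degrees in the bright section, so the brightness distribution is compressed. The curve family and the parameter `k` (larger k, higher mapping intensity, analogous to curves A/B/C in FIG. 6) are illustrative assumptions:

```python
import numpy as np

def tone_map(image, k=8.0):
    """Tone mapping with a normalized log curve y = log(1 + k*x) / log(1 + k).
    The slope exceeds 45 degrees for low-brightness pixels (lifting backlit
    regions) and falls below 45 degrees for bright pixels, compressing the
    overall brightness range."""
    x = image.astype(np.float32) / 255.0
    y = np.log1p(k * x) / np.log1p(k)
    return np.clip(np.rint(y * 255.0), 0, 255).astype(np.uint8)
```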
  • if the backlight ratio of the image to be processed does not meet the second preset condition, the global illumination and local illumination of the image to be processed are both relatively good; brightness mapping processing may then be skipped, and the image to be processed is used directly as the intermediate image.
  • FIG. 7 shows a schematic flow of luminance mapping processing, including:
  • Step S710, determining the illuminance value Lumi of the image to be processed;
  • Step S720, compare the illuminance value with the first threshold T1 and the second threshold T2; when Lumi < T1 or Lumi > T2, it is determined that the first preset condition is met, and step S730 is performed; otherwise, step S740 is performed;
  • Step S730 performing global brightness mapping processing on the image to be processed
  • Step S740 determine the backlight ratio (BL ratio) of the image to be processed
  • Step S750 compare the backlight ratio with the third threshold value T3; when BL ratio>T3, it is determined that the second preset condition is met, and step S760 is performed; otherwise, the image to be processed is not processed, and jumps to step S770;
  • Step S760 performing tone mapping processing on the image to be processed
  • step S770 an intermediate image is obtained.
• In this way, the brightness of the image to be processed is improved at both the global and local levels, and in the resulting intermediate image, image information missing due to brightness problems can be recovered to a certain extent.
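The decision flow of FIG. 7 (steps S710 to S770) can be sketched as follows. The thresholds and the two helper mappings are illustrative placeholders, not values or formulas from the patent.

```python
def global_brightness_map(pixels):
    # Placeholder global mapping: a single linear gain (hypothetical).
    return [min(v * 1.5, 255) for v in pixels]

def tone_map_local(pixels):
    # Placeholder tone mapping: gamma-style lift of dark values (hypothetical).
    return [int(255 * (v / 255) ** 0.5) for v in pixels]

def luminance_mapping(pixels, lumi, backlight_ratio, t1=10, t2=1000, t3=0.3):
    """Decision flow of FIG. 7; thresholds t1/t2/t3 are illustrative."""
    if lumi < t1 or lumi > t2:                # S720: first preset condition met
        return global_brightness_map(pixels)  # S730: global brightness mapping
    if backlight_ratio > t3:                  # S750: second preset condition met
        return tone_map_local(pixels)         # S760: tone mapping
    return pixels                             # otherwise: image unchanged (S770)
```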
  • step S330 at least one frame of reference image is acquired by using the above-mentioned consecutive multiple frames of images.
• The reference image is one or more frames of images that are temporally continuous with the image to be processed and can provide image information that supplements the image to be processed.
  • step S330 may include:
• Step S810: acquire at least one frame of image other than the image to be processed from the above consecutive multiple frames of images;
• Step S820: perform luminance mapping processing on the at least one frame of image to obtain a reference image; or
• Step S830: acquire an optimized image corresponding to the at least one frame of image as a reference image.
  • any one or more frames of images other than the image to be processed can be selected from the above-mentioned continuous multiple frames of images.
• Generally, the more images that are selected, and the closer they are in time to the image to be processed, the better the optimization effect.
• If the image to be processed is the i-th frame in the above consecutive multi-frame images, where i is a positive integer not less than 2, the (i-m)-th to (i-1)-th frame images, i.e., the m frames preceding the image to be processed, can be selected for subsequent optimization, where m is any positive integer.
  • the value of m can be determined by combining experience, actual demand and computing power performance.
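As an illustration of the frame-selection rule above (frames i-m through i-1, frames 1-indexed), a small helper might look like this; clamping at frame 1 is an assumption for the case where fewer than m earlier frames exist.

```python
def reference_frame_indices(i, m):
    # Frames are 1-indexed as in the text; indices below 1 are dropped,
    # e.g. when the image to be processed is near the start of the sequence.
    return [k for k in range(i - m, i) if k >= 1]
```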
• Of step S820 and step S830, one may be selectively executed.
  • step S820 for the luminance mapping processing of the above-mentioned at least one frame of image, reference may be made to the luminance mapping processing performed on the image to be processed in step S320 and FIG. 7 .
• The optimized image corresponding to the above at least one frame of image may be an image obtained by optimizing that frame using the method flow described above. That is, each frame of image is optimized in turn to obtain its corresponding optimized image, and that optimized image can then be used as a reference image when optimizing the next frame of image.
  • the exposure parameter of the image to be processed may be different from the exposure parameter of the above-mentioned at least one frame of image, for example, any one of exposure time, sensitivity, and aperture value may be different.
• Therefore, the image to be processed (or the intermediate image) and the above at least one frame of image (or the reference image) can complement each other in terms of exposure and brightness information.
• The device can control the camera to collect the above consecutive multiple frames of images with different exposure parameters. For example, the exposure time or the sensitivity may be gradually increased during collection so that the exposure parameters differ between frames; in this way the information carried by the consecutive multi-frame images is maximized, forming more effective information complementation.
  • step S340 a fusion process is performed on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
• The reference image and the intermediate image differ slightly in time, so their image information also differs; fusing the two images can repair image information in detail regions.
• For example, the image frequencies in the two images can be scanned and compared separately; areas whose image frequency in the reference image is higher than in the intermediate image are selected, and those areas of the reference image are fused with the intermediate image.
• Alternatively, weighting can be applied to pixels at the same position; for example, weights are determined according to the image frequencies of the pixels in the two images, and weighted fusion is performed.
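A minimal sketch of the frequency-weighted fusion idea. The patent does not specify how "image frequency" is measured; local gradient magnitude is used here purely as a stand-in frequency proxy, and the per-pixel weighting formula is an assumption.

```python
import numpy as np

def local_frequency(img):
    # Crude frequency proxy: magnitude of finite-difference gradients;
    # detailed/sharp regions score higher than flat ones.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def weighted_fusion(reference, intermediate, eps=1e-6):
    # Per-pixel weighted fusion: wherever the reference image carries
    # higher frequency than the intermediate image, it gets more weight.
    f_ref = local_frequency(reference)
    f_int = local_frequency(intermediate)
    w = f_ref / (f_ref + f_int + eps)
    return w * reference + (1.0 - w) * intermediate
```

On flat inputs both frequency maps are zero, so the intermediate image passes through unchanged; near edges in the reference image, the reference dominates.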
  • step S340 may include:
  • Time series filtering is performed on the reference image and the intermediate image.
• According to the time stamps of the reference image and the intermediate image, they can be arranged into an image sequence; the image information in the sequence is then converted into time-series signals, and the time-series signals are filtered, for example by Gaussian filtering or mean filtering, to achieve further optimization effects such as noise reduction.
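As a sketch of the temporal filtering idea, each pixel position can be treated as a time-series signal over the frame sequence and smoothed. The mean and Gaussian variants below are illustrative, not the patent's specific filters.

```python
import numpy as np

def temporal_mean_filter(frames):
    # Stack frames into (T, H, W) and average each pixel's time-series,
    # which suppresses temporally uncorrelated noise.
    return np.stack(frames, axis=0).mean(axis=0)

def temporal_gaussian_filter(frames, sigma=1.0):
    # Gaussian-weighted average over the time axis: frames nearer the
    # centre of the sequence contribute more.
    t = np.arange(len(frames)) - (len(frames) - 1) / 2.0
    w = np.exp(-(t ** 2) / (2 * sigma ** 2))
    w /= w.sum()
    return np.tensordot(w, np.stack(frames, axis=0), axes=1)
```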
• After the fusion processing, the brightness of the fused image may also be adjusted according to the above consecutive multiple frames to obtain the optimized image corresponding to the image to be processed. Since the intermediate image has undergone brightness mapping processing, its brightness may differ significantly from that of the other frames, causing brightness jumps in the video. The brightness of the fused image can therefore be adjusted according to the brightness of the other frames, for example by selecting the same area in another frame and in the fused image and performing an overall brightness adjustment on the fused image based on the brightness difference of that area in the two images, thereby ensuring brightness consistency between consecutive frames. It should be noted that, unlike the brightness mapping processing in step S320, the brightness adjustment performed here is generally brightness fine-tuning or brightness smoothing.
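The same-area brightness adjustment described above might be sketched as follows. How the shared area is chosen is not specified in the patent, so it is passed in explicitly here, and the mean-ratio gain is an illustrative assumption.

```python
import numpy as np

def match_region_brightness(fused, neighbor_frame, region):
    # Overall brightness fine-tuning: scale the fused image so that the
    # mean brightness of a shared region matches the neighbouring frame,
    # keeping luminance consistent across consecutive frames.
    target = neighbor_frame[region].mean()
    current = fused[region].mean()
    gain = target / max(current, 1e-6)
    return np.clip(fused * gain, 0, 255)
```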
• FIG. 9 shows an example of the effect after the image to be processed is optimized: the upper row shows the images to be processed and the lower row the corresponding optimized images. The brightness of the faces or gestures in the images to be processed is so low that they cannot be seen clearly; after optimization, clearer face or gesture information is obtained.
  • At least one of target detection, face recognition, and gesture recognition may be performed on the optimized image.
  • Figure 10 shows an example of face recognition on the optimized image. It can be seen that after optimization, the face part is clearly visible, so that the area where the face is located and the facial feature points are accurately detected.
• A data set containing 845 face and gesture images was used to test the optimization processing. Most of the images have illuminance values of 3 to 15 lux, and the distance between the face or gesture and the camera is 15–40 cm, corresponding to low-light and backlight environments.
• Without the optimization processing, the accuracy rate is 95.2%, the recall rate is 16.7%, and the F1 value (an evaluation index for algorithm models in the field of machine learning, calculated as F1 = 2 × precision × recall / (precision + recall)) is 28%;
• After the optimization processing, the accuracy rate is 99.1%, the recall rate is 82.3%, and the F1 value is 90%.
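The reported F1 values are consistent with the standard definition F1 = 2 × precision × recall / (precision + recall), which can be checked directly:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# f1(0.952, 0.167) ≈ 0.284, matching the reported 28%;
# f1(0.991, 0.823) ≈ 0.899, matching the reported 90%.
```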
  • the image processing method of the present exemplary embodiment has a very significant improvement effect on the recognition algorithm in a low-light environment and a backlight environment. Therefore, even in extreme lighting environments, results similar to those in normal lighting environments can be obtained, which increases the robustness of the recognition algorithm and broadens its application scenarios.
  • the image processing apparatus 1100 may include a processor 1110 and a memory 1120 .
  • the memory 1120 stores the following program modules:
  • the to-be-processed image acquisition module 1121 is used to acquire the to-be-processed image from consecutive multiple frames of images;
  • An intermediate image generation module 1122 configured to perform luminance mapping processing on the image to be processed to generate an intermediate image
  • a reference image acquisition module 1123 configured to acquire at least one frame of reference image by using the above-mentioned consecutive multiple frames of images
  • the image fusion processing module 1124 is used to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed;
  • the processor 1110 may be used to execute the above-described program modules.
  • the intermediate image generation module 1122 is configured to:
• the illumination information includes an illuminance value
  • the intermediate image generation module 1122 is configured to:
  • the image to be processed is subjected to global luminance mapping processing.
  • the intermediate image generation module 1122 is configured to:
  • the illuminance value of the image to be processed is determined according to the exposure parameter of the image to be processed; the exposure parameter includes at least one of exposure time, sensitivity, and aperture value.
  • the illumination information further includes a backlight ratio;
  • the intermediate image generation module 1122 is configured to:
  • tone-mapping processing is performed on the to-be-processed image.
  • the intermediate image generation module 1122 is configured to:
  • the backlight ratio of the to-be-processed image is determined according to the brightness histogram of the to-be-processed image.
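One way the histogram-based backlight ratio might be derived is sketched below; the dark/bright thresholds and the bright-region test are illustrative assumptions, not values from the patent.

```python
import numpy as np

def backlight_ratio(gray, dark_thresh=50, bright_thresh=200):
    # Build the brightness histogram, then treat the image as backlit when
    # it contains a significant bright region; the ratio is the proportion
    # of dark pixels in that case.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    dark = hist[:dark_thresh].sum() / total
    bright = hist[bright_thresh:].sum() / total
    return dark if bright > 0.05 else 0.0
```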
  • the intermediate image generation module 1122 is configured to:
  • the image to be processed is used as an intermediate image.
  • the reference image acquisition module 1123 is configured to:
  • An optimized image corresponding to the at least one frame of image is obtained as a reference image.
  • the exposure parameter of the image to be processed is different from the exposure parameter of the above-mentioned at least one frame of image.
  • the image to be processed is the i-th frame of images in consecutive multiple frames of images; the above-mentioned at least one frame of image includes the i-m-th frame of images to the i-1-th frame of images in the consecutive multi-frame images; i is a positive integer not less than 2, and m is any positive integer.
  • the image fusion processing module 1124 is configured to:
  • Time series filtering is performed on the reference image and the intermediate image.
  • the image fusion processing module 1124 is configured to:
  • the brightness of the fused image is adjusted according to the above-mentioned continuous multi-frame images, so as to obtain an optimized image corresponding to the image to be processed.
  • the to-be-processed image acquisition module 1121 is configured to:
  • the image processing apparatus 1100 is configured in a terminal device, and the terminal device includes an AON camera, which is used to collect the above-mentioned continuous multiple frames of RAW images.
  • the memory 1120 further includes the following program modules:
  • the image recognition application module is used to perform at least one process of target detection, face recognition, and gesture recognition on the above-mentioned optimized image.
  • FIG. 12 shows a schematic diagram of the architecture of this exemplary embodiment.
• The electronic device is configured with an AON camera and runs the AON camera service, which can perform the underlying processing of the image through the image signal processor; after processing, the corresponding optimized image is obtained and provided to the AON software service.
  • the AON software service can perform monitoring, recognition and other services through the digital signal processor, such as face recognition and gesture recognition on the optimized image, obtain the corresponding recognition results, and provide the recognition results to the application service.
  • the application service can run related applications through the main processor, and use the above face and gesture recognition results to implement specific interactive instructions, such as screen locking and unlocking, and user interface page turning.
• Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which can be implemented in the form of a program product including program code; when the program product is run on an electronic device, the program code enables the electronic device to execute the method described above.
• The program product may be implemented as a portable compact disk read-only memory (CD-ROM) containing program code, and may be run on an electronic device such as a personal computer.
  • the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • the program product may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer readable signal medium may include a propagated data signal in baseband or as part of a carrier wave with readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a readable signal medium can also be any readable medium other than a readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
• Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server execute on.
  • the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (eg, using an Internet service provider business via an Internet connection).
  • the exemplary embodiments described herein may be implemented by software, or by a combination of software and necessary hardware. Therefore, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of software products, and the software products may be stored in a non-volatile storage medium (which may be CD-ROM, U disk, mobile hard disk, etc.) or on the network , including several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the exemplary embodiment of the present disclosure.
  • modules or units of the apparatus for action performance are mentioned in the above detailed description, this division is not mandatory. Indeed, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units to be embodied.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method, an image processing apparatus, a storage medium and an electronic device. The image processing method comprises: acquiring, from among multiple continuous frames of images, an image to be processed (S310); performing luminance mapping processing on the image to be processed and generating an intermediate image (S320); acquiring at least one frame of reference image by using the multiple continuous frames of images (S330); and performing time sequence filtering processing on the reference image and the intermediate image so as to obtain an optimized image corresponding to the image to be processed (S340). By means of the method, luminance problems in images are alleviated, image noise reduction and the restoration of missing local information are achieved, and a high-quality image optimization result is obtained.

Description

Image processing method, device, storage medium and electronic device

Technical field

The present disclosure relates to the technical field of image processing, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.

Background

Image brightness refers to the brightness of an image and is an important factor affecting people's visual experience when viewing an image. When the image brightness is inappropriate, such as too high or too low, the content of the image cannot be fully presented; for example, faces and text in the image become difficult to recognize, which degrades image quality.

SUMMARY OF THE INVENTION

The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium and an electronic device, so as to improve the brightness problem in an image at least to a certain extent.

According to a first aspect of the present disclosure, an image processing method is provided, comprising: acquiring an image to be processed from consecutive multi-frame images; performing luminance mapping processing on the image to be processed to generate an intermediate image; acquiring at least one frame of reference image using the consecutive multi-frame images; and performing fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.

According to a second aspect of the present disclosure, there is provided an image processing apparatus, including a processor, wherein the processor is configured to execute the following program modules stored in a memory: a to-be-processed image acquisition module, configured to acquire an image to be processed from consecutive multi-frame images; an intermediate image generation module, configured to perform brightness mapping processing on the image to be processed to generate an intermediate image; a reference image acquisition module, configured to acquire at least one frame of reference image using the consecutive multi-frame images; and an image fusion processing module, configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.

According to a third aspect of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the image processing method of the first aspect and possible implementations thereof.

According to a fourth aspect of the present disclosure, there is provided an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the image processing method of the first aspect and possible implementations thereof by executing the executable instructions.

The technical solution of the present disclosure has the following beneficial effects:

On the one hand, performing brightness mapping processing on the image to be processed can improve brightness problems in the image, such as brightness that is too low or too high and unbalanced local brightness; fusing the reference image and the intermediate image can achieve noise reduction and repair missing local information, yielding an optimized image whose content is clearly visible. On the other hand, based only on the consecutive multi-frame images collected during video shooting or image preview, this solution can optimize any one of the frames without external information, giving low implementation cost and high practicability.
Description of drawings

FIG. 1A shows a face image captured in a low-light environment;

FIG. 1B shows a face image captured in a backlit environment;

FIG. 2 shows a schematic structural diagram of an electronic device in this exemplary embodiment;

FIG. 3 shows a flowchart of an image processing method in this exemplary embodiment;

FIG. 4 shows a flowchart of a method for acquiring an image to be processed in this exemplary embodiment;

FIG. 5 shows a schematic diagram of converting a RAW image into a single-channel image in this exemplary embodiment;

FIG. 6 shows a schematic diagram of mapping curves in this exemplary embodiment;

FIG. 7 shows a flowchart of a luminance mapping processing method in this exemplary embodiment;

FIG. 8 shows a flowchart of a method for obtaining a reference image in this exemplary embodiment;

FIG. 9 shows an example diagram of image processing in this exemplary embodiment;

FIG. 10 shows an example diagram of image processing and face recognition in this exemplary embodiment;

FIG. 11 shows a schematic structural diagram of an image processing apparatus in this exemplary embodiment;

FIG. 12 shows a schematic diagram of an architecture in this exemplary embodiment.
具体实施方式Detailed ways
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些实施方式使得本公开将更加全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施方式中。在下面的描述中,提供许多具体细节从而给出对本公开的实施方式的充分理解。然而,本领域技术人员将意识到,可以实践本公开的技术方案而省略所述特定细节中的一个或更多,或者可以采用其它的方法、组元、装置、步骤等。在其它情况下,不详细示出或描述公知技术方案以避免喧宾夺主而使得本公开的各方面变得模糊。Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments, however, can be embodied in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided in order to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of the specific details, or other methods, components, devices, steps, etc. may be employed. In other instances, well-known solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
此外,附图仅为本公开的示意性图解,并非一定是按比例绘制。图中相同的附图标记表示相同或类似的部分,因而将省略对它们的重复描述。附图中所示的一些方框图是功能实体,不一定必须与物理或逻辑上独立的实体相对应。可以采用软件形式来实现这些功能实体,或在一个或多个硬件模块或集成电路中实现这些功能实体,或在不同网络和/或处理器装置和/或微控制器装置中实现这些功能实体。Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repeated descriptions will be omitted. Some of the block diagrams shown in the figures are functional entities that do not necessarily necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
拍摄图像时的环境光照条件与拍摄设备的曝光参数等均会影响到图像亮度。其中环境光照条件的影响更大,在弱光照、背光、极强光源等较为极端的环境中,拍摄的图像可能缺失重要的部分信息。例如,参考图1A所示,在拍摄人像时,如果环境光照很弱,即弱光照环境,则所拍摄的图像整体亮度很低,难以识别人脸;参考图1B所示, 如果光源位于被拍摄的人身后,即背光环境,则人脸部分的亮度较低,同样难以识别。因此,需要对极端环境中所拍摄的图像进行亮度改善。The ambient light conditions and the exposure parameters of the shooting equipment when taking the image will affect the brightness of the image. Among them, the influence of ambient light conditions is greater. In relatively extreme environments such as weak light, backlight, and extremely strong light sources, the captured images may lack important part of the information. For example, as shown in FIG. 1A, when shooting a portrait, if the ambient light is very weak, that is, a weak light environment, the overall brightness of the captured image is very low, and it is difficult to recognize the face; as shown in FIG. 1B, if the light source is located at the Behind the person, that is, the backlight environment, the brightness of the face part is low, and it is also difficult to identify. Therefore, there is a need to improve the brightness of images captured in extreme environments.
为此,本公开的示例性实施方式首先提供一种图像处理方法,其应用场景包括但不限于:在移动终端的智能交互场景中,通过AON(Always ON,常开)摄像头采集、监听并检测人脸与手势信息,以实现特定的交互功能,如检测到人脸时,自动激活显示屏,检测到翻页手势时,对用户界面进行自动翻页;然而,在弱照明、背光、极强光源等环境中,AON摄像头所采集的人脸、手势图像可能亮度过低或过高,影响上述检测的准确度;通过本示例性实施方式的图像处理方法,可以对图像亮度进行改善,以提高人脸、手势等检测的准确度。在目标追踪的场景中,需要实时采集图像,识别图像中的目标物;如果处于上述极端照明环境中,图像中目标物部分的亮度过低或过高,导致难以识别;同样可以通过本示例性实施方式的图像处理方法进行改善,提高识别准确度。To this end, the exemplary embodiments of the present disclosure first provide an image processing method, the application scenarios of which include but are not limited to: in the intelligent interaction scenario of a mobile terminal, collecting, monitoring and detecting through an AON (Always ON) camera Face and gesture information to achieve specific interactive functions, such as automatically activating the display when a face is detected, and automatically turning the user interface when a page turning gesture is detected; however, in weak lighting, backlight, extremely strong In an environment such as a light source, the brightness of the face and gesture images collected by the AON camera may be too low or too high, which affects the accuracy of the above detection; through the image processing method of this exemplary embodiment, the image brightness can be improved to increase the The accuracy of detection of faces, gestures, etc. In the scene of target tracking, it is necessary to collect images in real time to identify the target in the image; if it is in the above-mentioned extreme lighting environment, the brightness of the target part in the image is too low or too high, making it difficult to identify; The image processing method of the embodiment is improved to improve the recognition accuracy.
本公开的示例性实施方式还提供一种电子设备,用于执行上述图像处理方法。该电子设备包括但不限于计算机、智能手机、平板电脑、游戏机、可穿戴设备等。一般的,电子设备包括处理器和存储器。存储器用于存储处理器的可执行指令,也可以存储应用数据,如图像数据、游戏数据等;处理器配置为经由执行可执行指令来执行本示例性实施方式中的图像处理方法。Exemplary embodiments of the present disclosure also provide an electronic device for executing the above-described image processing method. The electronic devices include, but are not limited to, computers, smart phones, tablet computers, game consoles, wearable devices, and the like. Typically, an electronic device includes a processor and a memory. The memory is used to store executable instructions of the processor, and may also store application data, such as image data, game data, etc.; the processor is configured to execute the image processing method in this exemplary embodiment by executing the executable instructions.
下面以图2中的移动终端200为例,对上述电子设备的构造进行示例性说明。本领域技术人员应当理解,除了特别用于移动目的的部件之外,图2中的构造也能够应用于固定类型的设备。The following takes the mobile terminal 200 in FIG. 2 as an example to illustrate the structure of the above electronic device. It will be understood by those skilled in the art that the configuration in Figure 2 can also be applied to stationary type devices, in addition to components specifically for mobile purposes.
如图2所示,移动终端200具体可以包括:处理器210、内部存储区221、外部存储器接口222、USB(Universal Serial Bus,通用串行总线)接口230、充电管理模块240、电源管理模块241、电池242、天线1、天线2、移动通信模块250、无线通信模块260、音频模块270、扬声器271、受话器272、麦克风273、耳机接口274、传感器模块280、显示屏290、摄像模组291、指示器292、马达293、按键294以及SIM(Subscriber Identification Module,用户标识模块)卡接口295等。As shown in FIG. 2 , the mobile terminal 200 may specifically include: a processor 210 , an internal storage area 221 , an external memory interface 222 , a USB (Universal Serial Bus, Universal Serial Bus) interface 230 , a charging management module 240 , and a power management module 241 , battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headphone jack 274, sensor module 280, display screen 290, camera module 291, Indicator 292, motor 293, key 294, SIM (Subscriber Identification Module, Subscriber Identification Module) card interface 295 and so on.
The processor 210 may include one or more processing units. For example, the processor 210 may include an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor, and/or an NPU (Neural-Network Processing Unit), etc.
In some embodiments, the processor 210 may include one or more interfaces, through which connections are formed with other components of the mobile terminal 200.
The internal storage area 221 may be used to store computer-executable program code, the executable program code including instructions. The internal storage area 221 may include volatile memory, such as DRAM (Dynamic Random Access Memory) and SRAM (Static Random Access Memory), and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, UFS (Universal Flash Storage), etc. The processor 210 executes various functional applications and data processing of the mobile terminal 200 by running the instructions stored in the internal storage area 221 and/or instructions stored in a memory provided in the processor.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to expand the storage capacity of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing music and video files.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or to connect an earphone or other electronic devices.
The charging management module 240 is configured to receive charging input from a charger. While charging the battery 242, the charging management module 240 may also supply power to the device through the power management module 241; the power management module 241 may also monitor the state of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. The mobile communication module 250 may provide solutions for wireless communication applied on the mobile terminal 200, including 2G/3G/4G/5G. The wireless communication module 260 may provide solutions for wireless communication applied on the mobile terminal 200, including WLAN (Wireless Local Area Network) (such as a Wi-Fi (Wireless Fidelity) network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), IR (Infrared), etc.
The mobile terminal 200 may implement a display function through the GPU, the display screen 290, the AP, and the like.
The mobile terminal 200 may implement a shooting function through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display screen 290, the AP, and the like. The camera module 291 may include various types of cameras, such as an AON camera, a wide-angle camera, a high-definition camera, etc. A camera may be arranged at any position of the mobile terminal 200, for example, on the side with the display screen 290 to form a front camera, or on the side opposite to the display screen 290 to form a rear camera.
The mobile terminal 200 may implement audio functions through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the headphone jack 274, the AP, and the like.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, etc., to implement different sensing and detection functions.
The indicator 292 may be an indicator light, which may be used to indicate the charging state and changes in battery level, and may also be used to indicate messages, missed calls, notifications, and the like. The motor 293 may generate vibration prompts, and may also be used for touch vibration feedback, etc. The keys 294 include a power key, volume keys, and the like.
The mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards, so as to implement functions such as calling and data communication.
FIG. 3 shows a schematic flow of the image processing method in this exemplary embodiment, which may include:
Step S310, acquiring an image to be processed from consecutive multiple frames of images;
Step S320, performing luminance mapping processing on the image to be processed to generate an intermediate image;
Step S330, acquiring at least one frame of reference image by using the above consecutive multiple frames of images;
Step S340, performing fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
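As an illustrative sketch (not the claimed implementation), steps S310 to S340 can be outlined as a minimal single-channel pipeline; the gamma curve, the choice of m preceding frames as references, and the per-pixel averaging used for fusion are all simplified stand-ins for the processing described in the following sections:

```python
def luminance_map(img, gamma=0.5):
    # Step S320 stand-in: global brightness mapping via a power-law curve
    # (gamma < 1 brightens; the disclosure configures curves per scene).
    return [int(255 * (p / 255) ** gamma) for p in img]

def get_reference(frames, i, m=2):
    # Step S330 stand-in: take up to m frames preceding frame i as references.
    return frames[max(0, i - m):i]

def fuse(intermediate, references):
    # Step S340 stand-in: simple per-pixel averaging in place of fusion.
    stack = [intermediate] + references
    return [sum(px) // len(stack) for px in zip(*stack)]

def optimize(frames, i):
    # Steps S310-S340 applied to the i-th frame of a continuous capture.
    intermediate = luminance_map(frames[i])
    refs = get_reference(frames, i)
    return fuse(intermediate, refs)
```

Here each "image" is a flat list of 8-bit luminance values, matching the single-channel representation introduced below.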
Through the above method, on the one hand, performing luminance mapping processing on the image to be processed can improve brightness problems in the image, such as excessively low brightness, excessively high brightness, and unbalanced local brightness, while fusing the reference image with the intermediate image can achieve noise reduction and repair missing local information in the image, thereby obtaining an optimized image in which the image content is clearly visible, which is beneficial to further applications such as face recognition, gesture recognition, and target detection. Furthermore, this solution improves the robustness of image capturing and processing in extreme lighting environments and reduces the dependence on the performance of hardware such as cameras and image sensors; for example, for an image sensor with low photosensitivity, the captured image can be optimized through this solution to obtain a high-quality image, which is beneficial to reducing hardware costs. On the other hand, based on the consecutive multiple frames of images collected during video shooting or image preview, this solution can optimize any one of those frames without external information, and thus has low implementation cost and high practicability.
Each step in FIG. 3 is specifically described below.
In step S310, the image to be processed is acquired from consecutive multiple frames of images.
The above consecutive multiple frames of images may be images continuously collected by a camera, for example, when the camera shoots a video or continuously collects preview images. The image to be processed may be any one of these frames. Generally, when the camera collects images, the currently collected frame is taken as the image to be processed in real time and the method of FIG. 3 is executed, thereby realizing the processing of each frame of image.
In an optional implementation, referring to FIG. 4, step S310 may include:
Step S410, acquiring a current RAW image from consecutive multiple frames of RAW images;
Step S420, performing channel conversion processing on the current RAW image to obtain the image to be processed.
A RAW image refers to an image stored in the RAW format, and generally refers to an original image collected by an image sensor in a camera. The image sensor collects light signals through a Bayer filter and converts them into digital signals to obtain a RAW image. Referring to the left part of FIG. 5, each pixel in a RAW image has only one of the RGB colors, and the pixels are arranged in a Bayer array. In an optional implementation, an AON camera may be provided on the terminal device to collect the above consecutive multiple frames of RAW images.
In this exemplary embodiment, a currently collected frame of RAW image is acquired. Since the color channels of the pixels differ, channel conversion processing is performed on the current RAW image to obtain the image to be processed. The channel conversion processing unifies the color channels of the pixels, and includes but is not limited to the following two methods:
1. Convert each pixel of the current RAW image to any one of the RGB channels, for example, uniformly to the R channel. Generally, since the human eye is more sensitive to green, referring to FIG. 5, the R channel and the B channel of the current RAW image may be converted to the G channel to obtain an image in which all pixels are G-channel pixels. Specifically, the pixel values of the R channel and the B channel may be mapped to G-channel pixel values, for example:
P_R × a_1 = P_G, P_B × a_2 = P_G  (1)
where P_R, P_G, and P_B represent the pixel values of the R channel, the G channel, and the B channel, respectively; a_1 and a_2 are the coefficients for converting the R channel and the B channel to the G channel, respectively, and may be empirical coefficients, for example, a_1 = 0.299/0.587 ≈ 0.509 (0.299 and 0.587 are the coefficients of the R channel and the G channel in grayscale conversion, respectively) and a_2 = 0.114/0.587 ≈ 0.194 (0.114 is the coefficient of the B channel in grayscale conversion). It can be seen that when converting the R and B channels to the G channel, the consistency of brightness is taken into account, that is, the brightness level of the same pixel is basically the same before and after the conversion.
2. Convert the current RAW image into a grayscale image. For example, the R, G, and B channels may each be converted into grayscale values according to certain coefficient ratios, or the RAW image may be demosaiced to obtain an RGB image, and then the grayscale value (Gray) of each pixel is calculated by the following formula (2) to obtain a grayscale image.
Gray = P_R × 0.299 + P_G × 0.587 + P_B × 0.114  (2)
In the current RAW image, each pixel requires 10 bits, of which 8 bits record the pixel value and 2 bits record the channel information. A single-channel image is obtained through the channel conversion processing (a grayscale image can be regarded as a special single-channel image); since there is no need to record channel information, each pixel requires only 8 bits. Compared with an RGB image, a single-channel image greatly reduces the amount of data and saves the bit width used for recording channel information, thereby reducing the amount of data in subsequent processing, which is beneficial to realizing real-time optimization processing. Moreover, a single-channel image can characterize the brightness of the image to be processed, so its information is relatively sufficient.
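As an illustrative sketch, the two per-pixel conversion methods can be written directly from formulas (1) and (2), using the empirical coefficients given above (rounding to the nearest integer is an assumption; the disclosure does not fix a rounding rule):

```python
# Empirical coefficients from the disclosure: a_1 = 0.299/0.587, a_2 = 0.114/0.587.
A1 = 0.299 / 0.587
A2 = 0.114 / 0.587

def to_g_channel(pixel_value, channel):
    """Method 1: map an R or B pixel to an equivalent G-channel value (formula (1))."""
    if channel == "R":
        return round(pixel_value * A1)
    if channel == "B":
        return round(pixel_value * A2)
    return pixel_value  # G pixels are kept as-is

def to_gray(r, g, b):
    """Method 2: grayscale value of a demosaiced RGB pixel (formula (2))."""
    return round(r * 0.299 + g * 0.587 + b * 0.114)
```

Note that the coefficient sum 0.299 + 0.587 + 0.114 = 1, so a uniform white pixel maps to 255 and the brightness level is preserved by both methods.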
Continuing to refer to FIG. 3, in step S320, luminance mapping processing is performed on the image to be processed to generate an intermediate image.
Luminance mapping processing refers to mapping the luminance value of each pixel in the image to be processed to a new value. The luminance values of all pixels may be adjusted in the same direction, for example uniformly increased or uniformly decreased, and the luminance changes of different pixels may be the same or different; alternatively, the luminance values of different pixels may be adjusted in different directions, for example reducing the luminance of excessively bright parts, increasing the luminance of excessively dark parts, adjusting the gray levels of the image, and so on.
In this exemplary embodiment, a mapping relationship between the luminance values before and after mapping may be pre-configured, and this mapping relationship may be related to the luminance level or luminance distribution of the image to be processed itself; the luminance mapping processing is then implemented according to the mapping relationship to obtain the intermediate image.
In an optional implementation, step S320 may include:
performing luminance mapping processing on the image to be processed according to illumination information of the image to be processed.
The illumination information is information representing the ambient illumination condition under which the image to be processed was captured, and may include, for example, an illuminance value and a backlight ratio. The illuminance value is a measure of the amount of light received per unit area of the captured scene. The backlight ratio is the proportion of the backlit region in the image to be processed. The illumination information can reflect whether the image to be processed has illumination defects, such as overall overexposure, overall underexposure, or an unbalanced local illumination distribution, so that targeted luminance mapping processing can be performed on the image to be processed.
Exemplarily, the illuminance value of the image to be processed may be determined according to the exposure parameters of the image to be processed. The exposure parameters include at least one of the exposure time (Exptime), the sensitivity (ISO), and the aperture value (F-number). Generally, the exposure parameters are recorded when the image to be processed is captured; for a camera on a smartphone, the aperture value is usually fixed. The illuminance value is estimated from the exposure parameters; the more comprehensive the acquired exposure parameters, the more accurate the estimated illuminance value. For example, reference may be made to the following formula (3):
Lumi = a_3 × F² / (Exptime × ISO)  (3)
where Lumi represents the illuminance value in lux; a_3 represents an empirical coefficient, which may be, for example, 12.4, and may be adjusted according to the actual shooting scene or camera performance; F is the aperture value, i.e., the ratio of the focal length of the lens to its effective diameter; Exptime is the exposure time in seconds; and ISO is the sensitivity.
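Formula (3) can be evaluated directly; in the sketch below, the coefficient 12.4 is the example value given above, while the f/2.0, 1/100 s, ISO 400 exposure settings are purely illustrative:

```python
def estimate_illuminance(f_number, exptime_s, iso, a3=12.4):
    """Estimate scene illuminance in lux from exposure parameters (formula (3))."""
    return a3 * f_number ** 2 / (exptime_s * iso)

# Illustrative exposure settings: f/2.0, 1/100 s exposure time, ISO 400.
lux = estimate_illuminance(2.0, 0.01, 400)
```

Smaller exposure times or lower ISO values for the same scene indicate a brighter environment, which is why they appear in the denominator.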
The illuminance value of the image to be processed reflects its global illumination. When the illuminance value is too low, the image to be processed as a whole is underexposed; when the illuminance value is too high, the image to be processed as a whole is overexposed. Therefore, in an optional implementation, step S320 may include:
when the illuminance value of the image to be processed satisfies a first preset condition, performing global luminance mapping processing on the image to be processed.
The first preset condition may include: the illuminance value being less than a first threshold, or greater than a second threshold. The first threshold is a threshold for measuring underexposure, and the second threshold is a threshold for measuring overexposure. The two thresholds may be determined empirically, or adjusted according to the actual scene; for example, the first threshold and the second threshold may be appropriately increased in the daytime and appropriately decreased at night. When the illuminance value of the image to be processed satisfies the first preset condition, underexposure or overexposure exists, and therefore global luminance mapping processing is performed on the image. Specifically, this may include the following steps:
when the illuminance value of the image to be processed is less than the first threshold, performing global luminance up-mapping processing on the image to be processed, that is, global brightening;
when the illuminance value of the image to be processed is greater than the second threshold, performing global luminance down-mapping processing on the image to be processed, that is, global darkening.
Taking global brightening as an example, the luminance values of different pixels in the image to be processed may be mapped to higher luminance values according to a preset mapping curve. The mapping curve may be a linear curve, reflecting a linear mapping relationship, or a quadratic or other nonlinear curve, reflecting a nonlinear mapping relationship. It should be noted that, in this exemplary embodiment, multiple mapping curves may be configured, corresponding to different mapping strengths; the higher the mapping strength, the more significant the brightening. In specific processing, the mapping strength and the mapping curve to be used may be determined according to the illuminance value of the image to be processed or the requirements of the actual scene; for example, the lower the illuminance value, the higher the mapping strength, and a mapping curve with a larger overall slope is used.
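One common nonlinear family for such curves is the power-law (gamma) curve; the sketch below is an illustrative stand-in for the preset mapping curves, where the `strength` parameter (an assumption, not a term from the disclosure) plays the role of mapping strength:

```python
def global_brighten(pixels, strength=1.5):
    """Global luminance up-mapping with a power-law curve.

    strength > 1 gives gamma < 1, which lifts mid-tones; larger strength
    means a steeper curve at low luminance, i.e., more brightening.
    """
    gamma = 1.0 / strength
    return [round(255 * (p / 255) ** gamma) for p in pixels]
```

Global darkening is the symmetric case (strength < 1, i.e., gamma > 1), mapping every pixel to a lower value with the same curve family.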
In an optional implementation, step S320 may further include:
when the illuminance value of the image to be processed does not satisfy the first preset condition, determining the backlight ratio of the image to be processed;
when the backlight ratio of the image to be processed satisfies a second preset condition, performing tone mapping processing on the image to be processed.
If the illuminance value of the image to be processed does not satisfy the first preset condition, the global illumination of the image to be processed is relatively suitable; its backlight ratio is then calculated to determine whether a locally unsuitable condition exists.
In an optional implementation, the backlight ratio of the image to be processed may be determined according to the luminance histogram of the image to be processed. Specifically, this may include: setting multiple luminance levels, and counting the proportion of pixels in each luminance level to form the luminance histogram of the image to be processed; if there are at least two luminance levels whose luminance difference reaches a set value (the set value may be an empirical value, or may be determined according to the specific scene; for example, when a prominent light source exists in the scene, the set value is generally larger, while under natural light or without a prominent light source it is generally smaller), determining the lower of those luminance levels as a backlit part; and counting the luminance histogram over all backlit parts to obtain the proportion of the backlit parts in the image to be processed, that is, the backlight ratio.
In an optional implementation, the highest luminance value in the image to be processed may instead be acquired, and a luminance threshold may be determined according to the highest luminance value; for example, the highest luminance value may be multiplied by a fixed coefficient less than 1 to obtain the luminance threshold. Pixels below the luminance threshold are screened out and the connected regions among them are extracted, that is, isolated pixels are filtered out; the proportion of the connected regions in the image to be processed is taken as the backlight ratio.
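A simplified one-dimensional sketch of this threshold-based estimate follows; the 0.5 coefficient, the minimum run length, and the use of run lengths as a stand-in for two-dimensional connected-region extraction are all illustrative assumptions:

```python
def backlight_ratio(pixels, coeff=0.5, min_run=3):
    """Estimate the backlight ratio of a row of pixels.

    Threshold at a fraction of the peak luminance, then keep only runs of
    at least `min_run` consecutive dark pixels (a 1-D stand-in for
    connected-region extraction, which filters out isolated dark pixels).
    """
    threshold = max(pixels) * coeff
    dark = [p < threshold for p in pixels]
    total, run = 0, 0
    for d in dark + [False]:        # trailing sentinel flushes the final run
        if d:
            run += 1
        else:
            if run >= min_run:      # discard isolated dark pixels
                total += run
            run = 0
    return total / len(pixels)
```

A full implementation would operate on the 2-D image and use a connected-component labeling pass in place of the run-length filter.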
The second preset condition may include: the backlight ratio being greater than a third threshold. It should be noted that the above calculation of the backlight ratio estimates the backlight phenomenon that may exist in the image to be processed; when the backlight ratio is greater than the third threshold, the probability that a backlight phenomenon exists can be considered high. The third threshold is a threshold for measuring whether a backlight phenomenon exists, and may be determined empirically or according to the actual scene. In this case, tone mapping processing is performed on the image to be processed. Tone mapping processing is essentially still a mapping of luminance; unlike the above global luminance mapping processing, however, tone mapping processing can change the luminance range or luminance level distribution of the image, and the luminance adjustment directions of individual pixels may differ. For example, in global luminance mapping processing, all pixels in the entire image are adjusted in the same direction, such as increasing or decreasing the luminance as a whole, to adjust the overall luminance level; in tone mapping processing, different pixels in the image may be adjusted in different directions, for example mapping the luminance of brighter parts downward and the luminance of darker parts upward, so as to adjust the luminance distribution.
In this exemplary embodiment, the tone mapping processing may be implemented through a mapping curve. FIG. 6 shows mapping curves used in the tone mapping processing, in which the abscissa is the luminance value before mapping, the ordinate is the luminance value after mapping, and curves A, B, and C are mapping curves under different mapping strengths, with curve C having the highest mapping strength and curve A the lowest. It can be seen that after the luminance values are mapped through such a curve, the luminance distribution of the image to be processed is mapped into a smaller range; the higher the mapping strength, the smaller the mapped luminance range, thereby alleviating the problem of local invisibility caused by an extremely uneven local luminance distribution. Generally, the higher the backlight ratio, or the larger the difference between the backlit part and the highlighted part, the higher the mapping strength used. It can also be seen that the mapping curve used in the tone mapping processing differs from that used in the above global luminance mapping processing: the former is generally a nonlinear curve, one segment of which has a slope greater than 45 degrees (the luminance up-mapped part, generally the lower-luminance segment) while another segment has a slope less than 45 degrees (the luminance down-mapped part, generally the higher-luminance segment); the latter may be a linear or nonlinear curve whose slope is greater than 45 degrees, or less than 45 degrees, over the entire curve.
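One common curve family with exactly this shape (slope above 45 degrees in the shadows, below 45 degrees in the highlights) is the logarithmic curve; the sketch below is an illustrative stand-in for curves A to C, with the parameter `k` (an assumption, not from the disclosure) playing the role of mapping strength:

```python
import math

def tone_map(pixels, k=8.0):
    """Logarithmic tone curve: lifts dark pixels (local slope > 1) and
    compresses bright ones (local slope < 1); larger k means higher
    mapping strength and a narrower effective luminance range."""
    scale = 255 / math.log(1 + k)
    return [round(scale * math.log(1 + k * p / 255)) for p in pixels]
```

The endpoints are fixed (0 maps to 0, 255 maps to 255), so only the distribution of intermediate luminance levels changes, matching the behavior described for FIG. 6.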
In an optional implementation, when the backlight ratio of the image to be processed does not satisfy the second preset condition, both the global and the local illumination of the image to be processed are relatively good, so the luminance mapping processing may be skipped and the image to be processed may be used as the intermediate image.
FIG. 7 shows a schematic flow of the luminance mapping processing, including:
Step S710, determining the illuminance value Lumi of the image to be processed;
Step S720, comparing the illuminance value with a first threshold T1 and a second threshold T2; when Lumi < T1 or Lumi > T2, determining that the first preset condition is satisfied and performing step S730; otherwise, performing step S740;
Step S730, performing global luminance mapping processing on the image to be processed;
Step S740, determining the backlight ratio (BL ratio) of the image to be processed;
Step S750, comparing the backlight ratio with a third threshold T3; when BL ratio > T3, determining that the second preset condition is satisfied and performing step S760; otherwise, leaving the image to be processed unprocessed and jumping to step S770;
Step S760, performing tone mapping processing on the image to be processed;
Step S770, obtaining the intermediate image.
In the above manner, the brightness of the image to be processed is improved at both the global and the local level, and the image information missing due to brightness problems can be recovered to a certain extent in the resulting intermediate image.
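The decision flow of steps S710 to S770 can be sketched as a small dispatcher; the threshold values and the three mapping callables below are placeholders for the scene-dependent configuration described above, not values from the disclosure:

```python
def make_intermediate(image, lumi, bl_ratio, t1=5.0, t2=5000.0, t3=0.3,
                      brighten=None, darken=None, tone_map=None):
    """Steps S710-S770: choose a luminance mapping from the illuminance
    value and the backlight ratio. Returns (branch_taken, intermediate).
    Thresholds t1/t2/t3 are illustrative placeholders."""
    if lumi < t1:                    # S720 -> S730: globally underexposed
        return ("global_up", brighten(image) if brighten else image)
    if lumi > t2:                    # S720 -> S730: globally overexposed
        return ("global_down", darken(image) if darken else image)
    if bl_ratio > t3:                # S750 -> S760: backlit scene
        return ("tone_map", tone_map(image) if tone_map else image)
    return ("none", image)           # S770: pass through unchanged
```

Each branch ends at step S770 with an intermediate image, which is then fused with the reference images in step S340.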
Continuing to refer to FIG. 3, in step S330, at least one frame of reference image is acquired by using the above consecutive multiple frames of images.
The reference image is one or more frames of images that are temporally continuous with the image to be processed, and can supplement the image information of the image to be processed.
In an optional implementation, referring to FIG. 8, step S330 may include:
Step S810, acquiring, from the above consecutive multiple frames of images, at least one frame of image other than the image to be processed;
Step S820, performing luminance mapping processing on the at least one frame of image to obtain the reference image;
Step S830, acquiring an optimized image corresponding to the at least one frame of image as the reference image.
Any one or more frames of images other than the image to be processed may be selected from the above consecutive multiple frames of images. Generally, the more images are selected and the closer they are in time to the image to be processed, the more beneficial this is to the optimization effect. For example, denoting the image to be processed as the i-th frame of the above consecutive multiple frames of images, where i is a positive integer not less than 2, the (i−m)-th to (i−1)-th frames, that is, the m consecutive frames preceding the image to be processed, may be selected for subsequent optimization, where m is any positive integer. The value of m may be determined in combination with experience, actual requirements, and computing power.
Either step S820 or step S830 may be selectively executed. In step S820, the luminance mapping processing of the at least one frame of image may refer to the luminance mapping processing performed on the image to be processed in step S320 and FIG. 7. In step S830, the optimized image corresponding to the at least one frame of image may be an image obtained by optimizing that image using the method flow of FIG. 3. For example, for each frame of image captured by the camera in real time, the method flow of FIG. 3 may be used to obtain a corresponding optimized image, which may then serve as a reference image when optimizing the next frame.
In an optional implementation, the exposure parameters of the image to be processed may differ from those of the at least one frame of image, for example in any one of exposure time, sensitivity, or aperture value. Thus, the image to be processed or the intermediate image can complement the at least one frame of image or the reference image in terms of exposure and luminance information. Further, the device may control the camera to capture the above consecutive multi-frame images with different exposure parameters, for example by gradually increasing the exposure time or the sensitivity during capture, so that the exposure parameters differ between frames. This maximizes the information contained in the consecutive multi-frame images and forms more effective information complementation.
Continuing to refer to FIG. 3, in step S340, fusion processing is performed on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
The reference image and the intermediate image differ slightly in time and therefore in image information, so fusing the two images can repair image information in detail regions. For example, the image frequencies of the two images may be scanned and compared, regions where the image frequency of the reference image is higher than that of the intermediate image may be selected, and those regions of the reference image may be fused with the intermediate image.
During fusion, pixels at the same position may be weighted, for example by determining weights according to the image frequencies of the pixels in the two images and then performing weighted fusion.
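As a hedged sketch of the frequency-weighted fusion described above, using the local gradient magnitude as a stand-in for "image frequency" (the disclosure does not fix a particular frequency measure, and the helper names are illustrative):

```python
import numpy as np

def weighted_fuse(reference, intermediate, eps=1e-6):
    """Fuse two aligned single-channel images pixel-wise, weighting each
    pixel by a local frequency proxy (gradient magnitude). Flat regions,
    where both weights vanish, fall back to a plain 50/50 average."""
    def local_freq(img):
        gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
        return np.hypot(gx, gy)

    w_ref = local_freq(reference)
    w_mid = local_freq(intermediate)
    total = w_ref + w_mid
    alpha = np.where(total > eps, w_ref / np.maximum(total, eps), 0.5)
    return alpha * reference + (1.0 - alpha) * intermediate

# Two flat patches fuse to their average: (2 + 4) / 2 = 3
ref = np.full((4, 4), 2.0)
mid = np.full((4, 4), 4.0)
fused_example = weighted_fuse(ref, mid)
```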
In an optional implementation, step S340 may include:
performing temporal filtering processing on the reference image and the intermediate image.
Specifically, the reference image and the intermediate image may be arranged into an image sequence according to their timestamps, the image information in the sequence may be converted into temporal signals, and the temporal signals may be filtered, for example by Gaussian filtering or mean filtering, to achieve further optimization effects such as noise reduction.
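A minimal sketch of this timestamp-ordered temporal filtering, using a plain mean along the time axis (the disclosure also allows Gaussian or other filters; this choice and the helper name are assumptions):

```python
import numpy as np

def temporal_mean_filter(stamped_frames):
    """stamped_frames: list of (timestamp, image) pairs.
    Orders the frames by timestamp, stacks them along a time axis,
    and averages each pixel over time, which suppresses temporal noise."""
    ordered = [np.asarray(img, dtype=np.float64)
               for _, img in sorted(stamped_frames, key=lambda p: p[0])]
    return np.stack(ordered).mean(axis=0)

# Two observations of the same 2x2 patch average to 3.0
denoised = temporal_mean_filter([(2, np.full((2, 2), 4.0)),
                                 (1, np.full((2, 2), 2.0))])
```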
In an optional implementation, after the fusion processing of the reference image and the intermediate image, the luminance of the fused image may further be adjusted according to the above consecutive multi-frame images to obtain the optimized image corresponding to the image to be processed. Since the intermediate image has undergone luminance mapping processing, its luminance may differ noticeably from that of the other frames, causing luminance jumps in the video. Therefore, the luminance of the fused image may be adjusted according to the luminance of the other frames, for example by selecting the same region in another frame and in the fused image, and performing an overall luminance adjustment on the fused image based on the luminance difference of that region between the two images, thereby ensuring luminance consistency between consecutive frames. It should be noted that, unlike the luminance mapping processing in step S320, the luminance adjustment performed here is generally fine-tuning or luminance smoothing.
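The region-based luminance adjustment can be illustrated as follows (the additive-offset model, the fixed region, and the 0-255 value range are assumptions made for the sketch):

```python
import numpy as np

def match_region_brightness(fused, neighbor, region):
    """Shift the overall brightness of the fused image so that a chosen
    region matches the same region in a neighboring frame, smoothing
    frame-to-frame luminance jumps. 'region' is a (row_slice, col_slice)."""
    rows, cols = region
    delta = neighbor[rows, cols].mean() - fused[rows, cols].mean()
    return np.clip(fused + delta, 0.0, 255.0)

# The fused frame is 20 levels darker than its neighbor in the sampled
# region, so the whole frame is lifted by 20.
fused = np.full((4, 4), 100.0)
neighbor = np.full((4, 4), 120.0)
adjusted = match_region_brightness(fused, neighbor,
                                   (slice(0, 2), slice(0, 2)))
```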
FIG. 9 shows an example of the effect of optimizing images to be processed, in which the upper row shows the images to be processed and the lower row shows the corresponding optimized images. It can be seen that in the images to be processed, the faces or gestures have low luminance and cannot be seen clearly; after optimization, clearer face or gesture information is obtained.
In an optional implementation, after the optimized image corresponding to the image to be processed is obtained, at least one of target detection, face recognition, and gesture recognition may further be performed on the optimized image, so that more accurate detection or recognition results can be obtained. FIG. 10 shows an example of face recognition performed on the optimized image. It can be seen that after optimization, the face is clearly visible, so that the face region and facial feature points are accurately detected.
The image processing method of this exemplary embodiment was tested by optimizing a data set of 845 face and gesture images, most of which have illuminance values of 3 to 15 lux, with the face or gesture 15 to 40 cm from the camera, corresponding to low-light and backlit environments.
Without optimization, face and gesture recognition tests on this data set yielded a precision of 95.2%, a recall of 16.7%, and an F1 score of 28% (the F1 score is an evaluation metric for algorithm models in the machine learning field, computed as F1 = 2 × Precision × Recall / (Precision + Recall));
with optimization, the same tests yielded a precision of 99.1%, a recall of 82.3%, and an F1 score of 90%.
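These reported scores are consistent with the F1 formula given above, as a quick check shows:

```python
def f1_score(precision, recall):
    """F1 = 2*P*R/(P+R), the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported for the 845-image test set:
f1_before = f1_score(0.952, 0.167)  # without optimization, about 0.28
f1_after = f1_score(0.991, 0.823)   # with optimization, about 0.90
```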
It can be seen that the image processing method of this exemplary embodiment significantly improves recognition algorithms in low-light and backlit environments. Thus, even in extreme lighting environments, results close to those in normal lighting environments can be obtained, which increases the robustness of the recognition algorithm and broadens its application scenarios.
Exemplary embodiments of the present disclosure also provide an image processing apparatus. Referring to FIG. 11, the image processing apparatus 1100 may include a processor 1110 and a memory 1120, where the memory 1120 stores the following program modules:
a to-be-processed image acquisition module 1121, configured to acquire an image to be processed from consecutive multi-frame images;
an intermediate image generation module 1122, configured to perform luminance mapping processing on the image to be processed to generate an intermediate image;
a reference image acquisition module 1123, configured to acquire at least one frame of reference image using the above consecutive multi-frame images;
an image fusion processing module 1124, configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
The processor 1110 may be configured to execute the above program modules.
In an optional implementation, the intermediate image generation module 1122 is configured to:
perform luminance mapping processing on the image to be processed according to illumination information of the image to be processed.
In an optional implementation, the illumination information includes an illuminance value, and the intermediate image generation module 1122 is configured to:
perform global luminance mapping processing on the image to be processed when the illuminance value of the image to be processed satisfies a first preset condition.
In an optional implementation, the intermediate image generation module 1122 is configured to:
determine the illuminance value of the image to be processed according to exposure parameters of the image to be processed, the exposure parameters including at least one of exposure time, sensitivity, and aperture value.
In an optional implementation, the illumination information further includes a backlight ratio, and the intermediate image generation module 1122 is configured to:
determine the backlight ratio of the image to be processed when the illuminance value of the image to be processed does not satisfy the first preset condition; and
perform tone mapping processing on the image to be processed when the backlight ratio of the image to be processed satisfies a second preset condition.
In an optional implementation, the intermediate image generation module 1122 is configured to:
determine the backlight ratio of the image to be processed according to a luminance histogram of the image to be processed.
In an optional implementation, the intermediate image generation module 1122 is configured to:
use the image to be processed as the intermediate image when the backlight ratio of the image to be processed does not satisfy the second preset condition.
In an optional implementation, the reference image acquisition module 1123 is configured to:
acquire, among the consecutive multi-frame images, at least one frame of image other than the image to be processed; and
perform luminance mapping processing on the at least one frame of image to obtain the reference image; or
acquire an optimized image corresponding to the at least one frame of image as the reference image.
In an optional implementation, the exposure parameters of the image to be processed differ from those of the at least one frame of image.
In an optional implementation, the image to be processed is the i-th frame of the consecutive multi-frame images, and the at least one frame of image includes the (i-m)-th through (i-1)-th frames of the consecutive multi-frame images, where i is a positive integer not less than 2 and m is any positive integer.
In an optional implementation, the image fusion processing module 1124 is configured to:
perform temporal filtering processing on the reference image and the intermediate image.
In an optional implementation, the image fusion processing module 1124 is configured to:
after the fusion processing of the reference image and the intermediate image, adjust the luminance of the fused image according to the above consecutive multi-frame images to obtain the optimized image corresponding to the image to be processed.
In an optional implementation, the to-be-processed image acquisition module 1121 is configured to:
acquire a current RAW image from consecutive multi-frame RAW images; and
perform channel conversion processing on the current RAW image to obtain the image to be processed.
In an optional implementation, the to-be-processed image acquisition module 1121 is configured to:
convert the R channel and B channel of the current RAW image into the G channel; or
convert the current RAW image into a grayscale image.
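The grayscale branch of the channel conversion can be sketched as follows (a hedged illustration on a demosaiced 3-channel input; the Rec.601 weights and the helper name are assumptions, and the R/B-to-G branch would analogously map R and B samples onto an estimated G plane):

```python
import numpy as np

def raw_to_gray(rgb):
    """Collapse a 3-channel image into a single luminance plane, as in the
    grayscale branch above. The Rec.601 weights are an assumption; the
    disclosure does not fix the conversion coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A neutral (r = g = b) input keeps its value, since the weights sum to 1
gray = raw_to_gray(np.full((2, 2, 3), 100.0))
```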
In an optional implementation, the image processing apparatus 1100 is configured in a terminal device, and the terminal device includes an always-on (AON) camera for capturing the above consecutive multi-frame RAW images.
In an optional implementation, the memory 1120 further stores the following program module:
an image recognition application module, configured to perform at least one of target detection, face recognition, and gesture recognition on the above optimized image.
Specific details of each part of the above apparatus 1100 have been described in the method embodiments; for undisclosed details, reference may be made to the corresponding method embodiments, which will not be repeated here.
FIG. 12 shows a schematic architecture diagram of this exemplary embodiment. As shown in FIG. 12, the electronic device is provided with an AON camera and runs an AON camera service, which can perform low-level image processing through an image signal processor, for example executing the image processing method of this exemplary embodiment to process captured original images, obtain corresponding optimized images, and provide the optimized images to an AON software service. The AON software service can perform monitoring, recognition, and other services through a digital signal processor, for example performing face recognition and gesture recognition on the optimized images, obtaining corresponding recognition results, and providing the recognition results to an application service. The application service can run related applications through a main processor and use the above face and gesture recognition results to implement specific interactive instructions, such as locking and unlocking the screen or turning pages of the user interface.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product including program code; when the program product runs on an electronic device, the program code causes the electronic device to perform the steps described in the "Exemplary Methods" section of this specification according to various exemplary embodiments of the present disclosure. In an optional implementation, the program product may be implemented as a portable compact disc read-only memory (CD-ROM) including program code, and may run on an electronic device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
From the description of the above embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above drawings are merely schematic illustrations of the processing included in the methods according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It is readily understood that the processing shown in the above drawings does not indicate or limit the chronological order of such processing. It is also readily understood that such processing may be performed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of a device for action execution are mentioned in the above detailed description, such division is not mandatory. In fact, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided among multiple modules or units.
Those skilled in the art will understand that various aspects of the present disclosure may be implemented as a system, method, or program product. Therefore, various aspects of the present disclosure may be embodied in the following forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software, which may be collectively referred to herein as a "circuit", "module", or "system". Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the technical field not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. An image processing method, characterized by comprising:
acquiring an image to be processed from consecutive multi-frame images;
performing luminance mapping processing on the image to be processed to generate an intermediate image;
acquiring at least one frame of reference image using the consecutive multi-frame images; and
performing fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
2. The method according to claim 1, characterized in that the performing luminance mapping processing on the image to be processed comprises:
performing luminance mapping processing on the image to be processed according to illumination information of the image to be processed.
3. The method according to claim 2, characterized in that the illumination information comprises an illuminance value, and the performing luminance mapping processing on the image to be processed according to the illumination information of the image to be processed comprises:
performing global luminance mapping processing on the image to be processed when the illuminance value of the image to be processed satisfies a first preset condition.
4. The method according to claim 3, characterized in that the method further comprises:
determining the illuminance value of the image to be processed according to exposure parameters of the image to be processed,
the exposure parameters comprising at least one of exposure time, sensitivity, and aperture value.
5. The method according to claim 3, characterized in that the illumination information further comprises a backlight ratio, and the performing luminance mapping processing on the image to be processed according to the illumination information of the image to be processed further comprises:
determining the backlight ratio of the image to be processed when the illuminance value of the image to be processed does not satisfy the first preset condition; and
performing tone mapping processing on the image to be processed when the backlight ratio of the image to be processed satisfies a second preset condition.
6. The method according to claim 5, characterized in that the determining the backlight ratio of the image to be processed comprises:
determining the backlight ratio of the image to be processed according to a luminance histogram of the image to be processed.
7. The method according to claim 5, characterized in that the method further comprises:
using the image to be processed as the intermediate image when the backlight ratio of the image to be processed does not satisfy the second preset condition.
8. The method according to claim 1, characterized in that the acquiring at least one frame of reference image using the consecutive multi-frame images comprises:
acquiring, among the consecutive multi-frame images, at least one frame of image other than the image to be processed; and
performing luminance mapping processing on the at least one frame of image to obtain the reference image; or
acquiring an optimized image corresponding to the at least one frame of image as the reference image.
9. The method according to claim 8, characterized in that exposure parameters of the image to be processed differ from exposure parameters of the at least one frame of image.
10. The method according to claim 8, characterized in that the image to be processed is an i-th frame of the consecutive multi-frame images, and the at least one frame of image comprises the (i-m)-th through (i-1)-th frames of the consecutive multi-frame images, where i is a positive integer not less than 2 and m is any positive integer.
11. The method according to claim 1, characterized in that the performing fusion processing on the reference image and the intermediate image comprises:
performing temporal filtering processing on the reference image and the intermediate image.
12. The method according to claim 1, characterized in that, after the fusion processing is performed on the reference image and the intermediate image, the method further comprises:
adjusting the luminance of the fused image according to the consecutive multi-frame images to obtain the optimized image corresponding to the image to be processed.
13. The method according to claim 1, characterized in that the acquiring the image to be processed from consecutive multi-frame images comprises:
acquiring a current RAW image from consecutive multi-frame RAW images; and
performing channel conversion processing on the current RAW image to obtain the image to be processed.
14. The method according to claim 13, characterized in that the performing channel conversion processing on the current RAW image comprises:
converting an R channel and a B channel of the current RAW image into a G channel; or
converting the current RAW image into a grayscale image.
15. The method according to claim 13, characterized in that the method is applied to a terminal device, the terminal device comprising an always-on camera for capturing the consecutive multi-frame RAW images.
16. The method according to claim 1, characterized in that, after the optimized image corresponding to the image to be processed is obtained, the method further comprises:
performing at least one of target detection, face recognition, and gesture recognition on the optimized image.
  17. An image processing apparatus, comprising a processor;
    wherein the processor is configured to execute the following program modules stored in a memory:
    a to-be-processed image acquisition module, configured to acquire a to-be-processed image from consecutive multiple frames of images;
    an intermediate image generation module, configured to perform luminance mapping processing on the to-be-processed image to generate an intermediate image;
    a reference image acquisition module, configured to acquire at least one frame of reference image by using the consecutive multiple frames of images; and
    an image fusion processing module, configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the to-be-processed image.
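Read together, the four modules of claim 17 form a simple per-frame pipeline. The sketch below is one way to wire them up; the claims leave the concrete operators open, so the gamma-curve luminance mapping, the mean-of-previous-frames reference image, and the fixed-weight fusion are all hypothetical stand-ins chosen for illustration only.

```python
import numpy as np

def process_frame(frames, gamma=0.5, fuse_weight=0.6):
    """Sketch of the four-module pipeline in claim 17.

    `frames` is a list of at least two uint8 grayscale frames; the last
    one plays the role of the to-be-processed image. All operator choices
    here are assumptions, not the patent's specified implementations.
    """
    # 1) To-be-processed image acquisition module: take the latest frame.
    to_be_processed = frames[-1].astype(np.float32) / 255.0

    # 2) Intermediate image generation module: luminance mapping
    #    (a gamma curve with gamma < 1 brightens dark content).
    intermediate = np.power(to_be_processed, gamma)

    # 3) Reference image acquisition module: here, the mean of the
    #    earlier frames in the consecutive sequence.
    reference = np.mean(
        [f.astype(np.float32) / 255.0 for f in frames[:-1]], axis=0)

    # 4) Image fusion processing module: fixed-weight blend of the
    #    intermediate and reference images -> optimized image.
    optimized = fuse_weight * intermediate + (1.0 - fuse_weight) * reference
    return np.clip(optimized * 255.0, 0.0, 255.0).astype(np.uint8)
```

A real apparatus would likely replace step 3 with motion-compensated frame selection and step 4 with content-adaptive weights, but the module boundaries match the claim.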
  18. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 16.
  19. An electronic device, comprising:
    a processor; and
    a memory configured to store executable instructions of the processor;
    wherein the processor is configured to perform the method according to any one of claims 1 to 16 by executing the executable instructions.
PCT/CN2020/138407 2020-12-22 2020-12-22 Image processing method and apparatus, storage medium and electronic device WO2022133749A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080107017.0A CN116457822A (en) 2020-12-22 2020-12-22 Image processing method, device, storage medium and electronic equipment
PCT/CN2020/138407 WO2022133749A1 (en) 2020-12-22 2020-12-22 Image processing method and apparatus, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/138407 WO2022133749A1 (en) 2020-12-22 2020-12-22 Image processing method and apparatus, storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2022133749A1 true WO2022133749A1 (en) 2022-06-30

Family

ID=82158530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138407 WO2022133749A1 (en) 2020-12-22 2020-12-22 Image processing method and apparatus, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN116457822A (en)
WO (1) WO2022133749A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779330A (en) * 2012-06-13 2012-11-14 京东方科技集团股份有限公司 Image reinforcement method, image reinforcement device and display device
US20130314568A1 (en) * 2011-02-18 2013-11-28 DigitalOptics Corporation Europe Limited Dynamic Range Extension by Combining Differently Exposed Hand-Held Device-Acquired Images
CN106570850A (en) * 2016-10-12 2017-04-19 成都西纬科技有限公司 Image fusion method
CN109785423A (en) * 2018-12-28 2019-05-21 广州华多网络科技有限公司 Image light compensation method, device and computer equipment

Also Published As

Publication number Publication date
CN116457822A (en) 2023-07-18

Similar Documents

Publication Publication Date Title
WO2019183813A1 (en) Image capture method and device
CN109671106B (en) Image processing method, device and equipment
WO2020057198A1 (en) Image processing method and device, electronic device and storage medium
WO2019148978A1 (en) Image processing method and apparatus, storage medium and electronic device
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN110766621B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2019120016A1 (en) Image processing method and apparatus, storage medium, and electronic device
KR102566998B1 (en) Apparatus and method for determining image sharpness
WO2022179335A1 (en) Video processing method and apparatus, electronic device, and storage medium
US20170070718A1 (en) Advanced Multi-Band Noise Reduction
US11715184B2 (en) Backwards-compatible high dynamic range (HDR) images
KR20150099302A (en) Electronic device and control method of the same
US9380218B2 (en) Highlight exposure metric and its applications
CN112289279B (en) Screen brightness adjusting method and device, storage medium and electronic equipment
WO2022087973A1 (en) Image processing method and apparatus, computer-readable medium, and electronic device
WO2021237732A1 (en) Image alignment method and apparatus, electronic device, and storage medium
US10187566B2 (en) Method and device for generating images
CN113810603B (en) Point light source image detection method and electronic equipment
CN110047060B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2023160285A9 (en) Video processing method and apparatus
CN113452969B (en) Image processing method and device
CN117274109B (en) Image processing method, noise reduction model training method and electronic equipment
US11521305B2 (en) Image processing method and device, mobile terminal, and storage medium
CN114463191A (en) Image processing method and electronic equipment
WO2022133749A1 (en) Image processing method and apparatus, storage medium and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20966329

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202080107017.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20966329

Country of ref document: EP

Kind code of ref document: A1