CN116457822A - Image processing method, device, storage medium and electronic equipment

Info

Publication number
CN116457822A
CN116457822A
Authority
CN
China
Prior art keywords
image
processed
frame
brightness
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080107017.0A
Other languages
Chinese (zh)
Inventor
罗俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN116457822A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method comprises the following steps: acquiring an image to be processed from continuous multi-frame images (S310); performing brightness mapping processing on the image to be processed to generate an intermediate image (S320); acquiring at least one frame of reference image using the continuous multi-frame images (S330); and performing time-sequence filtering processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed (S340). The method mitigates brightness problems in the image, achieves noise reduction and repair of local information loss, and yields a high-quality optimized image.

Description

Image processing method, device, storage medium and electronic equipment

Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
Background
The brightness of an image refers to how light or dark the image appears, and is an important factor affecting visual perception when viewing the image. When the brightness of the image is unsuitable, such as too high or too low, the image content cannot be fully presented; for example, faces and text in the image become difficult to recognize, which degrades image quality.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby improving luminance problems in an image at least to some extent.
According to a first aspect of the present disclosure, there is provided an image processing method including: acquiring an image to be processed from continuous multi-frame images; performing brightness mapping processing on the image to be processed to generate an intermediate image; acquiring at least one frame of reference image by utilizing the continuous multi-frame images; and carrying out fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including a processor, wherein the processor is configured to execute the following program modules stored in a memory: a to-be-processed image acquisition module, configured to acquire an image to be processed from continuous multi-frame images; an intermediate image generation module, configured to perform brightness mapping processing on the image to be processed to generate an intermediate image; a reference image acquisition module, configured to acquire at least one frame of reference image by using the continuous multi-frame images; and an image fusion processing module, configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image processing method of the first aspect described above and possible implementations thereof via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
On the one hand, performing brightness mapping processing on the image to be processed can improve brightness problems in the image, such as brightness that is too low, too high, or locally unbalanced, while fusing the reference image and the intermediate image can reduce noise and repair local information loss, yielding an optimized image whose content is clearly visible. On the other hand, the scheme can optimize any frame based on the continuous multi-frame images acquired during video shooting or image preview, requires no external information, and has low implementation cost and high practicability.
Drawings
FIG. 1A shows a face image taken in a low-light environment;
FIG. 1B shows a face image taken in a backlit environment;
fig. 2 shows a schematic structural diagram of an electronic device in the present exemplary embodiment;
fig. 3 shows a flowchart of an image processing method in the present exemplary embodiment;
fig. 4 shows a flowchart of a method of acquiring an image to be processed in the present exemplary embodiment;
fig. 5 shows a schematic diagram of conversion of a RAW image into a single-channel image in the present exemplary embodiment;
fig. 6 shows a schematic diagram of a map curve in the present exemplary embodiment;
fig. 7 shows a flowchart of a luminance mapping processing method in the present exemplary embodiment;
fig. 8 is a flowchart showing a method of acquiring a reference image in the present exemplary embodiment;
fig. 9 shows an example diagram of image processing in the present exemplary embodiment;
fig. 10 shows an example diagram of image processing and face recognition in the present exemplary embodiment;
fig. 11 shows a schematic configuration diagram of an image processing apparatus in the present exemplary embodiment;
fig. 12 shows an architecture diagram in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The ambient light conditions and the exposure parameters of the photographing device affect the brightness of the captured image. The ambient light condition has the greater influence: in extreme environments such as low light, backlight, or an extremely strong light source, important information may be lost from the captured image. For example, as shown in fig. 1A, when a person is photographed under weak ambient light, i.e., a low-light environment, the overall brightness of the captured image is low and the face is difficult to recognize; referring to fig. 1B, if a light source is located behind the photographed person, i.e., a backlight environment, the brightness of the face region is low and it is likewise difficult to recognize. Therefore, the brightness of images photographed in extreme environments needs to be improved.
To this end, exemplary embodiments of the present disclosure first provide an image processing method, whose application scenarios include, but are not limited to, the following. In an intelligent interaction scene on a mobile terminal, face and gesture information is acquired, monitored, and detected through an AON (Always ON) camera to realize specific interaction functions, such as automatically activating the display screen when a face is detected and automatically turning pages of the user interface when a page-turning gesture is detected; however, in environments with weak illumination, backlight, or an extremely strong light source, the face and gesture images acquired by the AON camera may be too dark or too bright, which affects detection accuracy; with the image processing method of the present exemplary embodiment, the brightness of the image can be improved to increase the accuracy of detecting faces, gestures, and the like. In a target tracking scene, images are acquired in real time and the target object in them is identified; in the extreme illumination environments described above, the brightness of the target region in the image may be too low or too high, making recognition difficult; likewise, the image processing method of the present exemplary embodiment can improve the image to increase recognition accuracy.
The exemplary embodiments of the present disclosure also provide an electronic device for performing the above-described image processing method. The electronic device includes, but is not limited to, a computer, a smart phone, a tablet computer, a gaming machine, a wearable device, and the like. Generally, an electronic device includes a processor and a memory. The memory is used for storing executable instructions of the processor, and can also store application data, such as image data, game data and the like; the processor is configured to execute the image processing method in the present exemplary embodiment via execution of the executable instructions.
The configuration of the above-described electronic device is exemplarily described below, taking the mobile terminal 200 in fig. 2 as an example. Those skilled in the art will appreciate that, apart from the components intended specifically for mobile use, the configuration in fig. 2 can also be applied to stationary devices.
As shown in fig. 2, the mobile terminal 200 may specifically include: processor 210, internal memory 221, external memory interface 222, USB (Universal Serial Bus) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, indicator 292, motor 293, keys 294, SIM (Subscriber Identity Module) card interface 295, and the like.
Processor 210 may include one or more processing units, for example: an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor, and/or an NPU (Neural-Network Processing Unit), among others.
In some embodiments, processor 210 may include one or more interfaces through which connections are made with other components of mobile terminal 200.
The internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include volatile memory, such as DRAM (Dynamic Random Access Memory) and SRAM (Static Random Access Memory), and non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, UFS (Universal Flash Storage), etc. The processor 210 performs various functional applications and data processing of the mobile terminal 200 by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing files of music, video, etc.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or may be connected to a headset or other electronic device.
The charge management module 240 is configured to receive a charge input from a charger. The charging management module 240 may also supply power to the device through the power management module 241 while charging the battery 242; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. Antenna 1 and antenna 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide solutions for wireless communication including 2G/3G/4G/5G applied on the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions applied on the mobile terminal 200, including WLAN (Wireless Local Area Network, e.g., a Wi-Fi (Wireless Fidelity) network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared), etc.
The mobile terminal 200 may implement a display function through a GPU, a display screen 290, an AP, and the like.
The mobile terminal 200 may implement a photographing function through an ISP, a camera module 291, an encoder, a decoder, a GPU, a display 290, an AP, and the like. The camera module 291 may include various types of cameras, such as an AON camera, a wide-angle camera, a high-definition camera, etc., and the cameras may be disposed at any position of the mobile terminal 200, such as on a side having the display screen 290, forming a front camera, or on an opposite side of the display screen 290, forming a rear camera.
The mobile terminal 200 may implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone interface 274, an AP, and the like.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, etc. to implement different sensing functions.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The motor 293 may generate vibration cues, may also be used for touch vibration feedback, or the like. The keys 294 include a power on key, a volume key, etc.
The mobile terminal 200 may support one or more SIM card interfaces 295 for interfacing with a SIM card for implementing functions such as voice and data communications.
Fig. 3 shows a schematic flow of an image processing method in the present exemplary embodiment, which may include:
step S310, obtaining an image to be processed from continuous multi-frame images;
step S320, performing brightness mapping processing on the image to be processed to generate an intermediate image;
step S330, at least one frame of reference image is obtained by utilizing the continuous multi-frame images;
and step S340, fusion processing is carried out on the reference image and the intermediate image, and an optimized image corresponding to the image to be processed is obtained.
According to the method, on the one hand, performing brightness mapping processing on the image to be processed can improve brightness problems in the image, such as brightness that is too low, too high, or locally unbalanced; fusing the reference image and the intermediate image can reduce noise and repair local information loss, so that an optimized image with clearly visible content is obtained, which facilitates further applications such as face recognition, gesture recognition, and target detection. Furthermore, the scheme improves the robustness of image capture and processing in extreme illumination environments and reduces dependence on the performance of hardware such as cameras and image sensors; for example, images captured by an image sensor with lower light sensitivity can still be optimized into high-quality images, which helps reduce hardware cost. On the other hand, the scheme can optimize any frame based on the continuous multi-frame images acquired during video shooting or image preview, requires no external information, and has low implementation cost and high practicability.
Each step in fig. 3 is specifically described below.
In step S310, an image to be processed is acquired from the continuous multi-frame images.
The continuous multi-frame images can be images continuously acquired by a camera, for example while the camera shoots video or continuously acquires preview images. The image to be processed may be any frame; generally, as the camera acquires images, the method of fig. 3 is executed in real time with the currently acquired frame as the image to be processed, so that every frame is processed.
In an alternative embodiment, referring to FIG. 4, step S310 may include:
Step S410, acquiring a current RAW image from continuous multi-frame RAW images;
step S420, performing channel conversion processing on the current RAW image to obtain an image to be processed.
The RAW image refers to an image stored in the RAW format, generally the original image acquired by the image sensor in the camera. The image sensor collects optical signals through a Bayer filter and converts them into digital signals to obtain the RAW image. Referring to the left part of fig. 5, each pixel in the RAW image has only one of the R, G, B colors, and the pixels are arranged in a Bayer array. In an alternative embodiment, an AON camera may be disposed on the terminal device, and the continuous multi-frame RAW images may be acquired by the AON camera.
In this exemplary embodiment, the currently acquired frame of RAW image is taken as the current RAW image; since its pixels belong to different color channels, channel conversion processing is performed on the current RAW image to obtain the image to be processed. The channel conversion processing unifies the color channels of the pixels and includes, but is not limited to, the following two modes:
1. each pixel point of the current RAW image is converted into any channel of RGB, for example, uniformly converted into an R channel. Generally, since human eyes are sensitive to green, referring to fig. 5, R and B channels of a current RAW image can be converted into G channels, to obtain an image with all G channel pixels. Specifically, the pixel points of the R channel and the B channel may be mapped to be converted into pixel values of the G channel, for example:
P_R * a_1 = P_G, P_B * a_2 = P_G    (1)
where P_R, P_G, and P_B denote the pixel values of the R, G, and B channels, respectively; a_1 and a_2 are the conversion coefficients from the R channel and the B channel to the G channel, respectively, and may be empirical coefficients, e.g., a_1 = 0.299/0.587 ≈ 0.509 (0.299 and 0.587 are the grayscale-conversion coefficients of the R and G channels, respectively) and a_2 = 0.114/0.587 ≈ 0.194 (0.114 is the grayscale-conversion coefficient of the B channel). It can be seen that brightness consistency is taken into account when converting the R and B channels into the G channel, i.e., the brightness level of the same pixel is substantially consistent before and after conversion.
2. The current RAW image is converted into a Gray image, for example, R, G, B channels can be respectively converted into Gray values according to a certain coefficient ratio, or the RAW image can be processed by Demosaic to obtain an RGB image, and then the Gray value (Gray) of each pixel point is calculated by the following formula (2), so that the Gray image is obtained.
Gray = P_R * 0.299 + P_G * 0.587 + P_B * 0.114    (2)
In the current RAW image, 10 bits are required for each pixel, of which 8 bits record the pixel value and 2 bits record the channel information. A single-channel image is obtained through the channel conversion processing (a grayscale image can be regarded as a special single-channel image), and channel information no longer needs to be recorded, so each pixel needs only 8 bits. Compared with an RGB image, the data volume of the single-channel image is greatly reduced and the bit width for recording channel information is saved, which reduces the amount of data in subsequent processing and facilitates real-time optimization. Moreover, the single-channel image can still characterize the brightness of the image to be processed, so the information it carries is sufficient.
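By way of illustration only, the following minimal numpy sketch shows how a RAW frame might be collapsed into a single-channel image using formulas (1) and (2); the RGGB Bayer layout, the function names, and the float handling are assumptions and are not taken from the disclosure.

```python
import numpy as np

# Empirical coefficients from formula (1): map R/B pixel values onto the G channel.
A1 = 0.299 / 0.587   # R -> G, about 0.509
A2 = 0.114 / 0.587   # B -> G, about 0.194

def raw_to_g_channel(raw: np.ndarray) -> np.ndarray:
    """Convert a Bayer RAW frame (assumed RGGB layout) into an all-G single-channel image."""
    out = raw.astype(np.float32).copy()
    out[0::2, 0::2] *= A1          # R positions (top-left of each 2x2 Bayer cell)
    out[1::2, 1::2] *= A2          # B positions (bottom-right of each 2x2 Bayer cell)
    # G positions are left unchanged.
    return out

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Formula (2): gray value from a demosaiced RGB image (H x W x 3)."""
    return (rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114).astype(np.float32)
```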
With continued reference to fig. 3, in step S320, the luminance mapping process is performed on the image to be processed, and an intermediate image is generated.
The brightness mapping processing refers to mapping the brightness values of pixels in the image to be processed to new values. The brightness values of all pixels can be adjusted in the same direction, such as uniformly increased or uniformly decreased, and the brightness change of different pixels can be the same or different; alternatively, the brightness values of different pixels can be adjusted in different directions, such as reducing the brightness of over-bright parts, increasing the brightness of over-dark parts, adjusting the gray levels of the image, and the like.
In this exemplary embodiment, a mapping relationship between luminance values before and after mapping may be preconfigured, where the mapping relationship may be related to a luminance level or a luminance distribution condition of an image to be processed, and then luminance mapping processing is implemented according to the mapping relationship, and an intermediate image is obtained after processing.
In an alternative embodiment, step S320 may include:
and carrying out brightness mapping processing on the image to be processed according to the illumination information of the image to be processed.
The illumination information is information representing the ambient illumination condition under which the image to be processed was captured, and may include, for example, an illuminance value and a backlight ratio. The illuminance value is a measure of the amount of light received per unit area of the photographed scene. The backlight ratio is the proportion of the image to be processed occupied by the backlit region. The illumination information can reflect whether the image to be processed has illumination defects, such as overall over-illumination, overall under-illumination, or an unbalanced local illumination distribution, so that brightness mapping processing can then be performed on the image in a targeted manner.
For example, the illuminance value of the image to be processed may be determined according to its exposure parameters. The exposure parameters include at least one of exposure time (Exptime), sensitivity (ISO), and aperture value (F value). Generally, the exposure parameters are recorded when the image to be processed is captured, and for a camera on a smartphone the aperture value is usually fixed. The illuminance value is estimated from the exposure parameters; the more complete the acquired exposure parameters, the more accurate the estimated illuminance value. For example, refer to the following formula (3):
Lumi = a_3 * F^2 / (Exptime * ISO)    (3)
where Lumi denotes the illuminance value in lux; a_3 denotes an empirical coefficient, for example 12.4, which can be adjusted according to the actual shooting scene or camera performance; F is the aperture value, i.e., the ratio of the lens focal length to the effective lens diameter; Exptime is the exposure time in seconds; and ISO is the sensitivity.
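A direct sketch of formula (3) follows; the default coefficient 12.4 is the empirical value mentioned above, and the example numbers in the comment are assumed for illustration.

```python
def estimate_illuminance(f_number: float, exptime_s: float, iso: float, a3: float = 12.4) -> float:
    """Estimate scene illuminance (lux) from exposure parameters per formula (3)."""
    return a3 * (f_number ** 2) / (exptime_s * iso)

# Example: f/2.0, 1/30 s exposure, ISO 800 -> about 1.9 lux, i.e. a dim scene.
# estimate_illuminance(2.0, 1 / 30, 800)
```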
The illuminance value of the image to be processed reflects its global illumination condition: when the illuminance value is too low, the image to be processed is underexposed as a whole; when it is too high, the image is overexposed as a whole. Thus, in an alternative embodiment, step S320 may include:
and when the illuminance value of the image to be processed meets a first preset condition, performing global brightness mapping processing on the image to be processed.
The first preset condition may include: less than a first threshold, or greater than a second threshold. The first threshold is a threshold for measuring whether underexposure exists, the second threshold is a threshold for measuring whether overexposure exists, the two thresholds can be determined empirically, and can be adjusted according to actual scenes, for example, the first threshold and the second threshold are appropriately increased in daytime and are appropriately decreased in nighttime. When the illuminance value of the image to be processed satisfies the first preset condition, there is a case of underexposure or overexposure, and thus it is subjected to global brightness map processing (Global Luminance Mapping). The method specifically comprises the following steps:
And when the illuminance value of the image to be processed is smaller than the first threshold value, performing global brightness upward mapping processing on the image to be processed, namely global brightness enhancement.
And when the illuminance value of the image to be processed is larger than the second threshold value, performing global brightness downward mapping processing on the image to be processed, namely global brightness reduction.
Taking global brightness enhancement as an example, the brightness values of different pixels in the image to be processed can be mapped to higher brightness values according to a preset mapping curve. The mapping curve can be a first-order curve, representing a linear mapping relation, or a second-order or other nonlinear curve, representing a nonlinear mapping relation. It should be noted that, in the present exemplary embodiment, a plurality of mapping curves may be configured, corresponding to different mapping intensities; the higher the mapping intensity, the more pronounced the brightening. During processing, which mapping curve to adopt and with what mapping intensity can be determined according to the illuminance value of the image to be processed or the actual scene requirement; for example, the lower the illuminance value, the higher the mapping intensity, and a mapping curve with a larger overall slope is adopted.
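A minimal sketch of the global brightness mapping step is given below. The gamma-style curve, the thresholds, and the intensity schedule are assumptions standing in for the preset mapping curves described above; they are not the disclosure's actual curves.

```python
import numpy as np

def global_luminance_map(img: np.ndarray, lumi: float, t1: float = 10.0, t2: float = 1000.0) -> np.ndarray:
    """Map every pixel through one curve; the direction depends on the illuminance value."""
    x = img.astype(np.float32) / 255.0
    if lumi < t1:                               # underexposed: map brightness upward
        gamma = 0.5 if lumi < t1 / 2 else 0.7   # lower illuminance -> stronger mapping
    elif lumi > t2:                             # overexposed: map brightness downward
        gamma = 1.8
    else:
        return img                              # first preset condition not met
    return np.clip((x ** gamma) * 255.0, 0, 255).astype(img.dtype)
```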
In an alternative embodiment, step S320 may further include:
when the illuminance value of the image to be processed does not meet the first preset condition, determining the backlight proportion of the image to be processed;
And when the backlight proportion of the image to be processed meets a second preset condition, performing tone mapping processing on the image to be processed.
That the illuminance value of the image to be processed does not meet the first preset condition indicates that the global illumination of the image is appropriate; the backlight proportion of the image is then calculated to determine whether a locally unsuitable condition exists.
In an alternative embodiment, the backlight proportion of the image to be processed may be determined from its brightness histogram, which may specifically include: setting a plurality of brightness steps and counting the proportion of pixels in each step to form the brightness histogram of the image to be processed; if there are at least two brightness steps whose brightness difference reaches a set value (the set value can be an empirical value or can be determined from the specific scene; for example, it is generally larger when a significant light source exists in the scene, and smaller under natural light or without a significant light source), the lower brightness step is determined to be a backlit part; the histogram proportions of all backlit parts are then summed to obtain the proportion of the backlit parts in the image to be processed, namely the backlight proportion.
In an alternative embodiment, the highest brightness value in the image to be processed may also be obtained and a brightness threshold determined from it, for example by multiplying the highest brightness value by a fixed coefficient smaller than 1; the pixels below the brightness threshold are selected and the connected regions among them are extracted, i.e., isolated pixels are filtered out; the proportion of the connected regions in the image to be processed is taken as the backlight proportion.
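The histogram-based estimate of the backlight proportion could be sketched as follows; the number of brightness steps and the brightness-gap set value are illustrative parameters, not values from the disclosure.

```python
import numpy as np

def backlight_ratio(img: np.ndarray, bins: int = 16, gap: int = 96) -> float:
    """Estimate the fraction of the frame belonging to a backlit (dark) region.

    Pixels are bucketed into `bins` brightness steps; occupied steps that are darker
    than the brightest occupied step by at least `gap` grey levels are counted as backlit.
    """
    hist, edges = np.histogram(img, bins=bins, range=(0, 256))
    occupied = np.nonzero(hist)[0]
    if occupied.size == 0:
        return 0.0
    brightest_centre = (edges[occupied[-1]] + edges[occupied[-1] + 1]) / 2
    backlit_pixels = 0
    for b in occupied:
        centre = (edges[b] + edges[b + 1]) / 2
        if brightest_centre - centre >= gap:
            backlit_pixels += hist[b]
    return float(backlit_pixels) / img.size
```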
The second preset condition may include: the backlight ratio is greater than a third threshold. It should be noted that the above calculation of the backlight ratio estimates the backlight phenomenon that may exist in the image to be processed; when the backlight ratio is greater than the third threshold, the probability that a backlight phenomenon exists can be considered high. The third threshold is a threshold for measuring whether the backlight phenomenon exists and can be determined from experience or the actual scene. In this case, tone mapping processing (Tone Mapping) is performed on the image to be processed. The tone mapping processing still essentially maps luminance; unlike the global brightness mapping processing described above, however, it can change the luminance range or the luminance-level distribution of the image, and the brightness adjustment direction may differ from pixel to pixel. For example, in the global brightness mapping processing, brightness is adjusted in the same direction for all pixels of the whole image, such as an overall increase or an overall decrease, so as to adjust the overall brightness level; in the tone mapping processing, different pixels of the image can be adjusted in different directions, for example a brightness down-mapping is applied to brighter parts and a brightness up-mapping to darker parts, so as to adjust the brightness distribution.
In the present exemplary embodiment, the tone mapping processing may be implemented by a mapping curve. Fig. 6 shows mapping curves used for tone mapping, where the abscissa is the luminance value before mapping and the ordinate is the luminance value after mapping; curves A, B, and C are mapping curves at different mapping intensities, with curve C having the highest intensity and curve A the lowest. After the luminance values are mapped through such a curve, the luminance distribution of the image to be processed is mapped into a smaller range; the higher the mapping intensity, the smaller the luminance range after mapping, which improves the problem of local invisibility caused by extremely uneven local luminance distributions. In general, the higher the backlight ratio, or the larger the difference between the backlit part and the high-luminance part, the higher the mapping intensity employed. The mapping curve used for tone mapping thus differs from the one used for the global brightness mapping described above: the former is generally a nonlinear curve in which one segment has a slope greater than 45 degrees (the brightness up-mapping part, generally the lower-luminance segment) and another segment has a slope less than 45 degrees (the brightness down-mapping part, generally the higher-luminance segment); the latter may be linear or nonlinear, with the slope of the whole curve either greater than 45 degrees or less than 45 degrees.
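As one possible instance of the nonlinear curves in fig. 6, a Reinhard-style compressive curve is sketched below; it is chosen purely for illustration and is not asserted to be the curve used by the disclosure.

```python
import numpy as np

def tone_map(img: np.ndarray, strength: float = 0.3) -> np.ndarray:
    """Compressive tone curve: slope above 45 degrees for dark pixels, below it for bright ones.

    y = x * (1 + k) / (x + k) on normalized brightness; a smaller `strength` (k) gives a
    higher mapping intensity, squeezing the brightness distribution into a narrower range.
    """
    x = img.astype(np.float32) / 255.0
    k = strength
    y = x * (1.0 + k) / (x + k)
    return np.clip(y * 255.0, 0, 255).astype(img.dtype)
```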
In an alternative embodiment, when the backlight proportion of the image to be processed does not meet the second preset condition, it is indicated that the global illumination condition and the local illumination condition of the image to be processed are good, and the image to be processed can be used as the intermediate image without performing brightness mapping processing.
Fig. 7 shows a schematic flow of the luminance mapping process, including:
step S710, determining an illuminance value Lumi of the image to be processed;
step S720, comparing the illuminance value with a first threshold T1 and a second threshold T2; when Lumi < T1 or Lumi > T2, determining that the first preset condition is satisfied, and performing step S730; otherwise, executing step S740;
step S730, performing global brightness mapping processing on the image to be processed;
step S740, determining a backlight ratio (BL ratio) of the image to be processed;
step S750, comparing the backlight proportion with a third threshold T3; when BL ratio > T3, it is determined that the second preset condition is satisfied, and step S760 is executed; otherwise, the image to be processed is left unprocessed and the flow proceeds to step S770;
step S760, tone mapping processing is performed on the image to be processed;
step S770, obtaining an intermediate image.
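Putting the pieces together, the decision flow of fig. 7 could be sketched as below, reusing the illustrative helpers defined earlier; the threshold values and the exposure-parameter keys are placeholders, not values from the disclosure.

```python
def luminance_mapping_pipeline(img, exposure, t1=10.0, t2=1000.0, t3=0.25):
    """Decision flow of fig. 7 built from the earlier sketches (thresholds are illustrative)."""
    lumi = estimate_illuminance(exposure["f"], exposure["exptime"], exposure["iso"])
    if lumi < t1 or lumi > t2:            # first preset condition met: global brightness problem
        return global_luminance_map(img, lumi, t1, t2)
    if backlight_ratio(img) > t3:         # second preset condition met: local (backlight) problem
        return tone_map(img)
    return img                            # no mapping needed; the image serves as the intermediate image
```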
By the above method, the brightness of the image to be processed is improved on the global and local levels, and the image information missing due to the brightness problem can be recovered to a certain extent in the obtained intermediate image.
With continued reference to fig. 3, in step S330, at least one frame of reference image is acquired using the continuous multi-frame images described above.
The reference image is one or more frames that have temporal continuity with the image to be processed and can form a complement to its image information.
In an alternative embodiment, referring to fig. 8, step S330 may include:
step S810, at least one frame of image except the image to be processed is acquired from the continuous multi-frame images;
step S820, performing brightness mapping processing on the at least one frame of image to obtain a reference image;
step S830, an optimized image corresponding to the at least one frame image is obtained as a reference image.
Any one or more frames other than the image to be processed can be selected from the continuous multi-frame images; generally, the more frames are selected and the closer they are in time to the image to be processed, the better the optimization effect. For example, if the image to be processed is the i-th frame of the continuous multi-frame images, with i being a positive integer not less than 2, the (i-m)-th to (i-1)-th frames may be selected, i.e., the m consecutive frames preceding the image to be processed are used for subsequent optimization, where m is any positive integer. The value of m can be determined by combining experience, actual demand, and computational capability.
Of steps S820 and S830, only one needs to be performed. In step S820, the brightness mapping processing of the at least one frame of image may refer to the brightness mapping processing performed on the image to be processed in step S320 and fig. 7. In step S830, the optimized image corresponding to the at least one frame of image may be an image obtained by optimizing that image with the method flow of fig. 3; for example, each frame acquired by the camera in real time may be optimized with the flow of fig. 3 to obtain its corresponding optimized image, which can then serve as a reference image when optimizing the next frame.
In an alternative embodiment, the exposure parameters of the image to be processed may differ from those of the at least one frame of image; for example, any one of exposure time, sensitivity, and aperture value may be different. Thus, the image to be processed or the intermediate image can complement the at least one frame of image or the reference image in terms of exposure and brightness information. Further, the device may control the camera to collect the continuous multi-frame images with different exposure parameters, for example gradually increasing the exposure time or the sensitivity during collection, so that the exposure parameters differ between frames and the information contained in the continuous multi-frame images is maximized, forming more effective information complementation.
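A small buffer of previously optimized frames is one way to realize the "frames i-m to i-1" selection described above; this is an illustrative sketch, and m = 3 is an arbitrary choice.

```python
from collections import deque

class ReferenceBuffer:
    """Holds the last m optimized (or luminance-mapped) frames as reference images."""

    def __init__(self, m: int = 3):
        self._frames = deque(maxlen=m)

    def references(self):
        return list(self._frames)             # frames i-m .. i-1 for the current frame i

    def push(self, optimized_frame):
        self._frames.append(optimized_frame)  # reused when optimizing frame i+1
```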
With continued reference to fig. 3, in step S340, fusion processing is performed on the reference image and the intermediate image, so as to obtain an optimized image corresponding to the image to be processed.
The reference image and the intermediate image are close in time, yet their image information differs slightly, so fusing the two can repair detail information in the image. For example, the image frequencies in the two images can be scanned and compared, a region in the reference image whose image frequency is higher than that of the intermediate image can be selected, and that region of the reference image can be fused with the intermediate image.
In the fusion process, the pixel points at the same position can be weighted, for example, the weight is determined according to the image frequency of the pixel points in the two images, and the weighted fusion is performed.
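A possible per-pixel weighted fusion is sketched below, using local gradient energy as a stand-in for the "image frequency" mentioned above; this proxy and the weighting scheme are assumptions made for illustration.

```python
import numpy as np

def fuse(intermediate: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Weighted per-pixel fusion of one reference frame with the intermediate image."""
    def detail(img):
        gy, gx = np.gradient(img.astype(np.float32))
        return np.abs(gx) + np.abs(gy) + 1e-6     # local detail as a frequency proxy
    w_ref = detail(reference)
    w_int = detail(intermediate)
    w = w_ref / (w_ref + w_int)                   # more weight where the reference has more detail
    return (w * reference + (1.0 - w) * intermediate).astype(intermediate.dtype)
```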
In an alternative embodiment, step S340 may include:
and performing time sequence filtering processing on the reference image and the intermediate image.
Specifically, the reference image and the intermediate image can be arranged into an image sequence according to the time stamps of the reference image and the intermediate image, then the image information in the image sequence is converted into a time sequence signal, and the time sequence signal is filtered, for example, various modes such as Gaussian filtering, mean filtering and the like can be adopted, so that further optimization effects such as noise reduction and the like can be realized.
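The time-sequence filtering could, for instance, be a per-pixel Gaussian weighting over the timestamp-ordered frames; the weighting scheme below is one illustrative choice, not the disclosed filter.

```python
import numpy as np

def temporal_filter(frames_by_time: list, sigma: float = 1.0) -> np.ndarray:
    """Per-pixel temporal Gaussian filtering over reference frames plus the intermediate image.

    `frames_by_time` is assumed to be ordered by timestamp with the intermediate image last;
    newer frames get larger weights so the result stays anchored to the current frame.
    """
    stack = np.stack([f.astype(np.float32) for f in frames_by_time], axis=0)
    n = stack.shape[0]
    t = np.arange(n, dtype=np.float32)
    weights = np.exp(-((t - (n - 1)) ** 2) / (2 * sigma ** 2))   # peak at the newest frame
    weights /= weights.sum()
    return np.tensordot(weights, stack, axes=1).astype(frames_by_time[-1].dtype)
```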
In an optional implementation, after the fusion of the reference image and the intermediate image, brightness adjustment may be performed on the fused image according to the continuous multi-frame images to obtain the optimized image corresponding to the image to be processed. Since the intermediate image has undergone brightness mapping processing, its brightness may differ noticeably from that of the other frames, causing brightness jumps in the video picture. Therefore, the fused image can be brightness-adjusted according to the brightness of the other frames; for example, the same region is selected in another frame and in the fused image, and an overall brightness adjustment is applied to the fused image based on the brightness difference of that region in the two images, ensuring brightness consistency between consecutive frames. Note that, unlike the brightness mapping processing in step S320, the brightness adjustment performed here is typically a fine adjustment or a brightness smoothing.
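The final brightness fine adjustment might look like the following sketch, which gently scales the fused frame toward a neighbouring frame's brightness over a shared region; the gain clamp is an assumed safeguard to keep this a fine adjustment rather than another luminance mapping.

```python
import numpy as np

def match_brightness(fused: np.ndarray, neighbour: np.ndarray, region=None, max_gain=1.2) -> np.ndarray:
    """Scale the fused frame so its mean brightness in `region` matches the neighbouring frame."""
    if region is None:
        region = (slice(None), slice(None))       # default: compare over the whole frame
    gain = (np.mean(neighbour[region]) + 1e-6) / (np.mean(fused[region]) + 1e-6)
    gain = float(np.clip(gain, 1.0 / max_gain, max_gain))
    return np.clip(fused.astype(np.float32) * gain, 0, 255).astype(fused.dtype)
```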
Fig. 9 shows an example of effects of the optimized image to be processed, in which the upper row is the image to be processed and the lower row is the corresponding optimized image. It can be seen that the brightness of the face or the gesture in the image to be processed is low, so that the face or the gesture cannot be seen clearly, and clear face or gesture information can be obtained after optimization.
In an optional implementation manner, after obtaining the optimized image corresponding to the image to be processed, at least one of target detection, face recognition and gesture recognition may be performed on the optimized image. Thereby, more accurate detection or recognition results can be obtained. Fig. 10 shows an example diagram of face recognition on an optimized image, and it can be seen that, after the optimization, the face part is clearly visible, so that the area where the face is located and the facial feature points are accurately detected.
The image processing method of this exemplary embodiment was applied to optimize a data set containing 845 face and gesture images, most of which were captured at illuminance values of 3-15 lux with the face or gesture 15-40 cm from the camera, i.e., in weak-illumination and backlight environments.
Testing face and gesture recognition on the data set without optimization gave a precision of 95.2%, a recall of 16.7%, and an F1 value (an evaluation metric for algorithm models in the machine learning field, computed as the harmonic mean of precision and recall) of 28%;
testing face and gesture recognition on the optimized data set gave a precision of 99.1%, a recall of 82.3%, and an F1 value of 90%.
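For reference, the F1 value is the harmonic mean of precision and recall, and the reported figures are consistent with it:

```latex
F1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},\quad
\frac{2 \times 0.952 \times 0.167}{0.952 + 0.167} \approx 0.28,\quad
\frac{2 \times 0.991 \times 0.823}{0.991 + 0.823} \approx 0.90 .
```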
It can be seen that the image processing method of the present exemplary embodiment has a significant improvement effect on the recognition algorithm in the low-light environment and the backlight environment. Therefore, even in an extreme illumination environment, a result similar to that in a normal illumination environment can be obtained, the robustness of the identification algorithm is increased, and the application scene of the identification algorithm is widened.
Exemplary embodiments of the present disclosure also provide an image processing apparatus. As shown with reference to fig. 11, the image processing apparatus 1100 may include a processor 1110 and a memory 1120. The memory 1120 stores the following program modules:
a to-be-processed image acquisition module 1121, configured to acquire an to-be-processed image from a continuous multi-frame image;
an intermediate image generating module 1122, configured to perform brightness mapping processing on an image to be processed, and generate an intermediate image;
a reference image acquisition module 1123, configured to acquire at least one frame of reference image using the continuous multi-frame images;
the image fusion processing module 1124 is configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed;
The processor 1110 may be configured to execute the above program modules.
In an alternative embodiment, the intermediate image generation module 1122 is configured to:
And carrying out brightness mapping processing on the image to be processed according to the illumination information of the image to be processed.
In an alternative embodiment, the illumination information comprises an illumination value; an intermediate image generation module 1122 configured to:
and when the illuminance value of the image to be processed meets a first preset condition, performing global brightness mapping processing on the image to be processed.
In an alternative embodiment, the intermediate image generation module 1122 is configured to:
determining the illuminance value of the image to be processed according to the exposure parameters of the image to be processed; the exposure parameters include at least one of exposure time, sensitivity, aperture value.
In an alternative embodiment, the illumination information further includes a backlight ratio; an intermediate image generation module 1122 configured to:
when the illuminance value of the image to be processed does not meet the first preset condition, determining the backlight proportion of the image to be processed;
and when the backlight proportion of the image to be processed meets a second preset condition, performing tone mapping processing on the image to be processed.
In an alternative embodiment, the intermediate image generation module 1122 is configured to:
and determining the backlight proportion of the image to be processed according to the brightness histogram of the image to be processed.
In an alternative embodiment, the intermediate image generation module 1122 is configured to:
and when the backlight proportion of the image to be processed does not meet the second preset condition, taking the image to be processed as an intermediate image.
In an alternative embodiment, the reference image acquisition module 1123 is configured to:
acquiring at least one frame of image except an image to be processed in continuous multi-frame images;
performing brightness mapping processing on the at least one frame of image to obtain a reference image; or alternatively
And obtaining an optimized image corresponding to the at least one frame of image, and taking the optimized image as a reference image.
In an alternative embodiment, the exposure parameters of the image to be processed are different from the exposure parameters of the at least one frame of image.
In an alternative embodiment, the image to be processed is an i-th frame image of the continuous multi-frame images; the at least one frame of image comprises an ith-m frame image to an ith-1 frame image in continuous multi-frame images; i is a positive integer not less than 2, and m is any positive integer.
In an alternative embodiment, image fusion processing module 1124 is configured to:
and performing time sequence filtering processing on the reference image and the intermediate image.
In an alternative embodiment, image fusion processing module 1124 is configured to:
And after the fusion processing is carried out on the reference image and the intermediate image, carrying out brightness adjustment on the fused image according to the continuous multi-frame image to obtain an optimized image corresponding to the image to be processed.
In an alternative embodiment, the image acquisition module to be processed 1121 is configured to:
acquiring a current RAW image from continuous multi-frame RAW images;
and performing channel conversion processing on the current RAW image to obtain an image to be processed.
In an alternative embodiment, the image acquisition module to be processed 1121 is configured to:
converting an R channel and a B channel of a current RAW image into a G channel; or alternatively
The current RAW image is converted into a gray image.
In an alternative embodiment, the image processing apparatus 1100 is configured in a terminal device, where the terminal device includes an AON camera, and is configured to acquire the continuous multi-frame RAW image.
In an alternative embodiment, memory 1120 also includes the following program modules:
and the image recognition application module is used for performing at least one of target detection, face recognition and gesture recognition on the optimized image.
The specific details of the foregoing parts of the apparatus 1100 are described in the method part embodiments, and the details that are not disclosed may refer to the embodiment contents of the method part, so that they are not described in detail.
Fig. 12 shows an architecture diagram of the present exemplary embodiment. As shown in fig. 12, an AON camera is configured on the electronic device and runs an AON camera service; the image signal processor can implement low-level processing of images, for example performing the image processing method of the present exemplary embodiment to process the collected original images into corresponding optimized images, which are provided to the AON software service. The AON software service can execute services such as monitoring and recognition through the digital signal processor, for example performing face recognition and gesture recognition on the optimized image, obtaining the corresponding recognition results, and providing them to the application program service. The application program service can run related application programs through the main processor and use the face and gesture recognition results to realize specific interaction instructions, such as locking/unlocking the screen and turning pages of the user interface.
Exemplary embodiments of the present disclosure also provide a computer readable storage medium, which may be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the above section of the "exemplary method" when the program product is run on the electronic device. In an alternative embodiment, the program product may be implemented as a portable compact disc read only memory (CD-ROM) and comprises program code and may run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, a method, or a program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

  1. An image processing method, comprising:
    acquiring an image to be processed from continuous multi-frame images;
    performing brightness mapping processing on the image to be processed to generate an intermediate image;
    acquiring at least one frame of reference image by utilizing the continuous multi-frame images;
    and carrying out fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
  2. The method according to claim 1, wherein the performing brightness mapping processing on the image to be processed includes:
    carrying out brightness mapping processing on the image to be processed according to the illumination information of the image to be processed.
  3. The method of claim 2, wherein the illumination information comprises an illuminance value; and the performing brightness mapping processing on the image to be processed according to the illumination information of the image to be processed includes:
    when the illuminance value of the image to be processed meets a first preset condition, performing global brightness mapping processing on the image to be processed.
  4. The method according to claim 3, wherein the method further comprises:
    determining the illuminance value of the image to be processed according to the exposure parameters of the image to be processed;
    the exposure parameters include at least one of an exposure time, a sensitivity, and an aperture value.
  5. The method of claim 3, wherein the illumination information further comprises a backlight proportion; and the performing brightness mapping processing on the image to be processed according to the illumination information of the image to be processed further includes:
    when the illuminance value of the image to be processed does not meet the first preset condition, determining the backlight proportion of the image to be processed;
    and when the backlight proportion of the image to be processed meets a second preset condition, performing tone mapping processing on the image to be processed.
  6. The method of claim 5, wherein the determining the backlight scale of the image to be processed comprises:
    determining the backlight proportion of the image to be processed according to the brightness histogram of the image to be processed.
  7. The method of claim 5, wherein the method further comprises:
    when the backlight proportion of the image to be processed does not meet the second preset condition, taking the image to be processed as the intermediate image.
  8. The method of claim 1, wherein the acquiring at least one frame of reference image by utilizing the continuous multi-frame images comprises:
    acquiring at least one frame of image except the image to be processed from the continuous multi-frame images;
    performing brightness mapping processing on the at least one frame of image to obtain the reference image; or
    acquiring an optimized image corresponding to the at least one frame of image as the reference image.
  9. The method of claim 8, wherein the exposure parameters of the image to be processed are different from the exposure parameters of the at least one frame of image.
  10. The method according to claim 8, wherein the image to be processed is an i-th frame image of the continuous multi-frame images; the at least one frame of image comprises an (i-m)-th frame image to an (i-1)-th frame image in the continuous multi-frame images; i is a positive integer not less than 2, and m is any positive integer.
  11. The method of claim 1, wherein the fusing the reference image and the intermediate image comprises:
    carrying out time sequence filtering processing on the reference image and the intermediate image.
  12. The method according to claim 1, wherein after the fusion processing of the reference image and the intermediate image, the method further comprises:
    performing brightness adjustment on the fused image according to the continuous multi-frame images, so as to obtain the optimized image corresponding to the image to be processed.
  13. The method of claim 1, wherein the acquiring the image to be processed from the successive multi-frame images comprises:
    acquiring a current RAW image from continuous multi-frame RAW images;
    and performing channel conversion processing on the current RAW image to obtain the image to be processed.
  14. The method of claim 13, wherein the performing channel conversion processing on the current RAW image comprises:
    converting an R channel and a B channel of the current RAW image into a G channel; or
    converting the current RAW image into a grayscale image.
  15. The method of claim 13, wherein the method is applied to a terminal device comprising an always-on camera configured to acquire the continuous multi-frame RAW images.
  16. The method according to claim 1, wherein after obtaining the optimized image corresponding to the image to be processed, the method further comprises:
    performing at least one of target detection, face recognition, and gesture recognition on the optimized image.
  17. An image processing apparatus, comprising a processor;
    wherein the processor is configured to execute the following program modules stored in a memory:
    a to-be-processed image acquisition module, configured to acquire an image to be processed from continuous multi-frame images;
    an intermediate image generation module, configured to perform brightness mapping processing on the image to be processed to generate an intermediate image;
    a reference image acquisition module, configured to acquire at least one frame of reference image by utilizing the continuous multi-frame images;
    and an image fusion processing module, configured to perform fusion processing on the reference image and the intermediate image to obtain an optimized image corresponding to the image to be processed.
  18. A computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 16.
  19. An electronic device, comprising:
    a processor; and
    a memory for storing executable instructions of the processor;
    wherein the processor is configured to perform the method of any one of claims 1 to 16 via execution of the executable instructions.
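
For illustration only, the following is a minimal Python sketch of the pipeline recited in claims 1 to 14, assuming an RGGB Bayer layout for the RAW frames, an exposure-value-based illuminance estimate, a gamma curve as the global brightness mapping, a Reinhard-style curve as the tone mapping, and a recursive filter as the time sequence filtering; the function names, thresholds, and constants are hypothetical and are not taken from the patent.

import numpy as np

def raw_to_luma(raw):
    # Claims 13-14 (sketch): collapse a Bayer RAW frame into a single channel by
    # averaging the two green samples of each 2x2 cell; an RGGB layout is assumed.
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    return (g1 + g2) / 2.0

def estimate_illuminance(exposure_time, iso, aperture):
    # Claim 4 (sketch): rough scene illuminance from exposure parameters via the
    # exposure-value relation; the 2.5 lux-per-EV constant is an assumption.
    ev100 = np.log2((aperture ** 2) / exposure_time) - np.log2(iso / 100.0)
    return 2.5 * (2.0 ** ev100)

def backlight_proportion(img):
    # Claim 6 (sketch): share of pixels in the darkest and brightest bins of a
    # 10-bin brightness histogram computed over values normalized to [0, 1].
    hist, _ = np.histogram(img, bins=10, range=(0.0, 1.0))
    frac = hist / img.size
    return frac[0] + frac[-1]

def global_brightness_map(img, gamma=0.6):
    # Claim 3 (sketch): global brightness mapping, here a simple gamma lift.
    return np.clip(img, 0.0, 1.0) ** gamma

def tone_map(img):
    # Claim 5 (sketch): tone mapping, here a Reinhard-style compression curve.
    return img / (1.0 + img)

def luminance_mapping(img, exposure_time, iso, aperture,
                      lux_threshold=50.0, backlight_threshold=0.4):
    # Claims 2-7 (sketch): choose the mapping from the illumination information.
    # Both thresholds stand in for the "preset conditions" and are assumptions.
    if estimate_illuminance(exposure_time, iso, aperture) < lux_threshold:
        return global_brightness_map(img)            # low light: global mapping
    if backlight_proportion(img) > backlight_threshold:
        return tone_map(img)                         # strong backlight: tone mapping
    return img                                       # claim 7: keep the frame as-is

def temporal_fusion(reference_frames, intermediate, alpha=0.2):
    # Claims 1 and 11-12 (sketch): recursive time sequence filtering of the
    # intermediate image against earlier reference frames, clipped to [0, 1].
    fused = intermediate.copy()
    for ref in reference_frames:
        fused = (1.0 - alpha) * fused + alpha * ref
    return np.clip(fused, 0.0, 1.0)

# Hypothetical usage on one frame of a RAW sequence (claim 15 setting):
#   luma = raw_to_luma(raw_frame) / 1023.0           # assuming 10-bit RAW samples
#   to_process = luminance_mapping(luma, 1 / 30, 800, 2.0)
#   optimized = temporal_fusion(previous_optimized_frames, to_process)

The recursive filter is chosen here only because it needs a single pass over the reference frames; any other temporal fusion, such as weighted averaging of aligned frames, would fit the same interface.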
CN202080107017.0A 2020-12-22 2020-12-22 Image processing method, device, storage medium and electronic equipment Pending CN116457822A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/138407 WO2022133749A1 (en) 2020-12-22 2020-12-22 Image processing method and apparatus, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116457822A true CN116457822A (en) 2023-07-18

Family

ID=82158530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080107017.0A Pending CN116457822A (en) 2020-12-22 2020-12-22 Image processing method, device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN116457822A (en)
WO (1) WO2022133749A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012110894A1 (en) * 2011-02-18 2012-08-23 DigitalOptics Corporation Europe Limited Dynamic range extension by combining differently exposed hand-held device-acquired images
CN102779330B (en) * 2012-06-13 2014-08-06 京东方科技集团股份有限公司 Image reinforcement method, image reinforcement device and display device
CN106570850B (en) * 2016-10-12 2019-06-04 成都西纬科技有限公司 A kind of image interfusion method
CN109785423B (en) * 2018-12-28 2023-10-03 广州方硅信息技术有限公司 Image light supplementing method and device and computer equipment

Also Published As

Publication number Publication date
WO2022133749A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN109218628B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112150399B (en) Image enhancement method based on wide dynamic range and electronic equipment
CN109671106B (en) Image processing method, device and equipment
WO2019148978A1 (en) Image processing method and apparatus, storage medium and electronic device
CN109005366A (en) Camera module night scene image pickup processing method, device, electronic equipment and storage medium
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN109194882A (en) Image processing method, device, electronic equipment and storage medium
CN112289279B (en) Screen brightness adjusting method and device, storage medium and electronic equipment
KR20150099302A (en) Electronic device and control method of the same
US9380218B2 (en) Highlight exposure metric and its applications
KR20170084947A (en) Photographing apparatus and method for operating thereof
CN109618102B (en) Focusing processing method and device, electronic equipment and storage medium
US10187566B2 (en) Method and device for generating images
CN113810603B (en) Point light source image detection method and electronic equipment
CN110971833B (en) Image processing method and device, electronic equipment and storage medium
US9998676B2 (en) Image adjustment apparatus, method, and imaging apparatus to determine a boundary in an image based on a position of a light source
CN111050211A (en) Video processing method, device and storage medium
CN115529411B (en) Video blurring method and device
CN116457822A (en) Image processing method, device, storage medium and electronic equipment
US11989863B2 (en) Method and device for processing image, and storage medium
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN112348738B (en) Image optimization method, image optimization device, storage medium and electronic equipment
EP4174571A1 (en) Imaging control device, imaging control method, and program
CN110971813B (en) Focusing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination