CN117459836A - Image processing method, device and storage medium - Google Patents


Info

Publication number
CN117459836A
Authority
CN
China
Prior art keywords
image
sampled
fused
pixels
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311655174.7A
Other languages
Chinese (zh)
Inventor
李志方 (Li Zhifang)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202311655174.7A priority Critical patent/CN117459836A/en
Publication of CN117459836A publication Critical patent/CN117459836A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N 23/811: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation by dust removal, e.g. from surfaces of the image sensor or processing of the image signal output by the electronic image sensor

Abstract

The application provides an image processing method, a device, and a storage medium. In response to a user's shooting instruction, an initial image is acquired; a plurality of adjacent pixels of the same color in the initial image are fused to obtain a plurality of fused pixels, and the fused pixels are combined to obtain a sampled image. Image signal processing is then performed on the sampled image to obtain a processed image. Based on the principle by which noise is generated, fusing adjacent same-color pixels in the initial image cancels part of the noise, yielding a sampled image with a low noise ratio. Performing image signal processing on this sampled image with a reduced noise ratio effectively improves the display effect of the image.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and storage medium.
Background
With the continuing development of electronic technology, users place ever higher requirements on the performance of cameras in electronic devices, which can be evaluated by their imaging effect. In practical application scenarios, the imaging effect of a camera is related to the noise suppression capability of the electronic device: under the same conditions, the stronger the device's noise suppression, the better the imaging effect, and the weaker the suppression, the worse the imaging effect.
Currently, electronic devices may rely on a noise reduction module in the image signal processor to process noise and improve the imaging effect of the camera. However, as electronic devices become more miniaturized, board-level wiring causes increasingly serious coupling interference and noise. When noise is severe, relying only on the noise reduction module in the image signal processor to process it results in a poor image display effect.
Disclosure of Invention
The application provides an image processing method, a device, and a storage medium, aiming to solve the problem of a poor image display effect.
In order to achieve the above purpose, the present application adopts the following technical scheme:
First aspect: the application provides an image processing method, which includes: in response to a user's shooting instruction, acquiring an initial image; fusing a plurality of adjacent pixels of the same color in the initial image to obtain a plurality of fused pixels; and combining the fused pixels to obtain a sampled image. Image signal processing is then performed on the sampled image to obtain a processed image.
In the method, fusing a plurality of adjacent same-color pixels in the initial image fuses noiseless pixels with noisy pixels, so noise is cancelled to a certain extent and a sampled image with a low noise ratio is obtained. On this basis, performing image signal processing on the sampled image with the reduced noise ratio effectively improves the display effect of the image.
In one possible implementation manner, fusing a plurality of adjacent pixels of the same color in the initial image to obtain a plurality of fused pixels includes: grouping a plurality of adjacent pixels of the same color in the initial image to obtain a plurality of pixel groups; and, for each of the pixel groups, taking the average of the values of its same-color adjacent pixels as the value of the fused pixel corresponding to that group, thereby obtaining the plurality of fused pixels.
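The group-averaging step above can be sketched as follows; the function name and pixel values are illustrative, not taken from the patent:

```python
def fuse_pixel_groups(groups):
    """Fuse each group of adjacent same-color pixel values into one
    fused pixel by taking their average, as described above."""
    return [sum(g) / len(g) for g in groups]

# Two hypothetical groups of four neighboring green samples each
# collapse into two fused pixels holding the group means.
fused = fuse_pixel_groups([[100, 104, 96, 100], [60, 62, 58, 60]])
print(fused)  # [100.0, 60.0]
```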
In one possible implementation, the number of adjacent pixels of the initial image that undergo pixel fusion of the same color is determined by the sampling frequency.
In one possible implementation manner, fusing a plurality of adjacent pixels of the same color in the initial image to obtain a plurality of fused pixels includes: determining a sampling frequency according to the line scanning frequency of the image signal processor; and fusing a plurality of adjacent pixels with the same color in the initial image according to the sampling frequency to obtain a plurality of fused pixels.
In the method, based on the noise generation principle and the sampling frequency determined according to the line scanning frequency of the image signal processor, the noise frequency in the obtained sampled image is synchronous with the line scanning frequency of the image signal processor, so that noise can be suppressed, noise interference can be reduced, and the display effect of the image can be improved.
In one possible implementation manner, in response to a shooting instruction of a user, the process of acquiring the initial image further includes: acquiring a camera gain value; when the gain value of the camera is larger than the gain threshold value, fusing a plurality of adjacent pixels with the same color in the initial image to obtain a plurality of fused pixels. When the gain value of the camera is larger than the gain threshold value, the noise is larger, and under the condition, a plurality of adjacent pixels with the same color in the initial image can be fused, so that a sampled image with lower noise is obtained, and the display effect of the image is improved while the image processing efficiency is ensured.
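The gain-threshold gating described above can be sketched as a simple branch; the threshold value and function names are hypothetical:

```python
GAIN_THRESHOLD = 8.0  # hypothetical value; a real device would tune this

def process_capture(initial_image, camera_gain, fuse, isp):
    """Fuse adjacent same-color pixels only when the camera gain (and
    hence the expected noise) exceeds the threshold; otherwise pass the
    initial image straight to image signal processing."""
    if camera_gain > GAIN_THRESHOLD:
        initial_image = fuse(initial_image)
    return isp(initial_image)
```

A high-gain capture is fused before ISP processing, while a low-gain capture skips the extra step, which is how the method keeps processing efficiency when the expected noise is small.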
In one possible implementation, different photographing modes may correspond to different gain thresholds, and acquiring an initial image in response to a photographing instruction of a user includes: responding to the selection of a shooting mode by a user and a shooting instruction of the user, and acquiring an initial image corresponding to the shooting mode; when the gain value of the camera is larger than the gain threshold value corresponding to the shooting mode, fusing a plurality of adjacent pixels with the same color in the initial image corresponding to the shooting mode to obtain a plurality of fused pixels.
In one possible implementation manner, since resolution and/or definition of the image may be affected in the process of obtaining the sampled image, after obtaining the sampled image, the resolution and/or definition of the sampled image may be adjusted to obtain an adjusted image; and performing image signal processing on the adjusted image based on the image signal processor to obtain a processed image.
In one possible implementation, since the size of the image may be reduced during the process of obtaining the sampled image, after the sampled image is obtained, the size of the sampled image may be adjusted based on the size of the initial image, so as to obtain an image with an adjusted size; and performing image signal processing on the image after the size adjustment based on the image signal processor to obtain a processed image.
Second aspect: the application provides an electronic device comprising a processor and a memory: the memory is used for storing program codes and transmitting the program codes to the processor; the processor is adapted to perform the steps of an image processing method as described above according to instructions in the program code.
Third aspect: the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of an image processing method as described above.
Drawings
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a pixel distribution pattern according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of sampling an initial image according to an embodiment of the present application;
FIG. 4 is a schematic illustration of sampling an initial image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a dark stripe caused by a power supply ripple provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a power supply ripple provided in an embodiment of the present application with a fluctuation period not synchronized with a line scanning period of an image signal processor;
fig. 7 is a schematic diagram of synchronization of a fluctuation period of a power supply ripple and a line scanning period of an image signal processor according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an ISP processing flow provided in an embodiment of the present application;
fig. 9 is a schematic diagram of display effects of images corresponding to different preset sampling frequencies according to an embodiment of the present application;
FIG. 10 is a flowchart of another image processing method according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of adjusting a gain value of a camera according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terms "first", "second", "third", and the like in the description, claims, and drawings are used to distinguish between different objects, not to limit a particular order.
With the continuing development of electronic devices, users place ever higher performance requirements on their camera applications. Typically, the imaging effect of a camera is related to the magnitude of the noise: under the same conditions, the larger the noise, the worse the imaging effect, and the smaller the noise, the better. In practical application scenarios, an electronic device has a certain noise suppression capability and can improve image quality to some extent. For example, the image may be noise-reduced by a noise reduction module of an image signal processor (Image Signal Processor, ISP) in the electronic device to improve its display effect.
However, as electronic devices become more miniaturized, board-level wiring causes increasingly serious coupling interference. When noise is severe, processing it only with the noise reduction module in the image signal processor makes the image quality difficult to guarantee, so the imaging effect of the electronic device is poor.
Based on the above, the application provides an image processing method, which is used for acquiring an initial image and sampling pixels in the initial image in response to a shooting instruction of a user to obtain a sampled image, so that the noise ratio in the sampled image is lower than that in the initial image. And performing image signal processing on the sampled image based on the image signal processing flow to obtain a processed image. According to the noise generation principle, the sampling processing is carried out on the pixel points in the initial image, so that a sampled image with low noise ratio can be obtained, the image signal processing is carried out on the sampled image with reduced noise ratio, and the display effect of the image can be effectively improved.
An image processing method provided in the present application is described below with reference to the flowchart shown in fig. 1.
S101, starting the camera in response to the operation of opening the camera by a user.
For example, a user may activate a camera by clicking on a camera application icon in the electronic device user interface. The camera starting instruction can be sent to the electronic equipment in a voice awakening mode, so that the electronic equipment can start the camera corresponding to the camera starting instruction of the user.
S102, photographing is conducted in response to a photographing instruction of a user, and an initial image is obtained.
The initial Image may be an Image in a RAW Image Format (RAW). The RAW format is an unprocessed, uncompressed format, and may be understood as "RAW image encoded data" or "digital negative". The RAW format image may include an original signal of a sensor of the camera, and configuration information of a captured image link, such as setting of sensitivity (ISO), shutter speed, aperture value, white balance, and the like.
For an initial image in RAW format, each pixel in the image corresponds to one of red, green and blue, as shown in fig. 2, the pixels may have four distribution patterns, including a color image array (Bayer) pattern in GRBG format, a Bayer pattern in GBRG format, a Bayer pattern in BGGR format, and a Bayer pattern in RGGB format, where G corresponds to green, R corresponds to red, and B corresponds to blue.
S103, sampling the initial image to obtain a sampled image.
In one possible implementation, the electronic device may sample pixels in the original image based on the sampling frequency. In the process of sampling the pixel points in the initial image, the electronic device can fuse a plurality of adjacent pixels with the same color as a group to obtain fused pixel points, and the fused pixel points replace the plurality of adjacent pixels with the same color before fusion. The number of adjacent pixels of the same color of the initial image for pixel fusion can be determined by the sampling frequency, and the number of each group of pixels is illustratively the inverse of the sampling frequency. For example, the sampling frequency is one fourth, and the number of each group of pixels is 4; if the sampling frequency is one eighth, the number of pixels in each group is 8. In one example, an average of the sum of the values of the plurality of pixels in a group may be taken as the value of the corresponding fused pixel of the group.
The following describes, as an example, a color image array diagram in GRBG format, in which pixels in an initial image are sampled based on sampling frequency.
As shown in fig. 3, each pixel in the initial image corresponds to a color, and the value of each pixel indicates the shade of that color; for example, Gr00, Gr10, Gr20, Gr30, and the like are all green, but their shades may differ. In fig. 3, the broken lines indicate the distribution of noise, that is, noise is introduced into the pixels in the first, fourth, seventh, and tenth columns from left to right.
In the case of a sampling frequency of 1/4, Gr00 and Gr01 of the first column and Gr10 and Gr11 of the third column in the initial image can be replaced with the fused pixel Gr00'; R00 and R01 of the second column and R10 and R11 of the fourth column are replaced with the fused pixel R00'; Gr20 and Gr21 in the fifth column and Gr30 and Gr31 in the seventh column are replaced with the fused pixel Gr10'; and B00 and B01 of the first column and B10 and B11 of the third column are replaced with the fused pixel B00'. By analogy, the other pixels in the sampled image, such as R10', Gb00', Gb10', and Gr01', are obtained, yielding the sampled image in fig. 3.
Taking Gr00', R00', Gr10', and B00' as examples, their values may be calculated as follows: Gr00' = (Gr00+Gr10+Gr01+Gr11)/4; R00' = (R00+R01+R10+R11)/4; Gr10' = (Gr20+Gr30+Gr21+Gr31)/4; B00' = (B00+B01+B10+B11)/4. Similarly, the values of the other fused pixels, such as R10', Gb00', and Gb10', can be obtained.
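The 1/4-sampling formulas above amount to a plain 2×2 average; the numeric values below are hypothetical:

```python
# Four neighboring Gr samples from the initial image (hypothetical values).
gr00, gr10, gr01, gr11 = 120, 100, 118, 102

# Fused pixel, per the formula above: Gr00' = (Gr00+Gr10+Gr01+Gr11)/4.
gr00_fused = (gr00 + gr10 + gr01 + gr11) / 4
print(gr00_fused)  # 110.0
```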
Since Gr00 and Gr01 in the first column contain noise while Gr10 and Gr11 in the third column do not, fusing the noisy Gr00 and Gr01 with the noiseless Gr10 and Gr11 cancels part of the noise, so the noise in the fused pixel Gr00' is lower than the combined noise in Gr00 and Gr01.
As shown in fig. 4, in the case where the sampling frequency is 1/16, Gr00, Gr01, Gr02, Gr03 in the first column, Gr10, Gr11, Gr12, Gr13 in the third column, Gr20, Gr21, Gr22, Gr23 in the fifth column, and Gr30, Gr31, Gr32, Gr33 in the seventh column of the initial image may be replaced with the fused pixel Gr00″; the fused pixel R00″ replaces R00, R01, R02, R03 in the second column, R10, R11, R12, R13 in the fourth column, R20, R21, R22, R23 in the sixth column, and R30, R31, R32, R33 in the eighth column of the initial image; and so on, B00″ and Gb00″ in the sampled image are obtained, yielding the sampled image in fig. 4.
Illustratively, the fused pixels Gr00″, R00″, B00″, and Gb00″ may be calculated as follows: the value of Gr00″ is the sum of the values of the sixteen Gr pixels listed above multiplied by the sampling frequency of 1/16, that is, their average; likewise, the value of R00″ is the sum of the values of the sixteen R pixels multiplied by the sampling frequency. B00″ and Gb00″ in the sampled image are obtained in the same way and are not described again here.
It should be noted that, in an actual application scenario, the noise distribution is generally random, and the method provided in the present application may be applicable to various noise distribution situations, where the noise distribution in fig. 3 and fig. 4 is merely an example.
As shown in fig. 3 and 4, sampling the initial image at different sampling frequencies yields sampled images of different sizes, different noise frequencies, and different resolution and sharpness. The greater the sampling frequency, the better the resolution of the corresponding image, but the more noise may be retained.
In one possible implementation, the sampling frequency may be determined by weighing the desired size of the sampled image, the noise frequency, and the desired resolution and sharpness of the image; the sampling frequency is then configured on the electronic device, which samples the initial image based on it.
In another possible implementation, the electronic device may also determine a corresponding sampling frequency based on a line scanning frequency of the image signal processor, and the electronic device may sample pixels in the initial image based on the sampling frequency, so that a noise frequency in the obtained sampled image may be synchronized with the line scanning frequency of the image signal processor.
The power supply noise caused by the power supply ripple belongs to external noise, is random and unavoidable in general, and is shown in fig. 5, which is a schematic diagram of a dark state stripe caused by the power supply ripple. Since the output noise corresponds to the product of the power supply noise and the camera gain value, the power supply noise due to the power supply ripple is amplified by the camera gain value. The larger the camera gain value is, the larger the power supply noise is, and therefore, under the condition that the camera gain value is larger, the display effect of the generated image is reduced, for example, obvious dark state stripes appear in the image.
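The amplification relation stated above (output noise as the product of supply noise and camera gain) can be illustrated directly; the units and numbers are hypothetical:

```python
def output_noise(supply_noise, camera_gain):
    """Model the output noise as the power supply noise multiplied by
    the camera gain value, as the description states."""
    return supply_noise * camera_gain

# Doubling the gain doubles the visible supply noise.
print(output_noise(0.5, 8))   # 4.0
print(output_noise(0.5, 16))  # 8.0
```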
In an actual application scenario, the power supply ripple has a fluctuation period, and the image signal processor has a line scanning period. Wherein the line scan period of the image signal processor may be fixed and the fluctuation period of the power supply ripple is generally random, and in general, the fluctuation period of the power supply ripple is inconsistent with the line scan period of the image signal processor.
As shown in fig. 6, the present embodiment provides a schematic diagram that the fluctuation period of the power supply ripple is not synchronous with the line scanning period of the image signal processor. Wherein the solid line with an arrow indicates the line scanning frequency of the image signal processor and the broken line with an arrow indicates the power supply noise frequency. The fluctuation period of the power supply ripple does not coincide with the line scanning period of the image signal processor, which is one of the causes of occurrence of dark-state streaks in an image.
In the process of sampling an initial image, the noise frequency is changed, that is, the noise frequency in the obtained sampled image is different from the noise frequency in the initial image. In order to improve the display effect of the image, the electronic device may determine a sampling frequency corresponding to the initial image based on the line scanning frequency of the image signal processor, and sample the initial image based on the sampling frequency, so that the noise frequency in the obtained sampled image may be synchronized with the line scanning frequency of the image signal processor.
As shown in fig. 7, the present embodiment provides a schematic diagram of synchronization of a fluctuation period of a power supply ripple and a line scanning period of an image signal processor. Wherein the solid line with an arrow indicates the line scanning frequency of the image signal processor and the broken line with an arrow indicates the noise frequency. Under the condition that the noise frequency in the sampled image is synchronous with the line scanning frequency of the image signal processor, the noise in the sampled image can be suppressed, dark state stripes in the image are improved, and the display effect of the image is effectively improved.
In the embodiment of the application, different preset sampling frequencies can be adopted to sample the initial image of the debugging stage, so as to obtain sampled images of a plurality of debugging stages. The method comprises the steps of taking a preset sampling frequency corresponding to a sampled image with noise frequency synchronous with line scanning frequency of an image signal processor in the sampled image of a plurality of debugging stages as the sampling frequency. After the subsequent electronic equipment acquires the initial image, sampling processing is carried out on the initial image based on the sampling frequency, so that the noise frequency in the obtained sampled image is synchronous with the line scanning frequency of the image signal processor, the effect of noise suppression is further achieved, and the display effect of the image is improved.
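The debug-stage selection described above can be sketched as follows. The patent does not give the exact relation between sampling frequency and noise frequency, so this sketch assumes that fusing n rows stretches the noise period n-fold and picks the first candidate whose stretched period lands on a whole number of line-scan periods; all names and numbers are illustrative:

```python
def pick_sampling_frequency(noise_period_us, line_scan_period_us, candidates):
    """Return the first preset sampling frequency (e.g. 1/4, 1/8, 1/16)
    whose assumed stretched noise period is an integer multiple of the
    line-scan period, i.e. synchronized with it; None if none fits."""
    for f in candidates:
        n = round(1 / f)                 # pixels fused per group
        stretched = noise_period_us * n  # assumed new noise period
        if stretched % line_scan_period_us == 0:
            return f
    return None

# Hypothetical periods: a 5 us noise period and a 20 us line-scan period
# are synchronized by 1/4 sampling (5 * 4 == 20).
print(pick_sampling_frequency(5, 20, [1/4, 1/8, 1/16]))  # 0.25
```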
After obtaining the sampled image, the electronic device may perform S104 image signal processing on the sampled image, to obtain a processed image.
Compared with performing image signal processing directly on the initial image, performing image signal processing on the noise-reduced sampled image effectively reduces the noise in the processed image and improves its display effect.
In one possible implementation, the electronic device may sample the initial image based on a RAW domain size adjustment (RRZ) module in the ISP to obtain a sampled image. After the sampled image is obtained, the electronic device may continue processing the sampled image based on various modules in the image signal processor.
Referring to fig. 8, the schematic diagram of an ISP processing flow provided in the embodiment of the present application mainly relates to processing of RAW domain, RGB domain and YUV domain data in the ISP processing flow.
The modules that process RAW domain data include a bad pixel correction (Bad Pixel Correction, BPC) module, an optical black correction (Optical Black Correction, OBC) module, a lens shading correction (Lens Shading Correction, LSC) module, a fusion (FUS) module, a digital gain (Digital Gain) module, a local tone mapping (Local Tone Mapping, LTM) module, a RAW domain size adjustment (RRZ) module, and a demosaicing (De-mosaicing, DM) module.
The modules that process RGB domain data include a linear domain noise reduction (LDNR) module, a color correction matrix (Color Correction Matrix, CCM) module, a local contrast enhancement (LCE) module, a gamma (GMA) module, a digital contrast enhancer (Digital Contrast Enhancer, DCE) module, and a color space conversion (Color Space Convert, CSC) module.
The CSC module is mainly configured to convert the RGB image into a YUV image, so as to continue data processing in the YUV domain. The modules that process YUV domain data include an edge enhancement (Edge Enhancement, EE) module, an AKS module for improving edge smoothness, a chroma noise reduction (Chroma Noise Reduction, CNR) module, a color (Color) processing module, a multi-frame noise reduction (Multi Frame Noise Reduction, MFNR) module, a single-frame noise reduction (LPNR) module, a video noise reduction (3DNR) module, a local contrast enhancement (CA-LTM) module, a detail amplification (Clear Zoom) module, a high-frequency grain (HFG) module, and a module supporting end-to-end HDR recording (TCC).
In the ISP processing flow, the modules capable of processing noise mainly comprise an OBC module, an LDNR module, a CNR module, an MFNR module, an LPNR module, a 3DNR module, an EE module and the like. After the electronic device samples the initial image based on the RRZ module, the sampled image may be further processed based on the ISP processing flow, to generate a processed image.
The electronic device may sequentially perform linear domain noise reduction, edge enhancement, chroma noise reduction, multi-frame noise reduction, single-frame noise reduction, video noise reduction, and the like on the sampled image based on the LDNR, EE, CNR, MFNR, LPNR, and 3DNR modules, so as to reduce the noise in the sampled image and improve the display effect of the image.
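The sequential application of these modules can be sketched as a simple stage pipeline; the toy stages below merely stand in for the real LDNR and EE algorithms:

```python
def run_pipeline(image, stages):
    """Apply ISP stages in order; each stage is a function that takes an
    image (here a flat list of pixel values) and returns a new one."""
    for stage in stages:
        image = stage(image)
    return image

# Illustrative stand-ins, not the real module algorithms:
ldnr = lambda im: [max(p - 1, 0) for p in im]    # crude noise-floor cut
ee   = lambda im: [min(p + 1, 255) for p in im]  # crude edge boost
print(run_pipeline([10, 0, 200], [ldnr, ee]))  # [10, 1, 200]
```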
Sampling the initial image may reduce the resolution and sharpness of the sampled image. In one possible implementation, to restore them, the electronic device may, after obtaining the sampled image, adjust its resolution and sharpness to obtain an adjusted image, and then perform image signal processing on the adjusted image based on the image signal processor to obtain a processed image.
In one possible implementation, the size of the sampled image obtained by sampling the initial image is smaller than the size of the initial image, and the electronic device may, after obtaining the sampled image, adjust the size of the sampled image according to the original size, that is, according to the size of the initial image, based on a RRZ module in the image signal processor, to obtain the image with the adjusted size. On the basis, the electronic device can perform image signal processing on the image after the size adjustment to obtain a processed image.
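The size restoration step can be sketched with a nearest-neighbour upscale; this is only a stand-in for what the RRZ module does in hardware:

```python
def resize_nearest(image, out_w, out_h):
    """Upscale a 2D image (list of rows) to out_w x out_h by
    nearest-neighbour mapping, restoring the initial image size."""
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

# A 2x2 sampled image stretched back to 4x4.
small = [[1, 2],
         [3, 4]]
print(resize_nearest(small, 4, 4))
```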
Fig. 9 shows the display effect of processed images obtained at different sampling frequencies. The image operation mode corresponding to the first image 1301 and the second image 1302 is the video (Video) mode, with a resolution of 1080P, a frame rate of 30 fps, an aspect ratio of 16:9, and an original RAW domain size of 4080×2296. The first image 1301 corresponds to a display size of 1920×1080, a size cropped in the RAW domain of 1920×1080, and a scale factor of 22%. The second image 1302 corresponds to a display size of 2040×1148, a size cropped in the RAW domain of 2040×1148, and a scale factor of 25%. Although the display effect of both images is improved to some extent, the dark-state stripes in the first image 1301 remain more pronounced, while those in the second image 1302 are better suppressed.
In summary, by sampling the initial image, the method and the device obtain a sampled image with a low noise ratio compared with the initial image, and perform image signal processing on the sampled image with a reduced noise ratio, the image processing efficiency can be ensured, and meanwhile, the display effect of the image can be effectively improved. Meanwhile, the sampling frequency can be determined based on the line scanning frequency of the image processor, the initial image is sampled based on the sampling frequency, so that the noise frequency corresponding to the obtained sampled image is synchronous with the line scanning frequency of the image processor, the influence of noise is reduced, dark state stripes in the image are suppressed, and the display effect of the image is improved.
In another embodiment provided in the present application, in order to improve the image display effect while ensuring image processing efficiency, the electronic device may sample the initial image to obtain a sampled image only when the camera gain value is greater than a gain threshold. Fig. 10 is a flowchart of this image processing method provided in an embodiment of the present application.
After executing S101 and starting the camera in response to the user opening the camera, the electronic device may execute S201: take a picture in response to the user's photographing instruction to obtain an initial image and a camera gain value.
The camera gain value may be used to adjust the sensitivity of the camera and the brightness of the generated image. The camera gain value may be represented by a sensitivity value: the higher the camera gain value, the higher the sensitivity value, and the brighter the corresponding image. In a practical application scenario, if the current shooting environment is dark, the brightness of the external environment can be compensated by increasing the camera gain value based on the auto exposure (AE) mode in the electronic device, so that the image brightness is finally adjusted to a proper value and details in the image become more clearly visible.
However, while increasing the camera gain value raises the brightness of the generated image, it also amplifies the small noise points in the initial image. Compared with shooting in a well-lit environment, an image captured in a dark environment therefore contains more noise, which reduces its definition and display effect; for example, more obvious dark state stripes may appear in the image.
The AE mode is an automatic exposure mode in which, based on the aperture size manually selected by the photographer, the camera automatically determines an exposure combination according to information such as the brightness of the external environment and the sensitivity of the film or charge-coupled device (CCD), so as to obtain the shutter speed required for proper exposure.
In the embodiment of the application, the camera gain value can be obtained by taking a picture in response to the user's photographing instruction, so that whether the shooting environment is dark and whether significant noise has been introduced can be judged based on the magnitude of the camera gain value.
The camera gain value may be preset by a user, or may be obtained based on a 3A algorithm in the image signal processor, where the 3A algorithm includes, for example, auto focus (AF), auto exposure (AE), and auto white balance (AWB).
In one possible implementation, the user may manually adjust the camera gain value before performing the shooting operation, for example before clicking the shooting button to issue a shooting instruction, as shown in fig. 11.
The user can enter the professional photographing interface 1200 by clicking the professional photographing mode icon 1101 on the photographing interface 1100. A plurality of image parameter icons are displayed in the professional photographing interface 1200, such as an exposure compensation (AE) icon, a shutter speed (sec) icon, a camera gain (Gain) icon, an auto focus (AF) icon, and an auto white balance (AWB) icon. The user can adjust the camera gain value by dragging a cursor left and right, or directly click a specific camera gain value to determine the adjusted value, so that the electronic device can subsequently shoot based on the adjusted camera gain value. Illustratively, as in fig. 11, when the user clicks the camera gain icon 1201, the scale bar 1202 corresponding to the camera gain icon 1201 appears; by sliding the cursor 1203 on the scale bar 1202 to the right, the user adjusts the camera gain value from 0.6 to 0.9, at which point the adjusted camera gain value may be determined to be 0.9.
Based on the above, the electronic device can acquire the camera gain value adjusted by the user. After the user performs the shooting operation, for example by clicking the shooting button to issue a shooting instruction, the electronic device may shoot based on the user-adjusted camera gain value in response to that instruction, obtaining an initial image. The electronic device may then determine whether the initial image needs to be sampled based on the user-adjusted camera gain value.
In one possible implementation, the electronic device may automatically adjust the camera gain value based on the brightness of the current environment when the user frames a shot with the electronic device. For example, the user may select the auto-adjust icon corresponding to the camera gain in the professional photographing interface, so that the electronic device automatically determines the camera gain value based on the brightness of the shooting environment, for example by obtaining it from the 3A algorithm in the image signal processor. In response to the user's photographing instruction, the electronic device then shoots based on the automatically determined camera gain value to obtain an initial image, and may subsequently determine whether sampling of the initial image is required based on that value. It should be noted that the electronic device may also automatically determine the camera gain value based on the image signal processor by default, without manual selection by the user.
In one possible implementation, the user may further adjust the camera gain value automatically determined by the electronic device to obtain an adjusted camera gain value, so that the electronic device can take a photograph based on the adjusted value.
In one possible implementation, the user may select a capture mode, and the electronic device determines the camera gain value in response to the user's selection of the capture mode. The shooting modes may include, but are not limited to, a portrait mode, a night view mode, a quick snapshot mode, etc., and different shooting modes may correspond to different camera gain values.
For example, the user can select the night scene mode when the shooting environment is dark; the electronic device, in response to this selection, determines the camera gain value corresponding to the night scene mode so as to improve the brightness of the generated image and obtain an image with clear details. In response to the user's selection of the quick snapshot mode, the electronic device determines the camera gain value corresponding to that mode so as to quickly capture image details when shooting; and in response to the user's selection of the portrait mode, it determines the camera gain value corresponding to the portrait mode so as to improve the picture brightness and person details in the image.
Since the camera gain value can be expressed in terms of a sensitivity value, the higher the camera gain value, the greater the corresponding sensitivity value. In one possible implementation, the user may indirectly adjust the camera gain value by adjusting the sensitivity. The electronic device may obtain the sensitivity value in response to the user's adjustment, and then calculate the corresponding camera gain value from it. Thus, in response to the user's photographing instruction, the electronic device can take a picture based on the calculated camera gain value, obtaining an initial image and the camera gain value.
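The text does not specify how the sensitivity value maps to the camera gain value. A common convention, assumed here purely for illustration, is a linear relation relative to the sensor's base ISO (the base value below is hypothetical and sensor-specific):

```python
BASE_ISO = 100  # hypothetical base sensitivity; real value depends on the sensor

def camera_gain_from_iso(iso, base_iso=BASE_ISO):
    """Linear model: gain 1.0 at base ISO, doubling with every ISO stop."""
    if iso < base_iso:
        raise ValueError("ISO below the sensor's base sensitivity")
    return iso / base_iso

print(camera_gain_from_iso(100))  # 1.0
print(camera_gain_from_iso(800))  # 8.0
```

Under this model, raising the sensitivity from ISO 100 to ISO 800 corresponds to an 8x gain, which is the kind of value the following threshold comparison would then operate on.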
After obtaining the initial image and the camera gain value, the electronic device may perform S202: compare the camera gain value with a gain threshold to determine whether the camera gain value is greater than the gain threshold.
The gain threshold in the embodiment of the application can be set according to actual requirements.
In one possible implementation, the electronic device may determine the gain threshold corresponding to a shooting mode in response to the user's selection of that mode. Different shooting modes may correspond to different gain thresholds: for example, among the shooting modes, the portrait mode may correspond to a first gain threshold, the night scene mode to a second gain threshold, and the motion capture mode to a third gain threshold, where the first, second, and third gain thresholds are all different.
If the camera gain value is greater than the corresponding gain threshold, S203 may be executed: sample the initial image to obtain a sampled image.
In the case where the camera gain value is greater than the gain threshold, the current camera gain value can be considered large and likely to introduce higher noise, possibly reducing the display effect of the generated image. Therefore, in the embodiment of the application, a sampled image with a reduced noise ratio is obtained by sampling the pixel points in the initial image. It should be noted that the manner of sampling the pixel points in the initial image is described above in S103 and will not be repeated here.
In an example, the electronic device may determine the corresponding sampling frequency based on the camera gain value. For example, when the difference between the camera gain value and the gain threshold is greater than a difference threshold, the electronic device samples the initial image based on a first sampling frequency to obtain a sampled image; when the difference is greater than 0 and less than or equal to the difference threshold, the electronic device may sample the initial image based on a second sampling frequency, where the first sampling frequency may be less than the second sampling frequency. After the sampled image is obtained, S204 may be performed: image signal processing is performed on the sampled image to obtain a processed image.
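The branching among S202, S203, and S205, including the gain-dependent choice of sampling frequency, can be sketched as follows. All threshold values and the frequency encodings are hypothetical; the text only requires that the first sampling frequency be less than the second:

```python
# Hypothetical values chosen for illustration only.
GAIN_THRESHOLD = 8.0
DIFF_THRESHOLD = 4.0
FIRST_SAMPLING_FREQ = 1    # coarser sampling for very high gain
SECOND_SAMPLING_FREQ = 2   # finer sampling for moderately high gain

def choose_processing(camera_gain):
    """Select the fig. 10 branch (S203 vs S205) for a given gain value."""
    if camera_gain <= GAIN_THRESHOLD:
        return ("process_initial", None)        # S205: no sampling needed
    diff = camera_gain - GAIN_THRESHOLD
    if diff > DIFF_THRESHOLD:
        return ("sample", FIRST_SAMPLING_FREQ)  # S203 at the first frequency
    return ("sample", SECOND_SAMPLING_FREQ)     # S203 at the second frequency

print(choose_processing(6.0))   # low gain: process the initial image directly
print(choose_processing(10.0))  # moderately high gain: second frequency
print(choose_processing(16.0))  # very high gain: first frequency
```

The design point is that the further the gain exceeds the threshold, the more aggressive the sampling, since a larger gain implies a larger amplified-noise component to suppress.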
If the camera gain value is less than or equal to the gain threshold, S205 may be performed: image signal processing is performed on the initial image to obtain a processed image.
In one possible implementation, when the camera gain value is less than or equal to the gain threshold, for example because the shooting ambient light is bright, the noise introduced by the camera gain value is small, and the noise reduction module in the image signal processor can handle it through noise reduction processing.
For example, in view of image processing efficiency, if the camera gain value is less than or equal to the gain threshold, the electronic device does not need to sample the initial image and may directly perform image signal processing on it with the image signal processor, following the image signal processing flow shown in fig. 8, to obtain a processed image.
In summary, when the camera gain value is greater than the gain threshold, the initial image is sampled to obtain a sampled image with a reduced noise ratio, and image signal processing is performed on that sampled image, ensuring image processing efficiency while effectively improving the display effect of the image.
In some embodiments, the electronic device may be a cell phone, tablet, desktop computer, laptop, notebook, ultra-mobile personal computer (UMPC), handheld computer, netbook, personal digital assistant (PDA), wearable electronic device, smart watch, etc.; the specific form of the electronic device is not particularly limited in this application. In this embodiment, the structure of the electronic device may be as shown in fig. 12, which is a schematic structural diagram of the electronic device according to the embodiment of the present application.
As shown in fig. 12, the electronic device may include a processor 110, a sensor module 120, a camera 130, a display screen 140, and the like.
It is to be understood that the configuration illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU).
The different processing units may be separate devices or may be integrated in one or more processors. For example, in the present application, an initial image and a camera gain value may be acquired in response to a user's shooting instruction; when the camera gain value is greater than the gain threshold, the pixel points in the initial image are sampled to obtain a sampled image whose noise ratio is lower than that of the initial image, and image signal processing is then performed on the sampled image based on the image signal processing flow to obtain the processed image.
In the embodiment of the application, the electronic device may implement the display function through the GPU, the display screen 140, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display screen 140 and the application processor. A series of graphical user interfaces (GUIs) may be displayed on the display screen 140 of the electronic device; in some embodiments, the electronic device may include 1 or N display screens 140, where N is a positive integer greater than 1. For example, in the embodiment of the present application, the user may click the camera icon on the menu interface of the electronic device to enter the shooting interface, and then take a picture by clicking the shooting button in that interface.
In one possible implementation, the mobile phone may implement the photographing function through the ISP, the camera 130, the GPU, the display screen 140, the sensor module 120, the application processor, and the like.
The camera 130 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor; it converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device may include 1 or N cameras 130, where N is a positive integer greater than 1.
The ISP may process the data fed back by the camera 130. For example, when photographing, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP to be processed into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 130.
In this embodiment of the present application, in order to improve the display effect of the image displayed in the display screen 140, in the process of processing the initial image, when the gain value of the camera is greater than the gain threshold, the initial image may be sampled according to the preset sampling frequency based on the RRZ module in the ISP, so as to obtain the sampled image. After the sampled image is obtained based on the RRZ module, image signal processing may continue on the sampled image based on the ISP, obtaining a processed image, and displaying the processed image in the display screen 140.
The present embodiment also provides a computer-readable storage medium including instructions that, when executed on an electronic device, cause the electronic device to perform the related method steps described above to implement the method of the above embodiment.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, comprising:
responding to a shooting instruction of a user, and acquiring an initial image;
fusing a plurality of adjacent pixels with the same color in the initial image to obtain a plurality of fused pixels;
combining the plurality of fused pixel points to obtain a sampled image;
and performing image signal processing on the sampled image to obtain a processed image.
2. The method according to claim 1, wherein fusing a plurality of adjacent pixels of the same color in the initial image to obtain a plurality of fused pixels comprises:
taking a plurality of adjacent pixels with the same color in the initial image as a group to obtain a plurality of pixel groups;
and aiming at each pixel point group in the pixel point groups, taking an average value of the sum of the values of the plurality of adjacent pixels with the same color as the value of the fused pixel point corresponding to the pixel point group to obtain the fused pixel points.
3. The method of claim 1, wherein the number of adjacent pixels of the same color in the initial image that are fused is determined by a sampling frequency.
4. A method according to claim 3, wherein the fusing a plurality of adjacent pixels of the same color in the initial image to obtain a plurality of fused pixels includes:
determining the sampling frequency according to the line scanning frequency of the image signal processor;
and fusing a plurality of adjacent pixels with the same color in the initial image according to the sampling frequency to obtain a plurality of fused pixels.
5. The method of claim 1, wherein the acquiring the initial image in response to the user's photographing instruction further comprises: acquiring a camera gain value;
fusing a plurality of adjacent pixels with the same color in the initial image to obtain a plurality of fused pixels, including:
and when the gain value of the camera is larger than a gain threshold value, fusing a plurality of adjacent pixels with the same color in the initial image to obtain a plurality of fused pixels.
6. The method of claim 5, wherein the acquiring the initial image in response to the user's photographing instruction comprises:
responding to the selection of a shooting mode by a user and a shooting instruction of the user, and acquiring an initial image corresponding to the shooting mode;
when the gain value of the camera is greater than a gain threshold, fusing a plurality of adjacent pixels with the same color in the initial image to obtain a plurality of fused pixels, including:
when the gain value of the camera is larger than the gain threshold value corresponding to the shooting mode, fusing a plurality of adjacent pixels with the same color in the initial image corresponding to the shooting mode to obtain a plurality of fused pixels.
7. The method of claim 1, wherein said performing image signal processing on said sampled image to obtain a processed image comprises:
adjusting the resolution and/or definition of the sampled image to obtain an adjusted image;
and carrying out image signal processing on the adjusted image based on an image signal processor to obtain the processed image.
8. The method of claim 1, wherein said performing image signal processing on said sampled image to obtain a processed image comprises:
adjusting the size of the sampled image based on the size of the initial image to obtain an image with the adjusted size;
and carrying out image signal processing on the image with the adjusted size based on an image signal processor to obtain the processed image.
9. An electronic device, the electronic device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of an image processing method according to any of claims 1-8 according to instructions in the program code.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the steps of an image processing method according to any of claims 1-8.
CN202311655174.7A 2023-12-05 2023-12-05 Image processing method, device and storage medium Pending CN117459836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311655174.7A CN117459836A (en) 2023-12-05 2023-12-05 Image processing method, device and storage medium


Publications (1)

Publication Number Publication Date
CN117459836A true CN117459836A (en) 2024-01-26

Family

ID=89580120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311655174.7A Pending CN117459836A (en) 2023-12-05 2023-12-05 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117459836A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011101128A (en) * 2009-11-04 2011-05-19 Canon Inc Video processing apparatus, video processing method, program and storage medium
CN103733220A (en) * 2012-08-07 2014-04-16 展讯通信(上海)有限公司 Image processing method and device based on bayer format
CN105787900A (en) * 2016-03-14 2016-07-20 哈尔滨工程大学 Wavelet-image-decomposition-based denoising method for periodic noises of side-scanning sonar power supply
CN110675404A (en) * 2019-09-03 2020-01-10 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
CN112422818A (en) * 2020-10-30 2021-02-26 上海大学 Intelligent screen dropping remote detection method based on multivariate image fusion
CN114693580A (en) * 2022-05-31 2022-07-01 荣耀终端有限公司 Image processing method and related device
CN115174832A (en) * 2022-08-09 2022-10-11 杭州联吉技术有限公司 Imaging device, interference fringe eliminating method and device thereof, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination