WO2019019904A1 - White balance processing method and apparatus, and terminal - Google Patents

White balance processing method and apparatus, and terminal

Info

Publication number
WO2019019904A1
WO2019019904A1 (PCT/CN2018/094971, CN2018094971W)
Authority
WO
WIPO (PCT)
Prior art keywords
current
camera
image
white balance
determining
Prior art date
Application number
PCT/CN2018/094971
Other languages
English (en)
Chinese (zh)
Inventor
袁全
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019019904A1

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H04N 23/84 - Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Definitions

  • the present application relates to the field of camera technologies, and in particular, to a white balance processing method, apparatus, and terminal.
  • the two cameras can be switched according to the shooting environment parameters. For example, when the zoom ratio is zoomed from 1x to 2x, the wide-angle lens is switched to the telephoto lens.
  • the object of the present invention is to solve at least one of the above technical problems to some extent.
  • To this end, the present application proposes a white balance processing method, which adjusts the weight of each camera according to changes in the shooting environment parameters, thereby ensuring that an image suitable for the current environment parameters can be obtained, avoiding the AWB jump phenomenon caused by camera switching, and improving the user experience.
  • the present application also proposes a white balance processing device.
  • the application also proposes a terminal.
  • the application also proposes a computer readable storage medium.
  • the application also proposes a computer program product.
  • An embodiment of the present application provides a white balance processing method for a terminal that includes at least two different types of cameras.
  • the method includes:
  • during the shooting process, determining the current weight of each camera corresponding to the current shooting environment parameter; and
  • performing white balance processing on the currently acquired image according to the current weight of each camera.
  • the white balance processing method provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • a further embodiment of the present application provides a white balance processing apparatus, wherein the white balance processing apparatus is applied to a terminal including at least two different types of cameras, the apparatus comprising:
  • a first determining module, configured to determine the current weight of each camera corresponding to the current shooting environment parameter during the shooting process; and
  • a processing module, configured to perform white balance processing on the currently acquired image according to the current weight of each camera.
  • the white balance processing device provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • a further aspect of the present application provides a terminal, including: a housing, a processor, a memory, a circuit board, a power supply circuit, and at least two different types of cameras, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to each circuit or device of the terminal; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the white balance processing method described in the above embodiments.
  • the terminal provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • a further aspect of the present application provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the white balance processing method as described in the above embodiments.
  • the computer readable storage medium provided by the embodiment of the present application may be disposed in any terminal that includes at least two different types of cameras and needs white balance processing, and the white balance processing method stored thereon is executed when white balance processing is performed.
  • the weight of each camera can be adjusted according to the change of the shooting environment parameters, thereby ensuring that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • a further aspect of the present application provides a computer program product for performing a white balance processing method as described in the foregoing embodiments when instructions in the computer program product are executed by a processor.
  • the computer program product provided by the embodiment of the present application can be set in any terminal that includes at least two different types of cameras and needs to perform white balance adjustment. When the program corresponding to the white balance processing method is executed, the weight of each camera is adjusted according to changes in the shooting environment parameters, thereby ensuring that an image suitable for the current environment parameters can be obtained, avoiding the AWB jump phenomenon caused by camera switching, and improving the user experience.
  • FIG. 1 is a flow chart of a white balance processing method according to an embodiment of the present application.
  • FIG. 2 is a flow chart of a white balance processing method according to another embodiment of the present application.
  • FIG. 3 is a structural diagram of a white balance processing apparatus according to an embodiment of the present application.
  • FIG. 4 is a structural diagram of a white balance processing apparatus according to another embodiment of the present application.
  • FIG. 5 is a structural diagram of a terminal according to an embodiment of the present application.
  • Embodiments of the present application address a problem in the related art: in a terminal with multiple cameras, the AWB settings of different cameras may differ, which causes an AWB jump to occur when switching from one camera to another and results in a poor user experience. To solve this problem, a white balance processing method is proposed.
  • the current weight of each camera corresponding to the current shooting environment parameter may be determined, so that the currently acquired image is white balanced according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • FIG. 1 is a flow chart of a white balance processing method according to an embodiment of the present application.
  • the method includes:
  • step 101 during the shooting, the current weight of each camera corresponding to the current shooting environment parameter is determined.
  • the white balance processing method provided by the embodiment of the present application may be performed by the white balance processing apparatus provided by the embodiment of the present application.
  • the white balance processing device can be configured in any terminal having at least two different types of cameras. Among them, there are many types of terminals, which can be selected according to application needs, such as mobile phones, computers, cameras, and the like.
  • the shooting environment parameter may include any one or more of a zoom factor, an object distance, a light source condition, and the like.
  • the sum of the current weights of each camera is 1.
  • step 101 can be implemented in the following manner:
  • the current weight of each camera is determined based on the current zoom factor, object distance, and/or light source conditions.
  • the light source condition refers to the intensity, angle, color and the like of the light.
  • Specifically, the shooting environment parameters suitable for each camera, such as the zoom magnification, the object distance and/or the light source condition, may be determined in advance according to the type of each camera, and the current weight of each camera is then determined by comparing the current shooting environment parameters with the shooting environment parameters suitable for each camera. For example, the suitable shooting environment parameter determined for camera A is a zoom magnification of 1x to 6x, and the suitable shooting environment parameter for camera B is a zoom magnification of 6x or more.
  • In this case, when the current zoom factor is between 1x and 6x and differs greatly from 6x, for example when it is less than 4.5x, the weight of camera A may be set to 1 and the weight of camera B to 0; when the current zoom factor is between 4.5x and 6x, as the zoom factor increases from low to high, the weight of camera B gradually increases and the weight of camera A gradually decreases, until the weight of camera A is 0 and the weight of camera B is 1.
  • For example, when the zoom magnification is below 4.5x, the weights corresponding to camera A and camera B are 1 and 0, respectively; when the zoom magnification is 4.5x-5x, the weights corresponding to camera A and camera B are 0.8 and 0.2, respectively; when the zoom magnification is 5x-5.5x, the weights corresponding to camera A and camera B are 0.5 and 0.5, respectively; when the zoom magnification is 5.5x-6x, the weights corresponding to camera A and camera B are 0.2 and 0.8, respectively; and when the zoom magnification is 6x or more, the weights corresponding to camera A and camera B are 0 and 1, respectively. Therefore, if the current shooting environment parameter is a zoom magnification of 5.8x, it can be determined that the current weights of camera A and camera B corresponding to the current shooting environment parameter are 0.2 and 0.8, respectively, as illustrated by the sketch below.
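  • As a minimal illustration of the stepped weight assignment above (the zoom breakpoints and weight values are the example figures from this embodiment, and the function name is purely illustrative), the mapping can be implemented as a simple table lookup:

```python
def stepped_weights(zoom):
    """Return (weight_A, weight_B) for the example breakpoints above.

    Camera A is suited to 1x-6x zoom and camera B to 6x and above;
    the two weights always sum to 1.
    """
    table = [
        (4.5, (1.0, 0.0)),   # zoom < 4.5x
        (5.0, (0.8, 0.2)),   # 4.5x <= zoom < 5x
        (5.5, (0.5, 0.5)),   # 5x   <= zoom < 5.5x
        (6.0, (0.2, 0.8)),   # 5.5x <= zoom < 6x
    ]
    for upper, weights in table:
        if zoom < upper:
            return weights
    return (0.0, 1.0)        # zoom >= 6x

print(stepped_weights(5.8))  # -> (0.2, 0.8), matching the worked example above
```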
  • Alternatively, when the current zoom factor is above 6x and differs greatly from 6x, for example when it is more than 8x, the weight of camera B is 1 and the weight of camera A is 0; when the current zoom factor is between 6x and 8x, as the zoom factor approaches 6x, the weight of camera A gradually increases and the weight of camera B gradually decreases, until the weight of camera A is 1 and the weight of camera B is 0.
  • It should be noted that the range of shooting environment parameters over which the weights of camera A and camera B gradually increase or decrease can be set as needed. For example, it can be set to a zoom magnification of 4.5x-6x, or 6x-8x, or 4.5x-8x. That is, the range may lie within the shooting environment parameters suitable for camera A, or within those suitable for camera B, or span the shooting environment parameters suitable for both camera A and camera B.
  • the process of increasing or decreasing the current weight of each camera may be linearly changed, or may be changed in a stepwise manner or other regular manner, and is not limited herein.
  • In addition, during the process of increasing or decreasing the current weight of each camera, the slope of a linear change or the step size of a stepwise change can be set as needed. For example, for every 0.5x increase of the zoom magnification, the weight corresponding to camera A may be decreased by 0.1 and the weight corresponding to camera B increased by 0.1; or, for every 0.5x increase of the zoom magnification, the weight corresponding to camera A may be decreased by 0.2 and the weight corresponding to camera B increased by 0.2, and so on.
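  • A linear variant of the same idea is sketched below, assuming a transition range of 4.5x-6x; both the range and the slope are configurable, as noted above, and the function name is illustrative:

```python
def linear_weights(zoom, lo=4.5, hi=6.0):
    """Linearly ramp camera A's weight from 1 down to 0 over [lo, hi]."""
    if zoom <= lo:
        w_a = 1.0
    elif zoom >= hi:
        w_a = 0.0
    else:
        w_a = (hi - zoom) / (hi - lo)   # slope is -1 / (hi - lo) per 1x of zoom
    return w_a, 1.0 - w_a               # the two weights always sum to 1
```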
  • the method may further include:
  • the current light source condition is determined according to the aperture size, shutter time, and sensitivity of each camera.
  • the zoom command and the image adjustment command may be automatically triggered by the terminal according to the shooting requirement, or may be manually triggered by the user according to the need, and is not limited herein.
  • the user performs an operation such as adjusting the magnification of the subject in the image according to the need, so that when the image adjustment instruction is triggered, the camera can adjust the current zoom factor according to the image adjustment instruction; Or, when the zoom command is acquired, the camera can adjust the current zoom factor according to the zoom command. Therefore, in the embodiment of the present application, the current zoom factor can be determined according to the zoom instruction or the image adjustment instruction.
  • Depending on the current object distance, the output value of the distance sensor or the depth information contained in the currently acquired image will differ. Therefore, in the embodiment of the present application, the current object distance may be determined according to the output value of the distance sensor or the depth information included in the currently acquired image.
  • the current light source condition can be determined according to the aperture size, the shutter time, and the sensitivity of each camera.
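  • The embodiment does not prescribe a formula for combining these exposure parameters, but one common way to summarise aperture, shutter time, and sensitivity into a single light-level figure is the exposure value at ISO 100 (EV100); the sketch below is only an assumed illustration of such an estimate:

```python
import math

def estimate_light_level(f_number, shutter_s, iso):
    """Estimate the scene light level as EV100 from aperture, shutter time and ISO."""
    return math.log2((f_number ** 2) / shutter_s) - math.log2(iso / 100.0)

# e.g. f/2.0 at 1/50 s and ISO 400 gives roughly EV 5.6, a dim indoor level
print(round(estimate_light_level(2.0, 1 / 50, 400), 1))
```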
  • the current weight of each camera can be determined according to the current zoom factor, object distance, and/or light source condition.
  • the light source condition may be an angle of the light.
  • the light in the current shooting scene may be from any direction in the three-dimensional space where the subject is the center of the sphere.
  • The angle of the light can be roughly divided into front light (following light), backlight, side light, side backlight, top light, bottom light, and the like.
  • the current ray angle can be determined in various ways.
  • the current ray angle is determined according to the color channel histogram corresponding to the currently acquired picture.
  • In practical applications, color channel histograms are usually acquired using RGB data. Therefore, in this embodiment, if the original image data of the currently acquired picture is not RGB data, the non-RGB data must first be converted into RGB data. Then, according to the RGB data, the color channel histogram corresponding to the currently collected picture is determined, which is not described in detail in this embodiment.
  • If the RGB data acquired in this embodiment includes the three color channels red (R), green (G), and blue (B), three corresponding color channel histograms are determined: a red channel histogram, a green channel histogram, and a blue channel histogram.
  • In each color channel histogram, the horizontal axis represents brightness, and the vertical axis represents the proportion of pixels in the image at that brightness.
  • If the acquired image data includes four color channels, for example Bayer raw data, four corresponding color channel histograms are determined: a red channel histogram, a green (Gr) channel histogram, a green (Gb) channel histogram, and a blue channel histogram.
  • the color channel histogram corresponding to the currently collected picture may include the correspondence between different brightness and pixel ratio under each color channel. Due to the different angles of light in the shooting scene, the proportions of pixels of different brightness in the currently acquired picture are different. Therefore, in the embodiment of the present application, the current ray angle can be determined according to the color channel histogram corresponding to the currently collected picture.
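  • The per-channel histograms described here (brightness on the horizontal axis, pixel proportion on the vertical axis) can be computed directly from the RGB data; the numpy sketch below assumes 8-bit RGB input:

```python
import numpy as np

def channel_histograms(rgb_image):
    """Return {channel name: pixel-ratio histogram over 256 brightness levels}.

    rgb_image is an H x W x 3 uint8 array; each histogram sums to 1, so the
    vertical axis is the proportion of pixels at each brightness level.
    """
    hists = {}
    total = rgb_image.shape[0] * rgb_image.shape[1]
    for idx, name in enumerate(("R", "G", "B")):
        counts, _ = np.histogram(rgb_image[:, :, idx], bins=256, range=(0, 256))
        hists[name] = counts / total
    return hists
```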
  • the brightness information of the current shooting scene may be obtained first, and then the pixel ratio threshold and the brightness threshold corresponding to the current shooting scene are determined according to the brightness information, and then according to the pixel ratio threshold, the brightness threshold, and the color channel histogram corresponding to the current shooting scene, Determine the angle of light in the current shooting scene.
  • the brightness information corresponding to the current shooting scene may be acquired from the automatic exposure control system.
  • In actual applications, when the terminal performs a shooting operation on the shooting area, the automatic exposure control (AEC) system can perform automatic exposure compensation processing on the captured image according to the brightness of the shooting scene. Therefore, in this embodiment, the brightness information corresponding to the current shooting scene can be obtained directly from the AEC system, so that the detection error of the shooting environment can be reduced and the quality of the captured image can be improved.
  • In the related art, the terminal may detect the shooting environment based on a color coding space (YUV). However, YUV data is obtained after a series of processing by the image signal processor (ISP), which causes the YUV data to not fully reflect the current shooting environment and results in detection errors. Therefore, in this embodiment, the original image data in the ISP may be used, such as the number of grids into which the image is divided, the average brightness of each grid, the number of pixels included in each grid, and the ratio of overexposed pixels included in each grid, to determine the brightness information corresponding to the current environment.
  • the pixel ratio threshold and the brightness threshold corresponding to the current shooting scene may be determined according to the brightness information corresponding to the current shooting scene.
  • It can be understood that the same light angle has different features in shooting scenes of different brightness. For example, in an outdoor backlit scene, the preview picture mainly shows that the dark part of the image is very dark and the bright part is overexposed, whereas in a night scene the preview picture mainly shows pixels concentrated in the dark part, with only a small overexposed area. Therefore, in order to make the pixel ratio threshold and the brightness threshold determined for the current shooting scene more accurate, the present application may roughly divide shooting scenes in advance and set a pixel ratio threshold and a brightness threshold for each divided scene.
  • the shooting scene can be divided into three according to the brightness of the shooting scene, which are a high-brightness scene, a medium-brightness scene, and a low-brightness scene.
  • The high-brightness scene may be an outdoor area with good light, the medium-brightness scene may be an indoor area with fairly good light, and the low-brightness scene may be a dimly lit area or an area with poor light at night, etc., which is not specifically limited in this application.
  • the embodiment may further set the pixel ratio threshold and the brightness threshold for each scene.
  • the pixel ratio threshold and the brightness threshold of each of the above scenarios may be adaptively set according to actual usage requirements, which is not specifically limited in this application.
  • During shooting, this embodiment may match the brightness information corresponding to the current shooting scene with each of the divided scenes, so as to determine the pixel ratio threshold and the brightness threshold corresponding to the current shooting scene.
  • Specifically, the target brightness range to which the brightness of the current shooting scene belongs may first be determined according to the preset correspondence between scenes and brightness ranges, and then the pixel ratio threshold and the brightness threshold corresponding to the target brightness range are determined according to the preset correspondence between brightness ranges and pixel ratio thresholds and brightness thresholds.
  • Alternatively, the target scene to which the current shooting scene belongs is determined according to the successfully matched brightness range, and the corresponding pixel ratio threshold and brightness threshold are then acquired according to the target scene, as sketched below.
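  • A possible implementation of this matching step is sketched below; the brightness ranges and threshold values are placeholder assumptions for illustration, since the embodiment leaves the concrete numbers to be tuned:

```python
# (scene, brightness range, pixel ratio threshold, brightness threshold);
# all numeric values below are illustrative placeholders, not from the patent.
SCENE_TABLE = [
    ("high_brightness",   (200, 256), 0.20, 60),
    ("medium_brightness", (80, 200),  0.15, 50),
    ("low_brightness",    (0, 80),    0.10, 40),
]

def thresholds_for_scene(brightness):
    """Match the current scene brightness to a scene class and return its thresholds."""
    for name, (lo, hi), pixel_ratio_thr, brightness_thr in SCENE_TABLE:
        if lo <= brightness < hi:
            return name, pixel_ratio_thr, brightness_thr
    return "low_brightness", 0.10, 40   # fallback for out-of-range values
```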
  • the current ray angle may be determined according to the pixel ratio threshold, the brightness threshold, and the color channel histogram of the current preview scene.
  • When the shooting scene is backlit, the pixels in the histogram of each color channel are mostly concentrated at positions of high brightness or low brightness, so the histogram appears in a "double peak" (bimodal) form.
  • For example, assume the angle of the light includes two cases: front light and backlight.
  • the pixel ratio threshold may include two, which are a first pixel ratio threshold and a second pixel ratio threshold, respectively.
  • the brightness threshold also includes two, which are a first brightness threshold and a second brightness threshold, respectively.
  • the pixel ratio threshold and the brightness threshold are mutually corresponding, that is, the first pixel ratio threshold corresponds to the first brightness threshold, and the second pixel ratio threshold corresponds to the second brightness threshold.
  • the histogram of the current preview image can be used to determine whether the light angle in the current shooting scene is backlit. Specifically, it can be judged by the following methods:
  • the pixel ratio values in the histogram of each color channel are accumulated in order of brightness from low to high and, separately, from high to low; the brightness at which the accumulated ratio counted from low brightness first reaches the first pixel ratio threshold is taken as the first brightness value, and the brightness at which the accumulated ratio counted from high brightness first reaches the second pixel ratio threshold is taken as the second brightness value;
  • if, for any color channel, the first brightness value is smaller than the first brightness threshold and the second brightness value is greater than the second brightness threshold, it is determined that the angle of the light in the shooting scene is backlight; otherwise, it is determined to be front light. A sketch of this test is given below.
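  • A sketch of this backlight test follows, under the assumption stated above about how the first and second brightness values are obtained from the cumulative pixel ratios:

```python
import numpy as np

def is_backlit(hist, ratio_thr1, ratio_thr2, brightness_thr1, brightness_thr2):
    """Apply the backlight criterion to one color-channel histogram.

    hist is a 1-D array of pixel ratios indexed by brightness (it sums to 1).
    """
    cum_low = np.cumsum(hist)              # accumulate from low brightness upward
    cum_high = np.cumsum(hist[::-1])       # accumulate from high brightness downward

    b1 = int(np.argmax(cum_low >= ratio_thr1))                   # first brightness value
    b2 = len(hist) - 1 - int(np.argmax(cum_high >= ratio_thr2))  # second brightness value

    return b1 < brightness_thr1 and b2 > brightness_thr2

def light_angle(histograms, ratio_thr1, ratio_thr2, brightness_thr1, brightness_thr2):
    """Backlight if any color channel meets the criterion, otherwise front light."""
    backlit = any(is_backlit(h, ratio_thr1, ratio_thr2, brightness_thr1, brightness_thr2)
                  for h in histograms.values())
    return "backlight" if backlit else "front light"
```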
  • In practical applications, the angle of the light may also include side light, side backlight, top light, and so on. In order to make the determined light angle more accurate, the number and values of the pixel ratio thresholds and the corresponding brightness thresholds may be set as needed for the same scene, and the angle of the light in the current shooting scene is then determined according to the set pixel ratio thresholds and brightness thresholds.
  • the angle of the light in the current shooting scene can be determined according to the current shooting time, position, and the like.
  • The position includes the geographic location where the terminal is located, the orientation of the terminal (that is, the shooting direction of the terminal), and the like. For example, at 12 noon in Beijing, suppose the mobile phone is shooting towards the south. Because the sun is in the south at 12 o'clock in Beijing and the phone is facing south, the subject is between the sun and the terminal, that is, the light angle of the camera's preview image is backlight.
  • the location of the terminal can be obtained according to a positioning system in the terminal, such as a Global Positioning System (GPS).
  • The shooting time can be obtained from the clock on the terminal, and the orientation of the terminal can be determined based on sensors in the terminal.
  • Alternatively, the angle of the light may be determined according to the direction of the shadow of the object to be photographed in the preview image. Specifically, the direction of the shadow in the preview image can be determined according to the contour extracted from the preview image and the position of that contour in the preview image, and the angle of the light in the current shooting scene is then determined based on the direction of the shadow.
  • the current weight of each camera can be determined according to the angle of the light.
  • Step 102 Perform white balance processing on the currently acquired image according to the current weight of each camera.
  • each camera can be set to be in an open state during the shooting process, so that each camera can acquire an image. After each camera acquires the current image, white balance adjustment can be performed on each image, and each processed image is combined according to the current weight of each camera to generate a current captured image.
  • a camera with a weight of 0 may be set to be in a closed state, and a camera with a weight of not 0 may be in an activated state. Therefore, white balance adjustment is performed only on the image currently acquired by the camera whose weight is not 0, and each processed image is combined to generate a current captured image according to the current weight of each camera whose weight is not 0.
  • Specifically, the target white balance gain value corresponding to each camera can be determined according to the color temperature value corresponding to the image acquired by that camera, white balance processing is then performed on each image according to each target white balance gain value, and finally, according to the current weight of each camera, the processed images are combined to generate the current captured image.
  • In this way, the weight of each camera is adjusted according to the change of the shooting environment parameter, thereby ensuring that an image suitable for the current environment parameters can be obtained, avoiding the AWB jump phenomenon caused by camera switching, and improving the user experience.
  • the white balance processing method provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • the current weight of each camera corresponding to the current shooting environment parameter can be determined, so that the currently acquired image is white balanced according to the current weight of each camera.
  • the process of performing white balance processing on the currently acquired image according to the current weight of each camera will be specifically described below with reference to FIG. 2 .
  • FIG. 2 is a flow chart of a white balance processing method according to another embodiment of the present application.
  • the white balance processing method is applied to a terminal including at least two different types of cameras, and the method includes:
  • Step 201 During the shooting process, determine the current weight of each camera corresponding to the current shooting environment parameter.
  • For the specific implementation process and principle of the foregoing step 201, refer to the detailed description of step 101 in the foregoing embodiment; details are not described herein again.
  • Step 202 Determine color temperature values corresponding to respective images currently acquired by each camera.
  • the color temperature values corresponding to the respective images currently acquired by each camera may be determined by using various methods.
  • the color temperature value corresponding to the currently acquired image may be determined according to the color temperature value corresponding to each white block in the image currently acquired by each camera; or, according to the image currently acquired by each camera, the area corresponding to the target shooting object The color temperature value determines the color temperature value corresponding to the currently acquired image, and so on.
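  • As an illustration of the first option (estimating the color temperature from near-white blocks), the sketch below averages the chromaticity of blocks that look gray and maps it to a color temperature through a small calibration table; the gray test, block size, and calibration points are all assumptions made for illustration, not values from the embodiment:

```python
import numpy as np

# Assumed calibration: R/B ratio of a gray patch under light of known color temperature.
CALIBRATION = [(2800, 1.60), (4000, 1.15), (5000, 0.95), (6500, 0.75)]  # (kelvin, R/B)

def estimate_color_temperature(image, block=32, gray_tol=0.15):
    """Estimate a color temperature for an H x W x 3 float RGB image."""
    h, w, _ = image.shape
    ratios = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r, g, b = image[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            if g > 0 and abs(r / g - 1) < gray_tol and abs(b / g - 1) < gray_tol:
                ratios.append(r / max(b, 1e-6))      # near-gray block: keep its R/B ratio
    if not ratios:
        return 5000.0                                 # no white blocks found: assume daylight
    rb = float(np.mean(ratios))
    kelvins = [k for k, _ in CALIBRATION]
    rbs = [v for _, v in CALIBRATION]
    # R/B falls as the color temperature rises, so interpolate on the reversed axis.
    return float(np.interp(rb, rbs[::-1], kelvins[::-1]))
```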
  • Step 203 Determine, according to each color temperature value, a target white balance gain value corresponding to each camera.
  • The target white balance gain value is used to adjust the currently acquired image so that the colors in the image are accurately reproduced.
  • the target white balance gain value may include a target white balance gain value of three channels R, G, and B in the image acquired by the image sensor.
  • the color cast directions of the respective images can be determined, thereby calculating the target white balance gain values.
  • calculation, table lookup or iteration method may be used to calculate the target white balance gain values corresponding to the respective cameras.
  • Based on the assumption that the averages of the three components R, G, and B of the color vectors of all pixels tend to be balanced (1:1:1), a weighted gray world algorithm can be used to obtain a more accurate target white balance gain value.
  • Specifically, the image currently acquired by each camera may be divided into several sub-blocks, and the color vectors of all pixels in each sub-block are obtained, with each pixel represented by an (R, G, B) color vector. The average and the standard deviation of the three channels R, G, and B in each sub-block are then calculated, and the sub-blocks are weighted by their standard deviations (sub-blocks with low correlation are discarded and sub-blocks with high correlation are retained), so as to reduce the influence of large single-color blocks and keep the image colorful. Finally, the averages of the three channels R, G, and B weighted by the standard deviations are calculated, and the gain coefficients of the three channels R, G, and B are obtained, that is, the target white balance gain value corresponding to the camera.
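  • A compact sketch of the weighted gray world computation described above; the sub-block size and the rule used here for discarding low-information blocks (a standard deviation cutoff) are assumptions for illustration:

```python
import numpy as np

def gray_world_gains(image, block=64, min_std=2.0):
    """Compute (gain_R, gain_G, gain_B) with a weighted gray world method.

    image is an H x W x 3 float RGB array; sub-blocks are weighted by their
    standard deviation so that large single-color regions contribute little.
    """
    h, w, _ = image.shape
    means, weights = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block].reshape(-1, 3)
            std = patch.std(axis=0).mean()
            if std < min_std:              # discard flat, single-color sub-blocks
                continue
            means.append(patch.mean(axis=0))
            weights.append(std)
    if not means:
        return np.ones(3)                  # degenerate image: no correction
    avg = np.average(np.array(means), axis=0, weights=np.array(weights))
    gray = avg.mean()                      # target: balanced R:G:B of 1:1:1
    return gray / avg                      # per-channel gain coefficients
```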
  • Step 204 Perform white balance processing on each image by using each target white balance value, and acquire each processed image.
  • Then, the R value and the B value of each pixel in the image currently acquired by each camera are adjusted according to the calculated target white balance gain values, thereby realizing color correction of each image.
  • Since the human eye is most sensitive to light in the green wavelengths of the spectrum (480 nm-600 nm), and the Bayer array collects the largest number of green pixels, current cameras usually keep the gain value of the green component fixed and then adjust the gain values of the red component and the blue component, respectively, to achieve adjustment of the red and blue components.
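  • Applying the gains with the green component held fixed, as described above, amounts to scaling only the R and B channels relative to G (the 8-bit clipping below is an assumption about the output format):

```python
import numpy as np

def apply_white_balance(image, gains):
    """Scale R and B by their gains relative to G, keeping the green gain fixed."""
    gain_r, gain_g, gain_b = gains
    out = image.astype(np.float32).copy()
    out[:, :, 0] *= gain_r / gain_g        # red channel
    out[:, :, 2] *= gain_b / gain_g        # blue channel
    return np.clip(out, 0, 255).astype(np.uint8)
```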
  • Step 205 Synthesize each processed image to generate a current captured image according to the current weight of each camera.
  • Since the color saturation of the processed images may differ, in the embodiment of the present application, different synthesis modes are used to combine the processed images into the current captured image according to the color saturation of the processed images.
  • the target composition mode can be determined based on the color saturation of each of the processed images, and the processed images can be combined to generate the current captured image based on the current weight of each camera based on the target composition mode.
  • the target synthesis mode is used to indicate a specific manner when the processed images are combined to generate a current captured image.
  • Specifically, a color saturation threshold may be preset, so that the corresponding target synthesis mode is determined according to the relationship between the color saturation of each processed image and the preset color saturation threshold; based on the target synthesis mode, the processed images are then combined according to the current weight of each camera.
  • For example, the color saturation threshold may be preset to a value close to 0, so that each processed image is classified as either a grayscale image or a color image. When the color saturation of each processed image is determined to be less than the threshold, that is, when each image is a grayscale image, the processed images may be combined to generate the current captured image by using a grayscale weighted average method according to the current weight of each camera.
  • Assuming that the synthesized image is F, the image may be synthesized by the following equation: F(i,j) = w_1(i,j)N_1(i,j) + w_2(i,j)N_2(i,j) + ... + w_M(i,j)N_M(i,j), where F(i,j) is the value of the (i,j)th point of the synthesized image, N_m(i,j) is the value of the (i,j)th point of the m-th processed image, and w_m(i,j) is the current weight of the m-th camera at the (i,j)th point.
  • When it is determined that the color saturation of each processed image is greater than the threshold, that is, when each image is a color image, each image may be regarded, according to the three primary color model, as the superposition of three monochrome images (red, green, and blue). The monochrome images of each color are synthesized according to the current weight of each camera, and the three synthesized red, green, and blue monochrome images are then superimposed to generate the current captured image.
  • Assuming that the synthesized image is F, the images may be synthesized by the following equations:
  • F_R(i,j) = w_1(i,j)N_1R(i,j) + w_2(i,j)N_2R(i,j) + ... + w_M(i,j)N_MR(i,j);
  • F_G(i,j) = w_1(i,j)N_1G(i,j) + w_2(i,j)N_2G(i,j) + ... + w_M(i,j)N_MG(i,j);
  • F_B(i,j) = w_1(i,j)N_1B(i,j) + w_2(i,j)N_2B(i,j) + ... + w_M(i,j)N_MB(i,j),
  • where R, G, and B respectively represent the three primary colors red, green, and blue; F_R(i,j), F_G(i,j), and F_B(i,j) are the values of the R, G, and B channels of the (i,j)th point of the synthesized image, respectively; w_m(i,j) is the current weight of the m-th camera at the (i,j)th point; and N_mR(i,j), N_mG(i,j), and N_mB(i,j) are the R, G, and B channel values of the (i,j)th point of the m-th processed image.
  • Alternatively, when the color saturation of each processed image is greater than the threshold, that is, when each image is a color image, each image may first be converted into a grayscale image, and the processed images are then combined by the grayscale weighted average method according to the current weight of each camera to generate the current captured image.
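  • For both the grayscale case and the per-channel color case, the weighted synthesis of the equations above reduces to a per-pixel weighted sum over the processed images; the sketch below assumes the per-camera weights are either scalars or per-pixel weight maps that sum to 1:

```python
import numpy as np

def synthesize(images, weights):
    """F(i,j) = sum over m of w_m(i,j) * N_m(i,j), applied per channel for color images.

    images: list of M processed images (H x W or H x W x 3 arrays).
    weights: list of M scalars or H x W weight maps summing to 1 per pixel.
    """
    result = np.zeros_like(images[0], dtype=np.float32)
    for img, w in zip(images, weights):
        w = np.asarray(w, dtype=np.float32)
        if img.ndim == 3 and w.ndim == 2:
            w = w[:, :, None]              # broadcast a per-pixel weight map over channels
        result += w * img.astype(np.float32)
    return np.clip(result, 0, 255).astype(np.uint8)
```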
  • The white balance processing method provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter, then determines the color temperature values corresponding to the images currently acquired by each camera, determines the target white balance gain value corresponding to each camera according to the color temperature values, performs white balance processing on each image by using each target white balance gain value to acquire the processed images, and finally combines the processed images according to the current weight of each camera to generate the current captured image. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameters, it is ensured that an image suitable for the current environment parameters can be obtained, the AWB jump phenomenon caused by camera switching is avoided, and the user experience is improved.
  • Fig. 3 is a structural diagram of a white balance processing apparatus according to an embodiment of the present application.
  • the white balance processing apparatus is applied to a terminal including at least two different types of cameras, and the apparatus includes:
  • a first determining module 31, configured to determine the current weight of each camera corresponding to the current shooting environment parameter during the shooting process; and
  • a processing module 32, configured to perform white balance processing on the currently acquired image according to the current weight of each camera.
  • the white balance processing apparatus can perform the white balance processing method provided by the embodiment of the present application.
  • the white balance processing device can be configured in any terminal having at least two different types of cameras. Among them, there are many types of terminals, which can be selected according to application needs, such as mobile phones, computers, cameras, and the like.
  • the first determining module 31 is specifically configured to:
  • the current weight of each camera is determined based on the current zoom factor, object distance, and/or light source conditions.
  • the processing module 32 includes:
  • a first determining unit configured to determine color temperature values corresponding to each image currently acquired by each camera
  • a second determining unit configured to determine, according to the color temperature values, respective target white balance gain values corresponding to the cameras
  • An acquiring unit configured to perform white balance processing on each image by using each target white balance value, and acquire each processed image
  • the synthesizing unit is configured to synthesize the processed images to generate a current captured image according to the current weight of each camera.
  • the foregoing first determining unit is specifically configured to:
  • processing module 32 further includes:
  • a third determining unit configured to determine a target synthesis mode according to the color saturation of each processed image
  • the processed images are combined to generate a current captured image according to the current weight of each camera.
  • the foregoing third determining unit is specifically configured to:
  • the processed images are combined to generate a current captured image by using a gray weighted average method according to the current weight of each camera;
  • the monochrome images of each color are synthesized according to the three primary color model; and the three synthesized monochrome images are superimposed to generate the current captured image.
  • the white balance processing device provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • Fig. 4 is a structural diagram of a white balance processing apparatus according to another embodiment of the present application.
  • the white balance processing apparatus further includes:
  • a second determining module 41 configured to determine the current zoom factor according to the acquired zoom instruction or image adjustment instruction
  • And/or configured to determine a current object distance according to an output value of the distance sensor or depth information included in the currently acquired image
  • the light source condition is an angle of the light
  • the foregoing second determining module 41 is further configured to:
  • the white balance processing device provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • a further embodiment of the present application further provides a terminal.
  • FIG. 5 is a structural diagram of a terminal according to an embodiment of the present application.
  • Figure 5 shows the terminal as a mobile phone.
  • the terminal includes a housing 501, a processor 502, a memory 503, a circuit board 504, a power supply circuit 505, and at least two different types of cameras, which are illustrated by including two cameras 506 and 507.
  • The circuit board 504 is disposed inside the space enclosed by the housing 501, and the processor 502 and the memory 503 are disposed on the circuit board 504; the power supply circuit 505 is used to supply power to each circuit or device of the terminal; the memory 503 is used for storing executable program code; and the processor 502 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 503, so as to perform the white balance processing method of the foregoing embodiments. The white balance processing method includes:
  • during the shooting process, determining the current weight of each camera corresponding to the current shooting environment parameter; and
  • performing white balance processing on the currently acquired image according to the current weight of each camera.
  • the terminal provided by the embodiment of the present application first determines the current weight of each camera corresponding to the current shooting environment parameter during the shooting process, and then performs white balance processing on the currently acquired image according to the current weight of each camera. Therefore, by adjusting the weight of each camera according to the change of the shooting environment parameter, it is ensured that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • Still another embodiment of the present application proposes a computer readable storage medium having stored thereon a computer program that, when executed by the processor, implements a white balance processing method as in the foregoing embodiments.
  • the computer readable storage medium provided by the embodiment of the present application may be disposed in any terminal that includes at least two different types of cameras and needs white balance adjustment, and the white balance processing method stored thereon is executed when white balance adjustment is performed.
  • the weight of each camera can be adjusted according to the change of the shooting environment parameters, thereby ensuring that an image suitable for the current environmental parameters can be obtained, and the AWB jump phenomenon caused by the camera switching is avoided, and the user experience is improved.
  • a further aspect of the present application is directed to a computer program product that, when executed by a processor, executes a white balance processing method as in the previous embodiments.
  • the computer program product provided by the embodiment of the present application can be set in any terminal that includes at least two different types of cameras and needs to perform white balance adjustment. When the program corresponding to the white balance processing method is executed, the weight of each camera is adjusted according to changes in the shooting environment parameters, thereby ensuring that an image suitable for the current environment parameters can be obtained, avoiding the AWB jump phenomenon caused by camera switching, and improving the user experience.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • Computer readable media include, for example, the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM).
  • The computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, it can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The present invention relates to a white balance processing method and apparatus, and a terminal, the method being applied to a terminal comprising at least two different types of cameras, and comprising: during the shooting process, determining the current weight of each camera corresponding to current shooting environment parameters; and performing white balance processing on a currently acquired image according to the current weight of each camera. Thus, by adjusting the weight of each camera according to changes in the shooting environment parameters, not only is it ensured that an image suited to the current environment parameters can be obtained, but an automatic white balance (AWB) jump caused by camera switching is also avoided, thereby improving the user experience.
PCT/CN2018/094971 2017-07-25 2018-07-09 Procédé et appareil de traitement d'équilibrage de blancs, et terminal WO2019019904A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710612896.2A CN107371007A (zh) 2017-07-25 2017-07-25 白平衡处理方法、装置和终端
CN201710612896.2 2017-07-25

Publications (1)

Publication Number Publication Date
WO2019019904A1 true WO2019019904A1 (fr) 2019-01-31

Family

ID=60307865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094971 WO2019019904A1 (fr) 2017-07-25 2018-07-09 Procédé et appareil de traitement d'équilibrage de blancs, et terminal

Country Status (2)

Country Link
CN (1) CN107371007A (fr)
WO (1) WO2019019904A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278031A (zh) * 2022-07-29 2022-11-01 盛泰光电科技股份有限公司 基于高光谱的摄像头模组及其图像处理方法和应用

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107371007A (zh) * 2017-07-25 2017-11-21 广东欧珀移动通信有限公司 白平衡处理方法、装置和终端
CN108182667B (zh) * 2017-12-29 2020-07-17 珠海大横琴科技发展有限公司 一种图像优化方法、终端及计算机可读存储介质
CN108242178B (zh) * 2018-02-26 2021-01-15 北京车和家信息技术有限公司 一种车位检测方法、装置及电子设备
CN108965835B (zh) * 2018-08-23 2019-12-27 Oppo广东移动通信有限公司 一种图像处理方法、图像处理装置及终端设备
CN111713096A (zh) * 2019-06-20 2020-09-25 深圳市大疆创新科技有限公司 增益系数的获取方法和装置
CN115484384B (zh) * 2021-09-13 2023-12-01 华为技术有限公司 控制曝光的方法、装置与电子设备
CN116761082B (zh) * 2023-08-22 2023-11-14 荣耀终端有限公司 图像处理方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387373A (zh) * 2010-09-02 2012-03-21 佳能株式会社 图像处理设备和图像处理方法
CN105208360A (zh) * 2015-09-23 2015-12-30 青岛海信移动通信技术股份有限公司 一种智能终端的图像预览方法、装置及终端
US20160094825A1 (en) * 2014-09-30 2016-03-31 Acer Incorporated Method for selecting metering mode and image capturing device thereof
CN106713887A (zh) * 2017-01-03 2017-05-24 捷开通讯(深圳)有限公司 移动终端及白平衡调节方法
CN107371007A (zh) * 2017-07-25 2017-11-21 广东欧珀移动通信有限公司 白平衡处理方法、装置和终端

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4284448B2 (ja) * 2005-01-28 2009-06-24 富士フイルム株式会社 画像処理装置及び方法
JP4217698B2 (ja) * 2005-06-20 2009-02-04 キヤノン株式会社 撮像装置及び画像処理方法
CN102572206B (zh) * 2010-12-31 2015-05-13 比亚迪股份有限公司 一种色彩校正方法
CN103402102B (zh) * 2013-07-17 2015-12-09 广东欧珀移动通信有限公司 双摄像头摄像系统及其白平衡调节的方法与装置
CN104853172B (zh) * 2014-02-19 2017-11-07 联想(北京)有限公司 一种信息处理方法以及一种电子设备
CN105208364A (zh) * 2014-06-27 2015-12-30 联想(北京)有限公司 一种图像白平衡控制方法、装置及电子设备
CN105657392A (zh) * 2015-04-30 2016-06-08 宇龙计算机通信科技(深圳)有限公司 一种图像处理方法及电子设备
CN105898260B (zh) * 2016-04-07 2018-01-19 广东欧珀移动通信有限公司 一种调节摄像头白平衡的方法及装置
CN106385541A (zh) * 2016-09-30 2017-02-08 虹软(杭州)科技有限公司 利用广角摄像组件及长焦摄像组件实现变焦的方法
CN106791732A (zh) * 2016-11-30 2017-05-31 努比亚技术有限公司 一种图像处理方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387373A (zh) * 2010-09-02 2012-03-21 佳能株式会社 图像处理设备和图像处理方法
US20160094825A1 (en) * 2014-09-30 2016-03-31 Acer Incorporated Method for selecting metering mode and image capturing device thereof
CN105208360A (zh) * 2015-09-23 2015-12-30 青岛海信移动通信技术股份有限公司 一种智能终端的图像预览方法、装置及终端
CN106713887A (zh) * 2017-01-03 2017-05-24 捷开通讯(深圳)有限公司 移动终端及白平衡调节方法
CN107371007A (zh) * 2017-07-25 2017-11-21 广东欧珀移动通信有限公司 白平衡处理方法、装置和终端

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278031A (zh) * 2022-07-29 2022-11-01 盛泰光电科技股份有限公司 基于高光谱的摄像头模组及其图像处理方法和应用

Also Published As

Publication number Publication date
CN107371007A (zh) 2017-11-21

Similar Documents

Publication Publication Date Title
WO2019019904A1 (fr) Procédé et appareil de traitement d'équilibrage de blancs, et terminal
CN110445988B (zh) 图像处理方法、装置、存储介质及电子设备
CN109644224B (zh) 用于捕获数字图像的系统和方法
US11729518B2 (en) Systems and methods for digital photography
WO2020034737A1 (fr) Procédé de commande d'imagerie, appareil, dispositif électronique et support d'informations lisible par ordinateur
JP4236433B2 (ja) 写真撮影においてフィル・フラッシュをシミュレートするシステム及び方法
US9288392B2 (en) Image capturing device capable of blending images and image processing method for blending images thereof
US9148561B2 (en) Image capturing apparatus, executable autoexposure bracketing and control method thereof
CN111028190A (zh) 图像处理方法、装置、存储介质及电子设备
CN110445989B (zh) 图像处理方法、装置、存储介质及电子设备
US20100103194A1 (en) Method and system for fusing images
US20140176757A1 (en) Color balance in digital photography
KR20130138340A (ko) 높은 다이나믹 레인지 이미지들을 이용한 화이트 밸런스 최적화
JP6685188B2 (ja) 撮像装置、画像処理装置及びそれらの制御方法、プログラム
KR20080109026A (ko) 노출 제어를 위한 시스템, 방법, 및 장치
CN110266954B (zh) 图像处理方法、装置、存储介质及电子设备
JP2006319830A (ja) 画像処理装置および撮像装置
WO2021143300A1 (fr) Procédé et appareil de traitement d'images, dispositif électronique et support de stockage
WO2020034701A1 (fr) Procédé et appareil de commande d'imagerie, dispositif électronique et support de stockage lisible
WO2022067762A1 (fr) Procédé et appareil de traitement d'image, dispositif de photographie, plateforme mobile et support de stockage lisible par ordinateur
CN105163047A (zh) 一种基于色彩空间转换的hdr图像生成方法、系统及拍摄终端
US11601600B2 (en) Control method and electronic device
WO2020034702A1 (fr) Procédé de commande, dispositif, équipement électronique et support d'informations lisible par ordinateur
JP2024502938A (ja) 画像処理のための高ダイナミックレンジ技法選択
CN116324882A (zh) 多相机系统中的图像信号处理

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18837954

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18837954

Country of ref document: EP

Kind code of ref document: A1