WO2022067762A1 - Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium - Google Patents

Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium

Info

Publication number
WO2022067762A1
Authority
WO
WIPO (PCT)
Prior art keywords
white balance
scene
image
processed
underwater
Prior art date
Application number
PCT/CN2020/119657
Other languages
English (en)
French (fr)
Inventor
吴伟霖
严毅民
王浩伟
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/119657 priority Critical patent/WO2022067762A1/zh
Publication of WO2022067762A1 publication Critical patent/WO2022067762A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from visible and infrared light wavelengths

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, an apparatus, a photographing device, a movable platform, and a computer-readable storage medium.
  • In special scenarios such as underwater environments, a camera's image processing may face many problems; for example, automatic white balance processing is challenging and may fail to restore colors accurately. To restore the authenticity of the scene, some underwater photography relies on filters to balance the R, G, and B channels of the image, but this approach obviously imposes many restrictions on underwater photography.
  • the present application provides an image processing method, an apparatus, a photographing device, a movable platform and a computer-readable storage medium, so as to solve the problems of poor white balance effect and many restrictions in the related art.
  • In a first aspect, an image processing method is provided, including:
  • acquiring first white balance data obtained by statistics in a first white balance statistical area of an image to be processed, and second white balance data obtained by statistics in a second white balance statistical area of the image to be processed, wherein the first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene; and
  • performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
  • In a second aspect, an image processing apparatus is provided, including a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the processor implements the following steps when executing the computer program:
  • acquiring first white balance data obtained by statistics in a first white balance statistical area of an image to be processed, and second white balance data obtained by statistics in a second white balance statistical area of the image to be processed, wherein the first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene; and
  • performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
  • In a third aspect, a shooting device is provided, including:
  • a housing;
  • a lens assembly arranged inside the housing;
  • a sensor assembly arranged inside the housing for sensing light passing through the lens assembly and generating electrical signals; and
  • the image processing apparatus according to the second aspect.
  • In a fourth aspect, a movable platform is provided, including:
  • a body;
  • a power system installed in the body for powering the movable platform; and
  • the image processing apparatus according to the second aspect.
  • In a fifth aspect, a computer-readable storage medium is provided, on which several computer instructions are stored; when the computer instructions are executed, the steps of the method of the first aspect are implemented.
  • FIG. 1A is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 1B is a schematic diagram of a white balance statistics area according to an embodiment of the present application.
  • FIG. 1C is a schematic diagram of a first white balance statistical area and a second white balance statistical area according to an embodiment of the present application.
  • FIG. 2A is a schematic diagram of an image processing method according to another embodiment of the present application.
  • FIG. 2B is a schematic diagram of an image processing method according to another embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a device for implementing the image processing method of this embodiment of the present application.
  • FIG. 4 is a block diagram of a movable platform according to an embodiment of the present application.
  • FIG. 5 is a block diagram of a camera according to an embodiment of the present application.
  • the human visual system has the characteristic of color constancy, that is, the human eye can adapt to different illuminations and restore the scene color under different illumination to the scene color under white light illumination. For example: when the sun rises in the morning, the human eye can perceive a white object as white; in the dim light at night, the human eye can still perceive a white object as white.
  • However, image acquisition devices (such as cameras) do not have the adaptability of the human eye; therefore, under different light environments, an image acquisition device will exhibit color reproduction distortion, that is, the image color may be reddish or bluish. For example, in a light environment with a low color temperature, the color of the image output by the camera is reddish; in a light environment with a high color temperature, the color of the image output by the camera is bluish.
  • To solve this problem, white balance processing needs to be performed on the image collected by the image acquisition device. The so-called white balance is the restoration of white by the device, that is, the images output by the device can correctly reproduce white objects in different light environments. The white balance process detects the illumination of the shooting scene and, taking the restored white as a basis, restores the colors of other objects, thereby eliminating the influence of illumination and restoring the image to the colors seen under white light.
  • At present, image acquisition devices generally have automatic white balance to restore the true colors of photographed objects. Automatic white balance enables the device to perform white balance correction automatically within a certain color temperature range, and for ordinary above-water scenes it can restore the colors of the real scene fairly well. However, for the special scenario of underwater shooting, automatic white balance processing faces certain challenges.
  • the spectrum near red has longer wavelengths, lower frequencies, and therefore lower energy.
  • the lower energy spectrum in water is easily absorbed by hydrogen and oxygen ions in water, resulting in less red spectrum in water.
  • the human eye's ability to perceive purple light is weak, so the underwater scene will appear bluish and green to the human eye.
  • the water quality in different places is different, and the absorption of the spectrum will also be different, which also leads to different deviations in the underwater color cast.
  • For underwater white balance processing, one solution is to provide a white balance processing mode for underwater scenes in the shooting device, but this requires the user to manually perform a switching operation so that the camera is in the underwater white balance processing mode. For example, when shooting an above-water scene the user sets the automatic white balance for above-water scenes, and when going underwater the user switches the shooting device to the underwater white balance processing mode and activates the device's underwater automatic white balance function.
  • Based on this, the present application provides an image processing solution, which obtains two sets of white balance data from statistics of the image to be processed, wherein the first white balance data is obtained through the white balance statistical area corresponding to the underwater scene, and the second white balance data is obtained through the white balance statistical area corresponding to the above-water scene. Since corresponding white balance data are obtained for the above-water scene and the underwater scene respectively, automatic white balance adjustment can be performed on the image to be processed based on the two sets of white balance data, without requiring the user to manually select a white balance processing mode; moreover, when continuous shooting involves switching between an above-water scene and an underwater scene, the solution of this embodiment adjusts the white balance of the image using the two sets of white balance data, so a smooth processing result can be obtained.
  • the solution of this embodiment can be applied to photographing devices such as cameras or video cameras; it can also be applied to electronic devices equipped with cameras, and the electronic devices here may include devices such as movable platforms or smart phones.
  • The camera has a built-in ISP (Image Signal Processing) unit, which is mainly used to process the output signal of the front-end image sensor. The ISP completes the effect processing of the digital image through a series of digital image processing algorithms, mainly including 3A (auto exposure, auto focus, auto white balance), dead pixel correction, denoising, glare suppression, backlight compensation, color enhancement, lens shading correction, and the like.
  • the solution of this embodiment can be applied to an ISP unit in a camera to realize automatic white balance processing of an image.
  • the image to be processed in this embodiment may be the original image raw collected by the built-in image sensor of the photographing device, or may be the image generated by the ISP unit in the process of processing the image, such as a YUV or RGB image.
  • the solution of this embodiment can also be applied to image processing software, which can run on tablet computers, smart phones, personal digital assistants (PDAs), laptop computers, desktop computers, or media content players, etc.
  • image processing software may apply the image processing method provided in this embodiment to perform white balance processing on a specified image.
  • FIG. 1A is a flowchart of an image processing method provided by an embodiment of the present application. The method includes the following steps:
  • step 102 first white balance data obtained by statistics in the first white balance statistical area of the image to be processed, and second white balance data obtained by statistics in the second white balance statistical area of the image to be processed are obtained.
  • The first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene.
  • step 104 automatic white balance adjustment is performed on the to-be-processed image according to the first white balance data and the second white balance data.
  • The white point/gray point in this embodiment (hereinafter, the white point is used as an example) refers to a pixel whose R channel color component, G channel color component, and B channel color component are equal after the gain correction processing mentioned in this application. For the specific gain correction processing, please refer to the subsequent description.
  • For computational efficiency, this embodiment divides the image into blocks, each block containing multiple pixels. For the pixels of each block, the luminance values of the R channel are accumulated and averaged, the luminance values of the G channel are accumulated and averaged, and the luminance values of the B channel are accumulated and averaged. Then the average value Gavg of the G channel is divided by the average value Ravg of the R channel to obtain the grayscale gain value Rgain of the R channel (that is, Rgain = Gavg/Ravg), and Gavg is divided by the average value Bavg of the B channel to obtain the grayscale gain value Bgain of the B channel (that is, Bgain = Gavg/Bavg); the grayscale gain value of the G channel is 1. Alternatively, the gains may be computed as Rgain = Ravg/Gavg and Bgain = Bavg/Gavg. A sketch of this per-block gain computation is given below.
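The following Python sketch is illustrative only and is not part of the patent disclosure; the block size, the use of NumPy, and the assumption of a normalized H x W x 3 RGB array are choices made for the example. It computes the per-block Rgain and Bgain values described above.

```python
# Per-block grayscale gains: Rgain = Gavg / Ravg, Bgain = Gavg / Bavg, Ggain = 1.
import numpy as np

def block_gains(rgb: np.ndarray, block: int = 32) -> np.ndarray:
    """rgb: H x W x 3 float array. Returns an (N, 2) array of (Rgain, Bgain) per block."""
    h, w, _ = rgb.shape
    gains = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r_avg, g_avg, b_avg = rgb[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            if r_avg < 1e-6 or b_avg < 1e-6:
                continue  # skip nearly black blocks to avoid division by zero
            gains.append((g_avg / r_avg, g_avg / b_avg))
    return np.asarray(gains, dtype=float).reshape(-1, 2)
```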
  • Having divided the image into blocks and obtained the Rgain and Bgain of each image block, it is next necessary to determine, based on the Rgain and Bgain of the image block, whether the block is a white point; specifically, this is determined using the configured white balance statistical area.
  • As shown in FIG. 1B, which is a schematic diagram of a white balance statistical area in this embodiment, the abscissa represents Rgain, the ordinate represents Bgain, and the gray part represents the statistical area, that is, the area belonging to white points. Specifically, whether the Rgain and Bgain of each block fall within the statistical area is checked: if they fall within the statistical area, the block is a white point; otherwise, the block is not a white point.
  • For the image blocks determined to be white points, a weighted average of their Rgain values yields a single Rgain value and a weighted average of their Bgain values yields a single Bgain value; blocks outside the statistical area are not white points and are not included in this computation. The weighted-average Rgain and Bgain values are then applied to the entire image: the three-channel components of each pixel/block are multiplied by the corresponding channel gain values to obtain the corrected three-channel component values. If the gains were defined as Rgain = G/R and Bgain = G/B, the corrected R channel component equals R multiplied by the weighted-average Rgain and the corrected B channel component equals B multiplied by the weighted-average Bgain; if the gains were defined as Rgain = R/G and Bgain = B/G, the corrected R channel component equals R divided by the weighted-average Rgain and the corrected B channel component equals B divided by the weighted-average Bgain. A sketch combining the white-point statistics and this correction step follows.
  • In the above, the white balance statistical area is described using the pixel gains Rgain and Bgain as an example; in practical applications, other gains may also be used, for example the two gains Rgain and Ggain, or the two gains Bgain and Ggain, may be used to determine the white balance statistical area.
  • For the problem of underwater and above-water scenes described above, the image processing solution of this embodiment may be preconfigured with a first white balance statistical area and a second white balance statistical area, wherein the first white balance statistical area corresponds to the underwater scene and the second white balance statistical area corresponds to the above-water scene.
  • both the first white balance statistics area and the second white balance statistics area may be preset empirical values.
  • the first white balance statistical area and the second white balance statistical area may be set by a user, for example, a setting function for the first white balance statistical area and the second white balance statistical area is provided on the photographing device, The user inputs information of the first white balance statistical area and the second white balance statistical area through the setting function provided by the photographing device, and the photographing device obtains the information input by the user and sets the first white balance statistical area and the second white balance statistical area accordingly.
  • FIG. 1C is a schematic diagram of a first white balance statistical area and a second white balance statistical area shown in this embodiment.
  • In the coordinate system shown in FIG. 1C, the left part is the same as that in FIG. 1B and shows the second white balance statistical area, while the rectangular box in the right part represents the first white balance statistical area. As can be seen from the figure, the first white balance statistical area and the second white balance statistical area differ significantly, which is determined by the characteristics of underwater scenes versus above-water scenes. For example, in some waters the red spectrum in the water is absorbed relatively strongly, and the shooting device collects less red light; reflected in the image, the red channel values of the image pixels are low, so the red gain of the corresponding white points will be relatively large, and as shown in FIG. 1C the Rgain of the first white balance statistical area is relatively large.
  • the water quality of different water areas is different, and the absorption of the spectrum is also different.
  • the corresponding first white balance statistical area may be set according to the characteristics of the actual water area, which is not limited in this embodiment.
  • the first white balance statistics area includes: a grayscale gain interval of the white point; the second white balance statistics area includes: a grayscale gain interval of the white point.
  • the grayscale gain interval of the white point includes: a red channel grayscale gain interval and a blue channel grayscale gain interval of the white point, that is, the white balance statistical region is determined by Rgain and Bgain.
  • In other examples, the grayscale gain interval of the white point may also be determined using the two gains Rgain and Ggain, or the two gains Bgain and Ggain, or four-channel gain values such as the four gains R, B, G_R, and G_B, or it may be determined based on parameters of other color spaces, which is not limited in this embodiment.
  • Because the red spectrum in some waters is absorbed relatively strongly and the photographing device collects less red light, which is reflected in the image as low red channel values, the red gain of the corresponding white points will be relatively large; accordingly, the red channel grayscale gain interval of the white point in the first white balance statistical area is greater than the red channel grayscale gain interval of the white point in the second white balance statistical area.
  • The solution of this embodiment sets corresponding white balance statistical areas for the underwater scene and the above-water scene respectively, so that white balance statistics corresponding to the underwater scene and the above-water scene can be obtained respectively.
  • In some examples, the first white balance data is determined using the white points in the image to be processed that belong to the underwater scene, and the second white balance data is determined using the white points in the image to be processed that belong to the above-water scene.
  • The process of determining the first white balance data may be as follows: for the pixels of the image, the calculated grayscale gain of each pixel is compared with the grayscale gains of white points represented by the first white balance statistical area; if the grayscale gain of an image pixel falls within the first white balance statistical area, the pixel can be regarded as a white point. On this basis, white points can be counted from the image to be processed, and the counted white points are the white points belonging to the underwater scene. Using the counted white points, the final first white balance data can be calculated; for example, the grayscale gains of the counted white points are used to compute an underwater-scene-based grayscale gain for the entire image, which serves as the first white balance data.
  • Similarly, the process of determining the second white balance data may be as follows: for the pixels of the image to be processed, the calculated grayscale gain of each pixel is compared with the grayscale gains of white points represented by the second white balance statistical area; if the grayscale gain of an image pixel falls within the second white balance statistical area, the pixel can be regarded as a white point. On this basis, white points can be counted from the image to be processed, and these are the white points belonging to the above-water scene. Using the counted white points, the final second white balance data can be calculated; for example, the grayscale gains of the counted white points are used to compute an above-water-scene-based grayscale gain for the entire image, which serves as the second white balance data. A sketch of this dual-area statistics step is given below.
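The sketch below, again illustrative and using assumed rectangular areas, performs the statistics separately in the two areas and also returns the white-point ratio that is later used as the white balance statistical parameter.

```python
# Dual-area statistics: first (underwater) and second (above-water) white balance data.
import numpy as np

def dual_area_statistics(gains, underwater_region, abovewater_region):
    """gains: (N, 2) array of per-block (Rgain, Bgain) values."""
    def stats(region):
        if len(gains) == 0:
            return None, 0.0
        r_lo, r_hi, b_lo, b_hi = region
        mask = ((gains[:, 0] >= r_lo) & (gains[:, 0] <= r_hi) &
                (gains[:, 1] >= b_lo) & (gains[:, 1] <= b_hi))
        ratio = float(mask.mean())                       # white-point ratio
        wb = gains[mask].mean(axis=0) if mask.any() else None
        return wb, ratio

    first_wb, first_ratio = stats(underwater_region)     # first white balance data
    second_wb, second_ratio = stats(abovewater_region)   # second white balance data
    return (first_wb, first_ratio), (second_wb, second_ratio)
```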
  • In this embodiment, the first white balance data or the second white balance data may be three-channel gain values of the image, such as Rgain, Bgain, and Ggain, or four-channel gain values, or white balance data for correcting pixels based on other color spaces, which is not limited in this embodiment.
  • The solution of this embodiment thus obtains two sets of white balance data from statistics of the image to be processed, in which the first white balance data is obtained through the white balance statistical area corresponding to the underwater scene and the second white balance data is obtained through the white balance statistical area corresponding to the above-water scene. Since corresponding white balance data are obtained for the above-water scene and the underwater scene respectively, automatic white balance adjustment can be performed on the image to be processed based on these two sets of white balance data without requiring the user to manually select a white balance processing mode; and when continuous shooting involves switching between an above-water scene and an underwater scene, the solution of this embodiment adjusts the white balance of the image using the two sets of white balance data, so a smooth processing result can be obtained.
  • the automatic white balance adjustment of the image to be processed may be implemented in various ways.
  • the first white balance data and the second white balance data may be fused; automatic white balance adjustment is performed on the to-be-processed image by using the fused white balance data.
  • For example, fusion ratios for the first white balance data and the second white balance data can be determined respectively, and the first white balance data and the second white balance data are fused according to these ratios; the fusion ratios corresponding to the first white balance data and the second white balance data can be flexibly configured as needed.
  • In other examples, only the first white balance data may be used to perform automatic white balance adjustment on the image to be processed, for example when it is determined that the current shooting scene is very likely underwater; or only the second white balance data may be used to perform automatic white balance adjustment on the image to be processed, for example when it is determined that the current shooting scene is very likely above water.
  • In some examples, the fusion may be performed by obtaining an underwater scene confidence and an above-water scene confidence; for example, the first white balance data and the second white balance data are fused using the underwater scene confidence and/or the above-water scene confidence.
  • The underwater scene confidence represents the confidence that the current shooting scene is underwater, and the above-water scene confidence represents the confidence that the current shooting scene is above water. On this basis, the higher the underwater scene confidence, the higher the fusion weight of the first white balance data and the lower the fusion weight of the second white balance data; the higher the above-water scene confidence, the lower the fusion weight of the first white balance data and the higher the fusion weight of the second white balance data. A sketch of such a confidence-weighted fusion is given below.
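A sketch of one possible confidence-weighted fusion; the normalization of the two confidences into a single weight is an assumption, as the text only requires that a higher underwater scene confidence gives the first white balance data a larger weight.

```python
# Fuse the two sets of white balance gains with confidence-derived weights.
import numpy as np

def fuse_wb(first_wb, second_wb, underwater_conf, abovewater_conf):
    """first_wb / second_wb: (Rgain, Bgain) values or None; confidences >= 0."""
    if first_wb is None:
        return None if second_wb is None else np.asarray(second_wb, dtype=float)
    if second_wb is None:
        return np.asarray(first_wb, dtype=float)
    total = underwater_conf + abovewater_conf
    w_under = underwater_conf / total if total > 0 else 0.5
    # Higher underwater confidence -> larger weight for the first (underwater) data.
    return (w_under * np.asarray(first_wb, dtype=float) +
            (1.0 - w_under) * np.asarray(second_wb, dtype=float))
```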
  • In this embodiment, the underwater scene confidence and the above-water scene confidence can be determined in various ways.
  • As an example, environmental parameters of the shooting scene can be obtained and the scene confidence can be determined from the perspective of the environment, because above-water and underwater scenes differ considerably from an environmental point of view: for example, an above-water scene has air pressure while an underwater scene has water pressure, so the shooting scene can be determined from the detected air pressure or water pressure; or, in an underwater scene, water absorbs the photons emitted by a distance sensor, so the distance values collected by the distance sensor are low, and so on.
  • In this embodiment, the environmental parameters of the shooting scene represent the probability that the scene is underwater, and the environmental parameters may be determined based on the differences between the underwater scene and the above-water scene.
  • the environmental parameter may be determined by one or more of a distance measured by a distance sensor, a pressure value measured by a pressure sensor, or a depth measured by a depth gauge.
  • the distance sensor may be a distance sensor using infrared light as a medium, such as a 3D ToF (Time of flight) module or an infrared ranging sensor.
  • This type of distance sensor can emit infrared light, and after irradiating the object, it is reflected to the sensor to receive the signal, and the distance to the object is calculated after signal processing.
  • In an underwater scene, because the photons emitted by the distance sensor are absorbed by the water, the distance value output by the distance sensor is small; the smaller the distance value, the higher the probability that the current shooting is in an underwater scene. The reliability of the environmental parameter determined from this distance is high, and it can represent the probability of an underwater scene relatively accurately.
  • A pressure sensor is a device that senses a pressure signal and converts it, according to certain rules, into a usable electrical output signal.
  • In this embodiment, a pressure sensor can be used to measure the pressure value in the water, and the measured pressure value is used to determine the environmental parameter: the larger the pressure value, the higher the probability that the current shooting is in an underwater scene.
  • A depth gauge is a type of instrument that measures underwater depth based on principles, such as sound waves or pressure, that can provide information on the thickness of the water body.
  • In this embodiment, a depth gauge can be used to measure the underwater depth, and the measured depth is used to determine the environmental parameter: the greater the depth, the higher the probability that the current shooting is in an underwater scene.
  • In some examples, the photographing device can be equipped with the above-mentioned devices such as a distance sensor, a pressure sensor, or a depth gauge, and the processor of the photographing device can communicate with these devices to obtain the data they measure.
  • some shooting devices with depth information collection function have built-in 3D ToF modules, and the acquisition of environmental parameters can be achieved through the built-in 3D ToF modules of the shooting devices.
  • On the other hand, the scene confidence in this embodiment can also be determined by analyzing the image. For example, when in an above-water scene, statistics performed with the matching white balance statistical area yield matching statistical data, whereas statistics performed with a non-matching statistical area may yield very little data, or possibly no data at all. For example, if statistics are performed on an underwater image using the second white balance statistical area for the above-water scene shown in FIG. 1B, no white points may be obtained. Therefore, in this embodiment, the scene confidence can also be determined by analyzing the image.
  • In some examples, the underwater scene confidence is determined by one or more of the following parameters: an environmental parameter of the shooting scene corresponding to the image to be processed, or a first white balance statistical parameter obtained by statistics of the image to be processed in the first white balance statistical area. The higher the probability of an underwater scene represented by the environmental parameter, or the higher the first white balance statistical parameter, the higher the underwater scene confidence.
  • As an example, taking the distance collected by the camera's 3D ToF module as the environmental parameter, the smaller the collected distance, the higher the probability of an underwater scene and the higher the underwater scene confidence. In some examples, a threshold can be set as required: if the distance collected by the 3D ToF module is less than the threshold, the probability of an underwater scene can be considered higher; if it is greater than the threshold, the probability of an underwater scene can be considered lower.
  • the underwater scene confidence may be determined only according to any one of the environmental parameters or the first white balance statistical parameter, or the underwater scene confidence may be determined in combination with the environmental parameters and the first white balance statistical parameter.
  • The specific determination manner can be flexibly configured as needed, which is not limited in this embodiment. For example, when the underwater scene confidence is determined by combining the environmental parameter and the first white balance statistical parameter, respective weights can be set for the two, and the environmental parameter and the first white balance statistical parameter are weighted and averaged using the determined weights to obtain the underwater scene confidence.
  • In other examples, the above-water scene confidence is determined by one or more of the following parameters: an environmental parameter of the shooting scene corresponding to the image to be processed, or a second white balance statistical parameter obtained by statistics of the image to be processed in the second white balance statistical area. The lower the probability of an underwater scene represented by the environmental parameter, or the higher the second white balance statistical parameter, the higher the above-water scene confidence.
  • Still taking the camera's 3D ToF module as an example, the larger the collected distance, the lower the probability of an underwater scene and the higher the above-water scene confidence. In some examples, a threshold can be set as required: if the distance collected by the 3D ToF module is greater than the threshold, the probability of an above-water scene can be considered higher; if it is less than the threshold, the probability of an above-water scene can be considered lower.
  • In practical applications, the above-water scene confidence may be determined from either the environmental parameter or the second white balance statistical parameter alone, or from a combination of the environmental parameter and the second white balance statistical parameter.
  • The specific determination manner can be flexibly configured as needed, which is not limited in this embodiment. For example, when the above-water scene confidence is determined by combining the environmental parameter and the second white balance statistical parameter, respective weights can be set for the two, and the environmental parameter and the second white balance statistical parameter are weighted and averaged using the determined weights to obtain the above-water scene confidence. A sketch of such a weighted combination is given below.
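The sketch below combines an environmental term with the white-point-ratio statistical parameter using fixed weights; the distance-to-probability mapping, the threshold value, and the 0.5/0.5 weights are placeholders, not values taken from the patent.

```python
# Scene confidences from a ToF distance and the two white-point ratios.
def scene_confidences(tof_distance_m, first_ratio, second_ratio,
                      distance_threshold_m=0.5, w_env=0.5, w_stat=0.5):
    # Environmental term: a short ToF distance suggests the camera is underwater.
    p_underwater_env = 1.0 if tof_distance_m < distance_threshold_m else 0.0
    underwater_conf = w_env * p_underwater_env + w_stat * first_ratio
    abovewater_conf = w_env * (1.0 - p_underwater_env) + w_stat * second_ratio
    return underwater_conf, abovewater_conf
```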
  • In some examples, the first white balance statistical parameter and the second white balance statistical parameter may be obtained along the way in the aforementioned process of collecting the white balance data, because collecting the white balance data requires using the white balance statistical areas to count the white points in the image, and the white balance data is obtained based on the counted white points; on this basis, the counted white points can be used to analyze the above-water scene confidence or the underwater scene confidence from the perspective of the image.
  • the first white balance statistical parameter may include: related parameters of white points belonging to an underwater scene in the image to be processed.
  • The second white balance statistical parameter may include: parameters related to the white points belonging to the above-water scene in the image to be processed.
  • The parameters related to the white points include: the proportion of white points, for example the ratio of white points to non-white points, or the ratio of white points to all pixels.
  • the automatic white balance adjustment of the image is performed by using the first white balance data and the second white balance data, and a better processing effect can be obtained.
  • In particular, in a continuous shooting scenario, the camera faces scene switches, such as coming out of the water or going down from the water surface; if the camera cannot perceive whether it is currently above or under the water, the processing effect will be poor.
  • In this embodiment, the environmental parameters and the white balance statistical parameters are used to determine the underwater scene confidence and the above-water scene confidence respectively, so that scene changes can be sensed, and the first white balance data and the second white balance data are then used accordingly for automatic white balance adjustment of the image.
  • When the underwater scene confidence is high, the first white balance data dominates and the corresponding automatic white balance adjustment of the image matches the underwater scene; when the above-water scene confidence is high, the second white balance data dominates and the corresponding automatic white balance adjustment of the image matches the above-water scene.
  • FIG. 2A it is a schematic flowchart of another image processing method shown in this embodiment.
  • the image processing method in this embodiment is described by taking the application to a camera as an example.
  • the camera has a built-in image sensor and a distance sensor; the distance sensor in this embodiment adopts a 3D Tof module built in the camera to obtain the environmental parameters of the shooting scene.
  • In step 211, the image sensor acquires an image.
  • The distance sensor collects environmental parameters. In this embodiment, the distance sensor may be a 3D ToF module built into or externally connected to the camera, and the environmental parameter is the distance collected by the 3D ToF module. In an above-water scene, the 3D ToF module can collect the distance to an object; in an underwater scene, the output value of the ToF module is very small because water absorbs the photons it emits. On this basis, the smaller the distance value collected by the 3D ToF module, the greater the probability that the camera is currently in an underwater scene. As an example, a threshold can be set, and the probability that the camera is currently in an underwater scene can be determined from the environmental parameter by comparing the collected distance with the preset threshold.
  • the ISP unit of the camera can count the white balance statistical data on the image;
  • The specific statistical process includes the underwater-area statistics step 2131 and the above-water-area statistics step 2132:
  • The underwater-area statistics step 2131 may include: counting the first white balance data and obtaining the first white balance statistical parameter (including the white point ratio);
  • The above-water-area statistics step 2132 may include: counting the second white balance data and obtaining the second white balance statistical parameter (including the white point ratio).
  • In step 214, the underwater scene confidence is determined; in this embodiment, the underwater scene confidence may be determined from the environmental parameter and the first white balance statistical parameter.
  • In step 215, the above-water scene confidence is determined; in this embodiment, the above-water scene confidence may be determined from the environmental parameter and the second white balance statistical parameter.
  • In step 216, the first white balance data and the second white balance data are fused according to the underwater scene confidence and the above-water scene confidence.
  • In step 217, the white balance result is output. An end-to-end sketch of this flow, reusing the helper functions from the earlier sketches, is given below.
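An end-to-end sketch of the flow of FIG. 2A under the assumptions used in the previous sketches; the helper functions block_gains, dual_area_statistics, scene_confidences, and fuse_wb are the hypothetical ones defined above, not functions of any real camera SDK.

```python
# FIG. 2A flow: statistics in both areas, confidences, fusion, then correction.
import numpy as np

def process_frame(rgb, tof_distance_m, underwater_region, abovewater_region):
    gains = block_gains(rgb)                                            # image statistics
    (first_wb, r1), (second_wb, r2) = dual_area_statistics(
        gains, underwater_region, abovewater_region)                    # steps 2131 / 2132
    under_conf, above_conf = scene_confidences(tof_distance_m, r1, r2)  # steps 214 / 215
    fused = fuse_wb(first_wb, second_wb, under_conf, above_conf)        # step 216
    if fused is None:
        return rgb                                                      # no white points found
    corrected = rgb.copy()
    corrected[..., 0] *= fused[0]                                       # apply fused Rgain
    corrected[..., 2] *= fused[1]                                       # apply fused Bgain
    return np.clip(corrected, 0.0, 1.0)                                 # step 217: output
```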
  • the automatic white balance adjustment of the image is performed by using the first white balance data and the second white balance data, and a better processing effect can be obtained.
  • In particular, in a continuous shooting scenario, the camera faces scene switches, such as coming out of the water or going down from the water surface; if the camera cannot perceive whether it is currently above or under the water, the processing effect will be poor.
  • In this embodiment, the environmental parameters and the white balance statistical parameters are used to determine the underwater scene confidence and the above-water scene confidence respectively, so that scene changes can be sensed, and the first white balance data and the second white balance data are then used accordingly for automatic white balance adjustment of the image.
  • Using the image processing method of this embodiment to perform white balance processing on the image captured in the underwater scene can achieve better color reproduction, the image will not appear bluish or greenish, and the effect of the traditional white balance algorithm can be more effectively improved , which is more conducive to presenting the real scene of underwater photography.
  • In some scenarios, the ISP unit of the photographing device has various image processing functions, and the underwater scene confidence/above-water scene confidence of this embodiment can also be used for image processing other than white balance.
  • the ISP unit needs to perform uniform correction (color shading), color correction or contrast correction on the image, etc.
  • The method of this embodiment may further select an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed according to the underwater scene confidence/above-water scene confidence.
  • the image correction matrix includes any of the following: a uniformity correction matrix, a color correction matrix, or a contrast correction matrix.
  • In some examples, the selection may be made using only the underwater scene confidence/above-water scene confidence. Specifically, selecting an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed according to the underwater scene confidence/above-water scene confidence includes:
  • if the underwater scene confidence is greater than a first threshold, selecting the image correction matrix corresponding to the underwater scene to correct the image to be processed; and if the above-water scene confidence is greater than a second threshold, selecting the image correction matrix corresponding to the above-water scene to correct the image to be processed.
  • The first threshold and the second threshold may be configured as required, which is not limited in this embodiment.
  • In other examples, the selection can also be made in combination with the correlated color temperature CCT. Specifically, selecting an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed according to the underwater scene confidence/above-water scene confidence includes:
  • if the color temperature is greater than a set color temperature threshold and the underwater scene confidence is greater than a set confidence threshold, selecting the image correction matrix corresponding to the underwater scene; otherwise, selecting the image correction matrix corresponding to the above-water scene. A sketch of this selection is given below.
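A small sketch of this threshold-based selection; the numeric thresholds and the dictionary layout are placeholders for illustration only.

```python
# Choose underwater or above-water correction matrices from CCT and confidence.
def select_correction_matrices(cct_kelvin, underwater_conf, matrices,
                               cct_threshold=7500.0, conf_threshold=0.6):
    """matrices: {'underwater': {...}, 'abovewater': {...}} holding the shading table,
    color correction matrix, and contrast correction matrix for each scene."""
    if cct_kelvin > cct_threshold and underwater_conf > conf_threshold:
        return matrices['underwater']
    return matrices['abovewater']
```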
  • the color temperature in this embodiment may be determined by using the fused white balance data.
  • Color temperature is a measure describing the color components contained in light.
  • the fused white balance data represents the white point data in the image, that is, light source data. Based on the white point, the color temperature can be calculated.
  • As an example, the pair of values Rgain and Bgain can determine the color temperature: if the point (Rgain, Bgain) falls on the preset color temperature line in the white balance statistical area, the corresponding point on the color temperature line indicates the color temperature; if it does not fall on the preset color temperature line, a line is drawn starting from the point (Rgain, Bgain) and perpendicular to the preset color temperature line, and the intersection of this line with the preset color temperature line gives the color temperature. A sketch of this projection is given below.
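A sketch of the perpendicular-projection idea, assuming the preset color temperature line is available as a polyline of calibrated (Rgain, Bgain, CCT) points; the calibration data itself is not given in the patent.

```python
# Estimate CCT by projecting the fused (Rgain, Bgain) point onto the color temperature line.
import numpy as np

def cct_from_gains(r_gain, b_gain, calib):
    """calib: ordered list of (Rgain, Bgain, CCT) points defining the color temperature line."""
    p = np.array([r_gain, b_gain], dtype=float)
    best_cct, best_dist = None, np.inf
    for (r0, b0, c0), (r1, b1, c1) in zip(calib[:-1], calib[1:]):
        a, b = np.array([r0, b0], dtype=float), np.array([r1, b1], dtype=float)
        seg = b - a
        t = np.clip(np.dot(p - a, seg) / np.dot(seg, seg), 0.0, 1.0)  # foot of the perpendicular
        dist = np.linalg.norm(p - (a + t * seg))
        if dist < best_dist:
            best_dist, best_cct = dist, c0 + t * (c1 - c0)            # interpolate CCT along the line
    return best_cct
```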
  • FIG. 2B it is another schematic diagram of image processing shown in this embodiment, which may include the following processing flow:
  • the three-channel gain values obtained by automatic white balance processing are converted into a correlated color temperature value CCT, and an ambience adjustment is performed according to the CCT to obtain three-channel gain values with the desired ambience;
  • if the CCT is greater than the CCT threshold and the underwater scene confidence is greater than the underwater confidence threshold, the color shading table, color correction matrix, and contrast correction matrix corresponding to the underwater scene are selected; otherwise, the color shading table, color correction matrix, and contrast correction matrix corresponding to the above-water scene are used;
  • the depth information map of the image is obtained by calculating the parameters collected by the distance sensor, or calculating the raw data of the image through deep learning or dark channel;
  • the final shading table is fused through Luma shading, color shading, depth information map, and the previously calculated correlated color temperature CCT and underwater scene confidence;
  • the original image data enters the shading fusion and correction module for correction, and a shading-corrected image is obtained;
  • the shading-corrected image enters the white balance three-channel gain correction module for correction, and a white-balance-corrected image is obtained;
  • the white-balance-corrected image enters the color correction module for correction, and a color-corrected image is obtained; in practical applications, four-channel correction can also be used, and this embodiment takes three channels as an example;
  • the color-corrected image enters the contrast adjustment module for adjustment, and a contrast-adjusted image is obtained.
  • the foregoing method embodiments may be implemented by software, and may also be implemented by hardware or a combination of software and hardware.
  • Taking a software implementation as an example, an apparatus in a logical sense is formed by the processor of the image processing device where it is located reading the corresponding computer program instructions from the non-volatile memory into the memory and running them.
  • As shown in FIG. 3, which is a hardware structure diagram of an image processing apparatus 300 for implementing the image processing method of this embodiment, in addition to the processor 301, the memory 302, and the non-volatile memory shown in FIG. 3, the image processing apparatus used for implementing the image processing method of the embodiment may also include other hardware according to the actual function of the image processing apparatus, which will not be repeated here.
  • the processor 301 implements the following steps when executing the computer program:
  • acquiring first white balance data obtained by statistics in a first white balance statistical area of an image to be processed, and second white balance data obtained by statistics in a second white balance statistical area of the image to be processed, wherein the first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene; and
  • performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
  • In some examples, when the processor 301 performs the step of performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data, the step includes: fusing the first white balance data and the second white balance data, and performing automatic white balance adjustment on the image to be processed using the fused white balance data.
  • In some examples, when the processor 301 performs the step of fusing the first white balance data and the second white balance data, the step includes: fusing the first white balance data and the second white balance data using the underwater scene confidence and/or the above-water scene confidence.
  • the underwater scene confidence is determined by one or more of the following parameters:
  • the environmental parameters of the shooting scene corresponding to the to-be-processed image or the first white balance statistical parameter obtained by statistics of the to-be-processed image in the first white balance statistical area.
  • In some examples, the above-water scene confidence is determined by one or more of the following parameters:
  • the environmental parameters of the shooting scene corresponding to the to-be-processed image or the second white balance statistical parameter obtained by statistics of the to-be-processed image in the second white balance statistical area.
  • the environmental parameters are determined in one or more of the following ways:
  • the distance measured by a distance sensor, the pressure value measured by a pressure sensor, or the depth measured by a depth gauge.
  • the distance sensor includes: a distance sensor using infrared light as a medium.
  • the distance sensor using infrared light as a medium includes: a 3D ToF module or an infrared ranging sensor.
  • the device is applied to photographing equipment, and the 3D ToF module is configured in the photographing equipment.
  • the first white balance statistical parameter includes: related parameters of the white point belonging to the underwater scene in the to-be-processed image.
  • In some examples, the second white balance statistical parameter includes: parameters related to the white points belonging to the above-water scene in the image to be processed.
  • the relevant parameters of the white point include: the ratio of the white point.
  • the first white balance statistical area includes: a grayscale gain interval of the white point
  • the second white balance statistical area includes: a grayscale gain interval of the white point.
  • the grayscale gain interval of the white point includes: a red channel grayscale gain interval and a blue channel grayscale gain interval of the white point.
  • the red channel grayscale gain interval of the white point in the first white balance statistical area is greater than the red channel grayscale gain interval of the white point in the second white balance statistical area.
  • the first white balance data is determined by using a white point belonging to an underwater scene in the image to be processed.
  • In some examples, the second white balance data is determined using the white points belonging to the above-water scene in the image to be processed.
  • In some examples, the processor 301 is further configured to: select an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed according to the underwater scene confidence/above-water scene confidence.
  • In some examples, when the processor 301 performs the step of selecting an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed according to the underwater scene confidence/above-water scene confidence, the step includes: if the underwater scene confidence is greater than a first threshold, selecting the image correction matrix corresponding to the underwater scene to correct the image to be processed; and if the above-water scene confidence is greater than a second threshold, selecting the image correction matrix corresponding to the above-water scene to correct the image to be processed.
  • In other examples, when the processor 301 performs the step of selecting an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed according to the underwater scene confidence/above-water scene confidence, the step includes: if the color temperature is greater than a set color temperature threshold and the underwater scene confidence is greater than a set confidence threshold, selecting the image correction matrix corresponding to the underwater scene; otherwise, selecting the image correction matrix corresponding to the above-water scene.
  • the image correction matrix includes any one of the following: a uniform correction matrix, a color correction matrix or a contrast correction matrix.
  • the color temperature is determined using the fused white balance data.
  • An embodiment of the present application further provides a photographing device 400, including: a housing (not shown); a lens assembly 402 arranged inside the housing; a sensor assembly 403 arranged inside the housing for sensing the light passing through the lens assembly 402 and generating electrical signals; and the image processing apparatus 300 according to any one of the embodiments.
  • An embodiment of the present application further provides a movable platform 500, including: a body 501; a power system 502 installed in the body 501 and used to provide power for the movable platform; and the image processing apparatus 300 according to any one of the embodiments.
  • the movable platform 500 is a vehicle, a drone or a mobile robot.
  • the embodiments of this specification further provide a computer-readable storage medium, where several computer instructions are stored on the readable storage medium, and when the computer instructions are executed, the steps of the image processing method in any one of the embodiments are implemented.
  • Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media having program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

An image processing method and apparatus, a photographing device, a movable platform, and a computer-readable storage medium. The method obtains two sets of white balance data by performing statistics on an image to be processed, wherein first white balance data is obtained through a white balance statistical area corresponding to an underwater scene, and second white balance data is obtained through a white balance statistical area corresponding to an above-water scene. Since corresponding white balance data are obtained for the above-water scene and the underwater scene respectively, automatic white balance adjustment is performed on the image to be processed on the basis of the two sets of white balance data, without requiring a user to manually select a white balance processing mode; moreover, when continuous shooting involves switching between an above-water scene and an underwater scene, the method adjusts the white balance of the image by means of the two sets of white balance data, and a smooth processing result can be obtained.

Description

Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium
Technical Field
The present application relates to the technical field of image processing, and in particular to an image processing method and apparatus, a photographing device, a movable platform, and a computer-readable storage medium.
Background Art
In special scenarios such as underwater environments, a camera's image processing may face many problems; for example, automatic white balance processing is challenging and may fail to restore colors accurately. To restore the authenticity of the scene, some underwater photography relies on filters to balance the R, G, and B channels of the image, but this approach obviously imposes many restrictions on underwater photography.
Summary of the Invention
In view of this, the present application provides an image processing method and apparatus, a photographing device, a movable platform, and a computer-readable storage medium, so as to solve the problems in the related art of poor white balance performance and numerous restrictions.
In a first aspect, an image processing method is provided, including:
acquiring first white balance data obtained by statistics in a first white balance statistical area of an image to be processed, and second white balance data obtained by statistics in a second white balance statistical area of the image to be processed, wherein the first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene; and
performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
In a second aspect, an image processing apparatus is provided, the apparatus including a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the processor implements the following steps when executing the computer program:
acquiring first white balance data obtained by statistics in a first white balance statistical area of an image to be processed, and second white balance data obtained by statistics in a second white balance statistical area of the image to be processed, wherein the first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene; and
performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
In a third aspect, a shooting device is provided, including:
a housing;
a lens assembly arranged inside the housing;
a sensor assembly arranged inside the housing and configured to sense light passing through the lens assembly and generate electrical signals; and
the image processing apparatus according to the second aspect.
In a fourth aspect, a movable platform is provided, including:
a body;
a power system installed in the body and configured to provide power for the movable platform; and
the image processing apparatus according to the second aspect.
In a fifth aspect, a computer-readable storage medium is provided, on which several computer instructions are stored, wherein the steps of the method of the first aspect are implemented when the computer instructions are executed.
With the solution provided by the present application, since corresponding white balance data are obtained for the above-water scene and the underwater scene respectively, automatic white balance adjustment can be performed on the image to be processed based on the two sets of white balance data without requiring the user to manually select a white balance processing mode; moreover, when continuous shooting involves switching between an above-water scene and an underwater scene, the solution of this embodiment adjusts the white balance of the image by means of the two sets of white balance data, and a smooth processing result can be obtained.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1A is a schematic diagram of an image processing method according to an embodiment of the present application.
FIG. 1B is a schematic diagram of a white balance statistical area according to an embodiment of the present application.
FIG. 1C is a schematic diagram of a first white balance statistical area and a second white balance statistical area according to an embodiment of the present application.
FIG. 2A is a schematic diagram of an image processing method according to another embodiment of the present application.
FIG. 2B is a schematic diagram of an image processing method according to another embodiment of the present application.
FIG. 3 is a schematic structural diagram of a device for implementing the image processing method of this embodiment of the present application.
FIG. 4 is a block diagram of a movable platform according to an embodiment of the present application.
FIG. 5 is a block diagram of a camera according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them.
If the light illuminating an object changes, the color temperature of the light also changes, and the colors it reflects change accordingly. The human visual system has the property of color constancy, that is, the human eye can adapt to different illumination and restore scene colors under different illumination to the scene colors under white light. For example, when the sun rises in the morning, the human eye perceives a white object as white; under dim light at night, the human eye still perceives a white object as white.
However, image acquisition devices (such as cameras) do not have the adaptability of the human eye; therefore, under different lighting environments, an image acquisition device exhibits color reproduction distortion, that is, the image colors are reddish or bluish. For example, in a lighting environment with a low color temperature, the image output by the camera is reddish; in a lighting environment with a high color temperature, the image output by the camera is bluish.
To solve this problem, white balance processing needs to be performed on the image collected by the image acquisition device. So-called white balance is the device's restoration of white, that is, the images output by the device correctly reproduce the white of white objects under different lighting environments. The white balance process detects the illumination of the shooting scene and, on the basis of the restored white, restores the colors of other objects, thereby eliminating the influence of the illumination and restoring the image to the colors seen under white light.
At present, image acquisition devices all have automatic white balance to restore the true colors of photographed objects. Automatic white balance enables the device to perform white balance correction automatically within a certain color temperature range; at present, automatic white balance for ordinary above-water scenes can restore the colors of the real scene fairly well. However, for the special scenario of underwater shooting, automatic white balance processing faces certain challenges.
In an underwater scene, photons are absorbed by the electrons of water molecules, causing electronic transitions, and an excited transition follows Einstein's photon theory: E = hv. This theory indicates that a photon with a lower frequency has lower energy, where E is the photon energy, h is the Planck constant, and v is the frequency of the light.
In visible light, the spectrum near red has longer wavelengths and lower frequencies, and therefore lower energy. In water, the lower-energy part of the spectrum is easily absorbed by hydrogen ions and oxygen ions, so there is relatively little red light in water. In addition, the human eye's ability to perceive violet light is weak, so underwater scenes appear bluish and greenish to the human eye. Water quality differs from place to place, and the absorption of the spectrum differs accordingly, which also leads to different degrees of underwater color cast.
In underwater scenarios such as water sports, underwater exploration, and swimming photography, some shooting devices cannot automatically achieve good color reproduction, and users have to spend a great deal of time and effort on color adjustment afterwards.
In addition, to restore the authenticity of the scene, some users mount filters in front of the lens of the shooting device to balance the color channels of the image so that the captured images have better color reproduction, but this approach obviously imposes many restrictions on underwater photography.
For underwater white balance processing, one solution is to provide a white balance processing mode for underwater scenes in the shooting device, but this requires the user to perform a switching operation manually so that the camera is in the underwater white balance processing mode. For example, when shooting an above-water scene, the user sets the automatic white balance for above-water scenes; when going underwater, the user switches the shooting device to the underwater white balance processing mode and activates the device's underwater automatic white balance function. Moreover, in continuous shooting scenarios, switching between underwater and above-water scenes may occur, for example when the shooting device comes out of the water to the surface, or goes from above the water to underwater; this poses more challenges for white balance processing. Since the camera switches between different white balance processing modes according to the user's operation, the frames before and after the switch use white balance processing methods for different scenes, which makes the images before and after the switch look abrupt, and the colors of the captured footage are not continuous.
On this basis, the present application provides an image processing solution that obtains two sets of white balance data by performing statistics on an image to be processed, wherein the first white balance data is obtained through the white balance statistical area corresponding to the underwater scene, and the second white balance data is obtained through the white balance statistical area corresponding to the above-water scene. Since corresponding white balance data are obtained for the above-water scene and the underwater scene respectively, automatic white balance adjustment can be performed on the image to be processed based on the two sets of white balance data without requiring the user to manually select a white balance processing mode; moreover, when continuous shooting involves switching between an above-water scene and an underwater scene, the solution of this embodiment adjusts the white balance of the image by means of the two sets of white balance data, and a smooth processing result can be obtained.
The solution of this embodiment can be applied to photographing devices such as cameras or video cameras, and can also be applied to electronic devices equipped with cameras; the electronic devices here may include devices such as movable platforms or smartphones.
The camera has a built-in ISP (Image Signal Processing) unit, which is mainly used to process the output signal of the front-end image sensor. The ISP completes the effect processing of the digital image through a series of digital image processing algorithms, mainly including 3A (auto exposure, auto focus, auto white balance), dead pixel correction, denoising, strong light suppression, backlight compensation, color enhancement, lens shading correction, and the like. The solution of this embodiment can be applied to the ISP unit in the camera to realize automatic white balance processing of images. The image to be processed in this embodiment may be the raw image collected by the image sensor built into the photographing device, or may be an image generated by the ISP unit during image processing, such as a YUV or RGB image.
In other examples, the solution of this embodiment may also be applied to image processing software, which can run on any electronic device capable of processing image data, such as a tablet computer, a smartphone, a personal digital assistant (PDA), a laptop computer, a desktop computer, or a media content player; the image processing software may apply the image processing method provided in this embodiment to perform white balance processing on a specified image.
The image processing solution is described in detail below. Referring to FIG. 1A, FIG. 1A is a flowchart of an image processing method provided by an embodiment of the present application, and the method includes the following steps:
In step 102, first white balance data obtained by statistics in a first white balance statistical area of an image to be processed and second white balance data obtained by statistics in a second white balance statistical area of the image to be processed are acquired.
The first white balance statistical area is the white balance statistical area corresponding to an underwater scene, and the second white balance statistical area is the white balance statistical area corresponding to an above-water scene.
In step 104, automatic white balance adjustment is performed on the image to be processed according to the first white balance data and the second white balance data.
The white point/gray point in this embodiment (hereinafter, the white point is used as an example) refers to a pixel whose R channel color component, G channel color component, and B channel color component are equal after the gain correction processing mentioned in this application. For the specific gain correction processing, please refer to the subsequent description.
Next, the white balance processing of an image is described. For computational efficiency, this embodiment divides the image into blocks, each block containing multiple pixels. For the pixels of each block, the luminance values of the R channel are accumulated and averaged, the luminance values of the G channel are accumulated and averaged, and the luminance values of the B channel are accumulated and averaged. Then the average value Gavg of the G channel is divided by the average value Ravg of the R channel to obtain the grayscale gain value Rgain of the R channel (that is, Rgain = Gavg/Ravg), and Gavg is divided by the average value Bavg of the B channel to obtain the grayscale gain value Bgain of the B channel (that is, Bgain = Gavg/Bavg); the grayscale gain value of the G channel is 1. Alternatively, the following formulas may be used: Rgain = Ravg/Gavg, Bgain = Bavg/Gavg.
Having divided the image into blocks and obtained the Rgain and Bgain of each image block, it is next necessary to determine, based on the Rgain and Bgain of the image block, whether the block is a white point; specifically, this is determined using the configured white balance statistical area. As shown in FIG. 1B, which is a schematic diagram of a white balance statistical area in this embodiment, the abscissa represents Rgain, the ordinate represents Bgain, and the gray part represents the statistical area, that is, the area belonging to white points. Specifically, whether the Rgain and Bgain of each block fall within the statistical area is checked: if they do, the block is a white point; if they do not, the block is not a white point.
At this point, all image blocks belonging to white points are determined. For these blocks, an Rgain value is obtained by taking a weighted average of the Rgain values of the white-point blocks, and a Bgain value is obtained by taking a weighted average of their Bgain values; blocks that do not fall within the statistical area are not white points and are not included in the computation. This process is the gain correction processing mentioned above.
Finally, the weighted-average Rgain and Bgain values are applied to the entire image: the three-channel components of each pixel/block are multiplied by the corresponding channel gain values to obtain the corrected three-channel component values.
If the gain formulas above are G/R = Rgain and G/B = Bgain, then during correction the corrected R channel component equals R multiplied by the weighted-average Rgain, and the corrected B channel component equals B multiplied by the weighted-average Bgain.
If the gain formulas above are R/G = Rgain and B/G = Bgain, then during correction the corrected R channel component equals R divided by the weighted-average Rgain, and the corrected B channel component equals B divided by the weighted-average Bgain.
In practical applications, different white balance algorithms differ slightly in their computation. The above embodiment is described using the gray world method as an example; in practice, other algorithms may also be used, such as the maximum brightness method, the gamut boundary method, the light source prediction method, the perfect reflection method, the dynamic threshold method, or the fuzzy logic method, which is not limited in this embodiment.
The above embodiment is described by dividing the image into blocks. In practical applications, there may be other implementations; for example, the image may not be divided into blocks, that is, whether each pixel in the image is a white point is determined; alternatively, the image is still divided into blocks, but white balance processing is performed separately within each block.
In the above embodiment, the white balance statistical area is described using the pixel gains Rgain and Bgain as an example. In practical applications, other forms may be used: for example, the two gains Rgain and Ggain may be used to determine the white balance statistical area, or the two gains Bgain and Ggain may be used, or four-channel gain values, such as the four gains R, B, G_R, and G_B, may be used to determine the white balance statistical area, or white balance based on other color spaces may be used, which is not limited in this embodiment.
由前述分析可知,白平衡处理过程中,需要准确查找出图像中的白点,因此白平衡统计区域的设置是影响白平衡处理效果的关键因素之一。基于此,针对前述水下场景和水上场景的问题,本实施例的图像处理方案可以预先配置有第一白平衡统计区域和第二白平衡统计区域,其中,第一白平衡统计区域与水下场景对应,第二白平衡统计区域与水上场景对应。
在一些例子中,第一白平衡统计区域和第二白平衡统计区域均可以是预设的经验值。
在另一些例子中,第一白平衡统计区域和第二白平衡统计区域可以是由用户设置的,例如拍摄设备上提供有针对第一白平衡统计区域和第二白平衡统计区域的设置功能,用户通过拍摄设备提供的设置功能输入第一白平衡统计区域和第二白平衡统计区域的信息,拍摄设备获取用户输入的信息,相应地设置第一白平衡统计区域和第二白平衡统计区域。
图1C所示,是本实施例示出的一种第一白平衡统计区域和第二白平衡统计区域的示意图,图1C所示坐标系中,左侧部分与图1B相同,示出了第二白平衡统计区域;右侧部分中矩形框内表示第一白平衡统计区域。由该图可见,第一白平衡统计区域和第二白平衡统计区域具有明显差异,这是由于水下场景与水上场景的特点决定的。例如,在某些水域,水中的红色光谱被吸收的比较严重,拍摄设备采集的红色光谱较少,体现到图像即图像像素的红色通道值较低,因此相应的白点的红色增益就会比较大,如图1C中示出第一白平衡统计区域的Rgain相对较大。正如前述所言,不同水域的水质不一样,光谱被吸收的情况也会不一样,可以根据实际水域特点设置对应的第一白平衡统计区域,本实施例对此不作限定。
In some examples, the first white balance statistics area includes a gray gain interval of white points, and the second white balance statistics area also includes a gray gain interval of white points.
In some examples, the gray gain interval of white points includes a red-channel gray gain interval and a blue-channel gray gain interval of white points, i.e. the statistics area is determined by Rgain and Bgain. In other examples the interval may instead be determined by the two gains Rgain and Ggain, or by the two gains Bgain and Ggain, or by four-channel gain values such as the four gains R, B, Gr and Gb, or by parameters of another color space, which this embodiment does not limit.
In some examples, because in some waters the red spectrum is strongly absorbed, the photographing device captures little red light and the red-channel values of image pixels are low, so the red gain of the corresponding white points is relatively large; hence the red-channel gray gain interval of white points in the first white balance statistics area is greater than that in the second white balance statistics area.
As can be seen from the above, the solution of this embodiment sets corresponding white balance statistics areas for the underwater scene and the above-water scene respectively, so white balance statistics corresponding to each scene can be obtained. In some examples, the first white balance data is determined using the white points of the image to be processed that belong to the underwater scene, and the second white balance data is determined using the white points that belong to the above-water scene.
Following the white balance flow described above, the first white balance data can be determined as follows: for the pixels of the image, the computed pixel gray gains are compared with the white-point gray gains represented by the first white balance statistics area, and a pixel whose gray gain falls within the first area can be regarded as a white point. White points can thus be counted from the image to be processed; these counted white points are the white points belonging to the underwater scene, and the final first white balance data can be computed from them, for example by using the gray gains of the counted white points to compute an underwater-scene-based gray gain for the whole image as the first white balance data.
Similarly, the second white balance data is determined by comparing, for the pixels of the image to be processed, the computed pixel gray gains with the white-point gray gains represented by the second white balance statistics area; a pixel whose gray gain falls within the second area can be regarded as a white point. The white points counted in this way are the white points belonging to the above-water scene, and the final second white balance data can be computed from them, for example by using their gray gains to compute an above-water-scene-based gray gain for the whole image as the second white balance data.
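As a sketch of this two-area statistics step, the same routine can simply be run once per area, yielding both sets of white balance data together with the white-point ratios used later as statistics parameters. The rectangular area format and the use of the mean gain of the white points are assumptions for illustration:

```python
def dual_region_statistics(gains, underwater_region, surface_region):
    """Return (first_wb, second_wb, first_ratio, second_ratio) for one image,
    where each region is (r_lo, r_hi, b_lo, b_hi) and gains is (N, 2)."""
    def stats(region):
        r_lo, r_hi, b_lo, b_hi = region
        mask = ((gains[:, 0] >= r_lo) & (gains[:, 0] <= r_hi) &
                (gains[:, 1] >= b_lo) & (gains[:, 1] <= b_hi))
        ratio = mask.mean()                                  # white points / all blocks
        wb = None if not mask.any() else gains[mask].mean(axis=0)
        return wb, ratio

    first_wb, first_ratio = stats(underwater_region)         # underwater scene statistics
    second_wb, second_ratio = stats(surface_region)          # above-water scene statistics
    return first_wb, second_wb, first_ratio, second_ratio
```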
In this embodiment, the first white balance data or the second white balance data may be the three-channel gain values Rgain, Bgain and Ggain of the image, four-channel gain values, or white balance data for correcting pixels based on another color space, which this embodiment does not restrict.
As can be seen from the above, the solution of this embodiment obtains two sets of white balance data for the image to be processed: the first white balance data is obtained from the statistics area corresponding to the underwater scene and the second from the statistics area corresponding to the above-water scene. Because data corresponding to both the above-water and the underwater scene is obtained, automatic white balance adjustment of the image to be processed can be performed based on these two sets of data without the user manually selecting a white balance mode; and when continuous shooting switches between above-water and underwater scenes, adjusting the image with the two sets of white balance data yields a smooth processing result.
In practice, the automatic white balance adjustment based on the first and second white balance data can be implemented in several ways. In some examples, the first and second white balance data are fused, and the fused white balance data is used to adjust the image to be processed. For instance, fusion ratios can be determined for the first and second white balance data respectively and the two fused accordingly; the fusion ratios corresponding to the first and second data can be configured flexibly as needed.
In other examples, only the first white balance data may be used to adjust the image to be processed, for example when the current shooting scene is very likely underwater; or only the second white balance data may be used, for example when the current shooting scene is very likely above water.
In some examples, the fusion can be performed by obtaining an underwater scene confidence and an above-water scene confidence, for example fusing the first and second white balance data using the underwater scene confidence and/or the above-water scene confidence.
As an example, the underwater scene confidence represents the confidence that the current shooting scene is underwater, and the above-water scene confidence represents the confidence that it is above water. On this basis, the higher the underwater scene confidence, the higher the fusion weight of the first white balance data and the lower the fusion weight of the second white balance data; the higher the above-water scene confidence, the lower the fusion weight of the first white balance data and the higher the fusion weight of the second white balance data.
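One simple way to realize this weighting relationship, offered as an assumption rather than a prescribed formula, is to normalize the two confidences into fusion weights:

```python
import numpy as np

def fuse_wb(first_wb, second_wb, underwater_conf, surface_conf, eps=1e-6):
    """Fuse two (Rgain, Bgain) pairs: a higher underwater confidence gives the
    first (underwater) data a larger weight, and vice versa."""
    w1 = underwater_conf / (underwater_conf + surface_conf + eps)
    w2 = 1.0 - w1
    return w1 * np.asarray(first_wb) + w2 * np.asarray(second_wb)
```

For instance, with an underwater confidence of 0.8 and an above-water confidence of 0.2, the fused gains consist of 0.8 of the first white balance data plus 0.2 of the second.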
The underwater and above-water scene confidences in this embodiment can be obtained in several ways. As an example, environment parameters of the shooting scene can be acquired and the scene confidence determined from them, since above-water and underwater scenes differ considerably in their environment: above water there is air pressure while underwater there is water pressure, so the detected air or water pressure can identify the shooting scene; or, in an underwater scene, the water absorbs the photons emitted by a distance sensor, so the distance values collected by the sensor are small, and so on.
In this embodiment, the environment parameter of the shooting scene represents the probability of an underwater or above-water scene, and can be determined based on the differences between the two. As an example, the environment parameter can be determined from one or more of the distance measured by a distance sensor, the pressure value measured by a pressure sensor, or the depth measured by a depth gauge.
The distance sensor may be one that uses infrared light as its medium, such as a 3D ToF (Time of Flight) module or an infrared ranging sensor. Such sensors emit infrared light, which is reflected back to the sensor after hitting an object, and the object distance is computed after signal processing. In an underwater scene, the photons emitted by the distance sensor are absorbed by the water, so the distance value output by the sensor is small; the smaller the value, the higher the probability that the current shooting is underwater. The environment parameter determined from this distance is fairly reliable and can represent the underwater probability quite accurately.
A pressure sensor is a device that senses a pressure signal and converts it, according to a certain rule, into a usable electrical output signal. In this embodiment a pressure sensor can measure the pressure in the water, and the measured pressure value is used to determine the environment parameter; the larger the pressure value, the larger the environment parameter and the higher the probability that the current shooting is underwater.
A depth gauge is an instrument that measures underwater depth using sound waves, pressure or other principles that provide information on the thickness of the water column. In this embodiment a depth gauge can measure the underwater depth and the measured depth is used to determine the environment parameter; the larger the depth value, the larger the environment parameter and the higher the probability that the current shooting is underwater.
In some examples, the photographing device can be equipped with the above-mentioned distance sensor, pressure sensor or depth gauge, and the processor of the photographing device can communicate with these components to obtain their measurements. In other examples, some photographing devices with a depth-information capture function have a built-in 3D ToF module, in which case the environment parameter can be obtained through the built-in 3D ToF module.
On the other hand, the scene confidence in this embodiment can also be determined by analyzing the image. For example, in an above-water scene, statistics collected with the matching second white balance statistics area yield matching statistical data, whereas statistics collected with the mismatched first white balance statistics area may yield very little data, or possibly none at all. For instance, if the second white balance statistics area for above-water scenes shown in FIG. 1B is applied to an underwater image, it may be impossible to count any white points; on this basis, this embodiment can also determine the scene confidence by analyzing the image.
In some examples, the underwater scene confidence is determined by one or more of the following parameters: the environment parameter of the shooting scene corresponding to the image to be processed, or the first white balance statistics parameter obtained for the image to be processed in the first white balance statistics area. The greater the underwater probability represented by the environment parameter, or the higher the first white balance statistics parameter, the higher the underwater scene confidence.
As an example, taking the distance collected by the camera's 3D ToF module as the environment parameter, the smaller the collected distance, the greater the represented probability of an underwater scene and the higher the underwater scene confidence. In some examples a threshold can be set as needed: if the distance collected by the 3D ToF module is below the threshold, the represented underwater probability can be considered high; if it is above the threshold, the represented underwater probability can be considered low.
The higher the white point ratio in the first white balance statistics parameter, the higher the underwater scene confidence.
In practice, the underwater scene confidence can be determined from either the environment parameter or the first white balance statistics parameter alone, or from both combined. The specific manner can be configured flexibly as needed and is not limited by this embodiment. For example, when combining the environment parameter and the first white balance statistics parameter, a weight can be set for each, and the underwater scene confidence obtained as their weighted average.
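By way of illustration, the environment parameter and the white-point ratio could be combined as in the sketch below; the distance threshold, the hard 0/1 mapping and the equal weights are assumptions, and the above-water confidence is obtained analogously from the complementary probability and the second statistics parameter:

```python
def environment_parameter(tof_distance, threshold=0.2):
    """Map a ToF distance reading (in metres, assumed) to a rough probability of
    being underwater: the smaller the reading, the more likely the camera is submerged."""
    return 1.0 if tof_distance < threshold else 0.0   # could also be a smooth ramp

def underwater_confidence(env_param, first_white_ratio, w_env=0.5, w_stat=0.5):
    """Weighted average of the environment parameter and the first white balance
    statistics parameter (white-point ratio in the underwater statistics area)."""
    return w_env * env_param + w_stat * first_white_ratio
```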
In other examples, the above-water scene confidence is determined by one or more of the following parameters: the environment parameter of the shooting scene corresponding to the image to be processed, or the second white balance statistics parameter obtained for the image to be processed in the second white balance statistics area. The lower the underwater probability represented by the environment parameter, or the higher the second white balance statistics parameter, the higher the above-water scene confidence.
Still taking the camera's 3D ToF module as an example, the larger the collected distance, the lower the represented underwater probability and the higher the above-water scene confidence. In some examples a threshold can be set as needed: if the distance collected by the 3D ToF module exceeds the threshold, the represented above-water probability can be considered high; if it is below the threshold, the represented above-water probability can be considered low.
The higher the white point ratio in the second white balance statistics parameter, the higher the above-water scene confidence.
In practice, the above-water scene confidence can be determined from either the environment parameter or the second white balance statistics parameter alone, or from both combined. The specific manner can be configured flexibly as needed and is not limited by this embodiment. For example, when combining the environment parameter and the second white balance statistics parameter, a weight can be set for each, and the above-water scene confidence obtained as their weighted average.
In some examples, the first and second white balance statistics parameters can be obtained in the course of collecting the white balance statistics described above: since collecting the white balance data requires using the statistics areas to count white points in the image and obtaining the white balance data from those white points, the counted white points can also be used to analyze the above-water/underwater scene confidence from the image's perspective. As an example, the first white balance statistics parameter may include parameters related to the white points of the image to be processed that belong to the underwater scene, and the second white balance statistics parameter may include parameters related to the white points that belong to the above-water scene. In some examples, the white-point-related parameter includes a white point ratio, for example the ratio of white points to non-white points, or the proportion of white points among all pixels.
As the above embodiments show, performing automatic white balance adjustment with the first and second white balance data yields good processing results. For example, a camera may face a scene switch when coming out of the water or going below the surface, yet it cannot perceive whether it is currently above or under water. If, after going from above water to underwater, it keeps using the above-water white balance algorithm, the result will be poor; likewise, going from underwater to the surface also requires perceiving the scene change to obtain a good result. For the underwater and above-water scenes, this embodiment determines the underwater and above-water scene confidences from the environment parameter and the white balance statistics parameters respectively, so it can perceive scene changes and adjust the image automatically with the first and second white balance data. For example, when the camera is in an underwater scene, the underwater confidence associated with the first white balance data is high and the automatic white balance adjustment of the image matches the underwater scene; when the camera is above water, the above-water confidence associated with the second white balance data is high and the adjustment matches the above-water scene. When the camera switches scenes, for example from underwater to above water, the image after the switch, now in the above-water scene, is processed with both sets of white balance data and therefore also takes the preceding underwater scene into account, so the images before and after the switch transition smoothly and the processing result is good.
The image processing solution is illustrated next with another embodiment. FIG. 2A is a schematic flowchart of another image processing method of this embodiment. The method is described as applied to a camera with a built-in image sensor and distance sensor; the distance sensor in this embodiment is the camera's built-in 3D ToF module, used to obtain the environment parameter of the shooting scene.
In step 211, the image sensor captures an image.
In step 212, the distance sensor collects the environment parameter. In this embodiment the distance sensor may be a 3D ToF module built into or attached to the camera, and the distance collected by the module is used. In an above-water scene, the 3D ToF module can measure the distance to objects; in an underwater scene, the water absorbs the photons emitted by the ToF module, so its output value is very small. On this basis, the smaller the distance collected by the 3D ToF module, the higher the probability that the camera is currently in an underwater scene; as an example, a threshold can be set and the collected distance compared with it, so that the environment parameter characterizes the probability that the camera is currently underwater.
In step 213, the camera's ISP unit collects white balance statistics for the image; the statistics process includes step 2131 for the underwater statistics area and step 2132 for the above-water statistics area:
Statistics step 2131 for the underwater area may include: collecting the first white balance data and obtaining the first white balance statistics parameter (including the white point ratio);
Statistics step 2132 for the above-water area may include: collecting the second white balance data and obtaining the second white balance statistics parameter (including the white point ratio).
In step 214, the underwater scene confidence is determined; in this embodiment it can be determined from the environment parameter and the first white balance statistics parameter.
In step 215, the above-water scene confidence is determined; in this embodiment it can be determined from the environment parameter and the second white balance statistics parameter.
In step 216, the first white balance data and the second white balance data are fused according to the underwater scene confidence and the above-water scene confidence.
In step 217, the white balance result is output.
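Putting the earlier sketches together, one possible reading of the flow of FIG. 2A is the following; the helpers are those defined in the sketches above, and the above-water confidence is obtained by applying the same routine to the complementary probability:

```python
def process_frame(image, tof_distance, underwater_region, surface_region):
    """Illustrative flow of steps 211-217 (thresholds and weights are assumptions)."""
    gains = block_gains(image)                                   # step 211 + block statistics
    first_wb, second_wb, r1, r2 = dual_region_statistics(
        gains, underwater_region, surface_region)                # steps 2131 / 2132
    env = environment_parameter(tof_distance)                    # step 212
    uw_conf = underwater_confidence(env, r1)                     # step 214
    aw_conf = underwater_confidence(1.0 - env, r2)               # step 215, above-water analogue
    if first_wb is None and second_wb is None:
        return image                                             # no white point in either area
    if first_wb is None:
        fused = second_wb
    elif second_wb is None:
        fused = first_wb
    else:
        fused = fuse_wb(first_wb, second_wb, uw_conf, aw_conf)   # step 216
    return apply_gains(image, fused[0], fused[1])                # step 217: output result
```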
As the above embodiment shows, performing automatic white balance adjustment with the first and second white balance data yields good processing results. For example, a camera may face a scene switch when coming out of the water or going below the surface, yet it cannot perceive whether it is currently above or under water; if, after going from above water to underwater, it keeps using the above-water white balance algorithm, the result will be poor, and likewise going from underwater to the surface also requires perceiving the scene change to obtain a good result. For the underwater and above-water scenes, this embodiment determines the underwater and above-water scene confidences from the environment parameter and the white balance statistics parameters respectively, so it can perceive scene changes and adjust the image automatically according to the first and second white balance data.
Applying the image processing method of this embodiment to white balance images shot in underwater scenes achieves good color restoration without a blue or green cast, improves the effect of conventional white balance algorithms more effectively, and better presents the true underwater scene.
The ISP unit of the photographing device has multiple image processing functions, and the underwater scene confidence/above-water scene confidence of this embodiment can also be used for image processing other than white balance. As an example, the ISP unit performs uniformity correction (color shading), color correction, contrast correction and the like on the image; these can also be handled differently for above-water and underwater scenes, and the processing matching the actual shooting scene can be selected.
On this basis, the method of this embodiment can further select, according to the underwater scene confidence/above-water scene confidence, the image correction matrix corresponding to the underwater scene or the image correction matrix corresponding to the above-water scene to correct the image to be processed. In some examples, the image correction matrix includes any of the following: a uniformity correction matrix, a color correction matrix, or a contrast correction matrix.
In some examples, the selection may use only the underwater scene confidence/above-water scene confidence. Specifically, selecting, according to the underwater scene confidence/above-water scene confidence, the image correction matrix corresponding to the underwater scene or to the above-water scene to correct the image to be processed includes:
if the underwater scene confidence is higher than a first threshold, selecting the image correction matrix corresponding to the underwater scene to correct the image to be processed; or,
if the above-water scene confidence is higher than a second threshold, selecting the image correction matrix corresponding to the above-water scene to correct the image to be processed.
The first threshold and the second threshold can be configured as needed and are not limited in this embodiment.
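A minimal sketch of this threshold-based selection, with the threshold values and the default branch given only as assumptions; the matrices themselves would be pre-calibrated tables and are stand-ins here:

```python
def pick_correction(underwater_conf, surface_conf,
                    underwater_matrix, surface_matrix,
                    t_underwater=0.6, t_surface=0.6):
    """Select the image correction matrix (shading / color / contrast) for the frame."""
    if underwater_conf > t_underwater:
        return underwater_matrix
    if surface_conf > t_surface:
        return surface_matrix
    return surface_matrix      # assumed default when neither confidence is decisive
```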
In other examples, the selection may also take the correlated color temperature CCT into account. Specifically, selecting, according to the underwater scene confidence/above-water scene confidence, the image correction matrix corresponding to the underwater scene or to the above-water scene to correct the image to be processed includes:
if the color temperature of the image to be processed is greater than a color temperature threshold and the above-water scene confidence is greater than the first threshold, selecting the image correction matrix corresponding to the underwater scene, otherwise selecting the image correction matrix corresponding to the above-water scene.
The color temperature in this embodiment can be determined from the fused white balance data. Color temperature is a unit of measure for the color components contained in light; in this embodiment the fused white balance data characterizes the white point data of the image, i.e. the illuminant data, from which the color temperature can be computed. Taking fused white balance data Rgain and Bgain as an example, the ratio of Rgain to Bgain determines the color temperature: if the point given by the ratio of Rgain to Bgain falls on the preset color temperature line in the white balance statistics area, that point on the line gives the color temperature; if it does not fall on the preset line, a line is drawn from the point given by the ratio of Rgain to Bgain perpendicular to the preset color temperature line, and the intersection of this line with the preset color temperature line gives the color temperature.
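As a rough sketch of this computation, and assuming the preset color temperature line is available as sampled (Rgain, Bgain, CCT) triples, the fused gain point can be mapped to the line by taking the nearest sample; the full method described above uses the perpendicular foot on the line instead:

```python
import numpy as np

def cct_from_gains(rgain, bgain, cct_line):
    """cct_line: array of (Rgain, Bgain, CCT) samples describing the preset color
    temperature line. Returns the CCT of the sample closest to the fused gain point."""
    pts = np.asarray(cct_line, dtype=np.float64)
    d = np.hypot(pts[:, 0] - rgain, pts[:, 1] - bgain)
    return pts[np.argmin(d), 2]
```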
The image processing flow of the ISP unit of the photographing device is explained again with reference to FIG. 2B, another schematic diagram of image processing in this embodiment, which may include the following processing flow:
For the image to be processed, the three-channel gain values obtained by automatic white balance processing are converted into a color temperature value CCT, and an ambiance adjustment is made according to the CCT to obtain ambiance-adjusted three-channel gain values;
A judgment is made based on the CCT and the underwater scene confidence: if the CCT is greater than the CCT threshold and the underwater scene confidence is greater than the underwater confidence threshold, the color shading table, color correction matrix and contrast correction matrix corresponding to the underwater scene are selected; otherwise those corresponding to the above-water scene are used;
A depth information map of the image is computed from the parameters collected by the distance sensor, or from the raw image data by means of deep learning or the dark channel method;
The final shading table is obtained by fusion based on the luma shading, the color shading, the depth information map, and the previously computed correlated color temperature CCT and underwater scene confidence;
The raw image data enters the shading fusion and correction module for correction, yielding a shading-corrected image;
The shading-corrected image enters the white balance three-channel gain correction module for correction, yielding a white balance gain-corrected image;
The gain-corrected image enters the color correction module for correction, yielding a color-corrected image; in practice four-channel correction is also possible, three channels being used as the example in this embodiment;
The color-corrected image enters the contrast adjustment module for adjustment, yielding a contrast-adjusted image;
The contrast-adjusted image is output to a preview screen for display, or stored.
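The shading fusion step is not spelled out in detail; one possible, purely illustrative reading is to blend pre-calibrated above-water and underwater shading tables using the underwater scene confidence, optionally modulated by the depth information map. The modulation rule below is an assumption:

```python
import numpy as np

def fuse_shading_tables(surface_table, underwater_table, underwater_conf, depth_map=None):
    """Blend two pre-calibrated H x W shading gain maps. When a depth map is
    available it modulates the blend weight locally; otherwise the global
    underwater confidence is used everywhere."""
    w = np.full_like(surface_table, underwater_conf, dtype=np.float64)
    if depth_map is not None:
        w = np.clip(w * (depth_map / (depth_map.max() + 1e-6)), 0.0, 1.0)
    return (1.0 - w) * surface_table + w * underwater_table
```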
The above method embodiments can be implemented by software, or by hardware, or by a combination of software and hardware. Taking software implementation as an example, as a logical apparatus it is formed by the processor of the image processing device in which it resides reading the corresponding computer program instructions from non-volatile storage into memory and running them. In hardware terms, FIG. 3 is a hardware structure diagram of an image processing apparatus 300 implementing the image processing method of this embodiment; besides the processor 301, memory 302 and non-volatile storage 303 shown in FIG. 3, the image processing device used to implement the method in the embodiments may also include other hardware according to its actual functions, which is not detailed here.
In this embodiment, when executing the computer program the processor 301 implements the following steps:
acquiring first white balance data obtained statistically for the image to be processed in a first white balance statistics area, and second white balance data obtained statistically for the image to be processed in a second white balance statistics area, wherein the first white balance statistics area is the white balance statistics area corresponding to the underwater scene, and the second white balance statistics area is the white balance statistics area corresponding to the above-water scene;
performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
The step, handled by the processor 301, of performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data includes:
fusing the first white balance data and the second white balance data;
performing automatic white balance adjustment on the image to be processed using the fused white balance data.
The step, handled by the processor 301, of fusing the first white balance data and the second white balance data includes:
fusing the first white balance data and the second white balance data using the underwater scene confidence and/or the above-water scene confidence.
The higher the underwater scene confidence, the higher the fusion weight of the first white balance data and the lower the fusion weight of the second white balance data.
The higher the above-water scene confidence, the lower the fusion weight of the first white balance data and the higher the fusion weight of the second white balance data.
The underwater scene confidence is determined by one or more of the following parameters:
the environment parameter of the shooting scene corresponding to the image to be processed, or the first white balance statistics parameter obtained statistically for the image to be processed in the first white balance statistics area.
The above-water scene confidence is determined by one or more of the following parameters:
the environment parameter of the shooting scene corresponding to the image to be processed, or the second white balance statistics parameter obtained statistically for the image to be processed in the second white balance statistics area.
The greater the probability of an underwater scene represented by the environment parameter, or the higher the first white balance statistics parameter, the higher the underwater scene confidence.
The lower the probability of an underwater scene represented by the environment parameter, or the higher the second white balance statistics parameter, the higher the above-water scene confidence.
The environment parameter is determined in one or more of the following ways:
a distance measured by a distance sensor, a pressure value measured by a pressure sensor, or a depth measured by a depth gauge.
The distance sensor includes a distance sensor using infrared light as its medium.
The distance sensor using infrared light as its medium includes a 3D ToF module or an infrared ranging sensor.
The apparatus is applied to a photographing device, and the 3D ToF module is arranged within the photographing device.
The first white balance statistics parameter includes: parameters related to white points of the image to be processed that belong to the underwater scene.
The second white balance statistics parameter includes: parameters related to white points of the image to be processed that belong to the above-water scene.
The white-point-related parameters include a white point ratio.
The first white balance statistics area includes a gray gain interval of white points;
the second white balance statistics area includes a gray gain interval of white points.
The gray gain interval of white points includes a red-channel gray gain interval and a blue-channel gray gain interval of white points.
The red-channel gray gain interval of white points in the first white balance statistics area is greater than the red-channel gray gain interval of white points in the second white balance statistics area.
The first white balance data is determined using the white points of the image to be processed that belong to the underwater scene.
The second white balance data is determined using the white points of the image to be processed that belong to the above-water scene.
The processor 301 is further configured to:
select, according to the underwater scene confidence/above-water scene confidence, the image correction matrix corresponding to the underwater scene or the image correction matrix corresponding to the above-water scene to correct the image to be processed.
The step, handled by the processor 301, of selecting, according to the underwater scene confidence/above-water scene confidence, the image correction matrix corresponding to the underwater scene or to the above-water scene to correct the image to be processed includes:
if the underwater scene confidence is higher than a first threshold, selecting the image correction matrix corresponding to the underwater scene to correct the image to be processed; or,
if the above-water scene confidence is higher than a second threshold, selecting the image correction matrix corresponding to the above-water scene to correct the image to be processed.
The step, handled by the processor 301, of selecting, according to the underwater scene confidence/above-water scene confidence, the image correction matrix corresponding to the underwater scene or to the above-water scene to correct the image to be processed includes:
if the color temperature of the image to be processed is greater than a color temperature threshold and the above-water scene confidence is greater than the first threshold, selecting the image correction matrix corresponding to the underwater scene, otherwise selecting the image correction matrix corresponding to the above-water scene.
The image correction matrix includes any of the following: a uniformity correction matrix, a color correction matrix, or a contrast correction matrix.
The color temperature is determined using the fused white balance data.
As shown in FIG. 4, an embodiment of the present application further provides a photographing device 400, including: a housing (not shown); a lens assembly 402; a sensor assembly 403, arranged inside the housing and configured to sense light passing through the lens assembly 402 and generate an electrical signal; and the image processing apparatus 300 of any of the embodiments.
As shown in FIG. 5, an embodiment of the present application further provides a movable platform 500, including: a body 501; a power system 502, mounted in the body 501 and configured to provide power for the movable platform; and the image processing apparatus 300 of any of the embodiments.
Optionally, the movable platform 500 is a vehicle, an unmanned aerial vehicle, or a movable robot.
An embodiment of this specification further provides a computer-readable storage medium on which several computer instructions are stored; when the computer instructions are executed, the steps of the image processing method of any of the embodiments are implemented.
The embodiments of this specification may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM, optical storage, etc.). Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Since the apparatus embodiments substantially correspond to the method embodiments, reference can be made to the description of the method embodiments for relevant parts. The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
It should be noted that relational terms such as first and second are used herein only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include" or any other variant are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device that comprises it.
The method and apparatus provided by the embodiments of the present invention have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (56)

  1. An image processing method, wherein the method comprises:
    acquiring first white balance data obtained statistically for an image to be processed in a first white balance statistics area, and second white balance data obtained statistically for the image to be processed in a second white balance statistics area, wherein the first white balance statistics area is a white balance statistics area corresponding to an underwater scene, and the second white balance statistics area is a white balance statistics area corresponding to an above-water scene;
    performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
  2. The method according to claim 1, wherein performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data comprises:
    fusing the first white balance data and the second white balance data;
    performing automatic white balance adjustment on the image to be processed using the fused white balance data.
  3. The method according to claim 1, wherein fusing the first white balance data and the second white balance data comprises:
    fusing the first white balance data and the second white balance data using an underwater scene confidence and/or an above-water scene confidence.
  4. The method according to claim 3, wherein the higher the underwater scene confidence, the higher the fusion weight of the first white balance data and the lower the fusion weight of the second white balance data.
  5. The method according to claim 3, wherein the higher the above-water scene confidence, the lower the fusion weight of the first white balance data and the higher the fusion weight of the second white balance data.
  6. The method according to claim 3, wherein the underwater scene confidence is determined by one or more of the following parameters:
    an environment parameter of the shooting scene corresponding to the image to be processed, or a first white balance statistics parameter obtained statistically for the image to be processed in the first white balance statistics area.
  7. The method according to claim 3, wherein the above-water scene confidence is determined by one or more of the following parameters:
    an environment parameter of the shooting scene corresponding to the image to be processed, or a second white balance statistics parameter obtained statistically for the image to be processed in the second white balance statistics area.
  8. The method according to claim 6, wherein the greater the probability of an underwater scene represented by the environment parameter, or the higher the first white balance statistics parameter, the higher the underwater scene confidence.
  9. The method according to claim 7, wherein the lower the probability of an underwater scene represented by the environment parameter, or the higher the second white balance statistics parameter, the higher the above-water scene confidence.
  10. The method according to claim 6 or 7, wherein the environment parameter is determined in one or more of the following ways:
    a distance measured by a distance sensor, a pressure value measured by a pressure sensor, or a depth measured by a depth gauge.
  11. The method according to claim 10, wherein the distance sensor comprises: a distance sensor using infrared light as its medium.
  12. The method according to claim 11, wherein the distance sensor using infrared light as its medium comprises: a 3D ToF module or an infrared ranging sensor.
  13. The method according to claim 12, wherein the method is applied to a photographing device, and the 3D ToF module is arranged in the photographing device.
  14. The method according to claim 6, wherein the first white balance statistics parameter comprises: parameters related to white points of the image to be processed that belong to the underwater scene.
  15. The method according to claim 6, wherein the second white balance statistics parameter comprises: parameters related to white points of the image to be processed that belong to the above-water scene.
  16. The method according to claim 14 or 15, wherein the white-point-related parameters comprise: a white point ratio.
  17. The method according to claim 11 or 12, wherein the first white balance statistics area comprises: a gray gain interval of white points;
    the second white balance statistics area comprises: a gray gain interval of white points.
  18. The method according to claim 17, wherein the gray gain interval of white points comprises: a red-channel gray gain interval and a blue-channel gray gain interval of white points.
  19. The method according to claim 18, wherein the red-channel gray gain interval of white points in the first white balance statistics area is greater than the red-channel gray gain interval of white points in the second white balance statistics area.
  20. The method according to claim 11, wherein the first white balance data is determined using white points of the image to be processed that belong to the underwater scene.
  21. The method according to claim 11, wherein the second white balance data is determined using white points of the image to be processed that belong to the above-water scene.
  22. The method according to claim 3, further comprising:
    selecting, according to the underwater scene confidence/the above-water scene confidence, an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed.
  23. The method according to claim 22, wherein selecting, according to the underwater scene confidence/the above-water scene confidence, the image correction matrix corresponding to the underwater scene or the image correction matrix corresponding to the above-water scene to correct the image to be processed comprises:
    if the underwater scene confidence is higher than a first threshold, selecting the image correction matrix corresponding to the underwater scene to correct the image to be processed; or,
    if the above-water scene confidence is higher than a second threshold, selecting the image correction matrix corresponding to the above-water scene to correct the image to be processed.
  24. The method according to claim 22, wherein selecting, according to the underwater scene confidence/the above-water scene confidence, the image correction matrix corresponding to the underwater scene or the image correction matrix corresponding to the above-water scene to correct the image to be processed comprises:
    if the color temperature of the image to be processed is greater than a color temperature threshold and the above-water scene confidence is greater than a first threshold, selecting the image correction matrix corresponding to the underwater scene, otherwise selecting the image correction matrix corresponding to the above-water scene.
  25. The method according to claim 21, wherein the image correction matrix comprises any of the following: a uniformity correction matrix, a color correction matrix, or a contrast correction matrix.
  26. The method according to claim 23, wherein the color temperature is determined using the fused white balance data.
  27. An image processing apparatus, wherein the apparatus comprises a processor, a memory, and a computer program stored on the memory and executable by the processor, and the processor implements the following steps when executing the computer program:
    acquiring first white balance data obtained statistically for an image to be processed in a first white balance statistics area, and second white balance data obtained statistically for the image to be processed in a second white balance statistics area, wherein the first white balance statistics area is a white balance statistics area corresponding to an underwater scene, and the second white balance statistics area is a white balance statistics area corresponding to an above-water scene;
    performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data.
  28. The apparatus according to claim 27, wherein the step, handled by the processor, of performing automatic white balance adjustment on the image to be processed according to the first white balance data and the second white balance data comprises:
    fusing the first white balance data and the second white balance data;
    performing automatic white balance adjustment on the image to be processed using the fused white balance data.
  29. The apparatus according to claim 27, wherein the step, handled by the processor, of fusing the first white balance data and the second white balance data comprises:
    fusing the first white balance data and the second white balance data using an underwater scene confidence and/or an above-water scene confidence.
  30. The apparatus according to claim 29, wherein the higher the underwater scene confidence, the higher the fusion weight of the first white balance data and the lower the fusion weight of the second white balance data.
  31. The apparatus according to claim 29, wherein the higher the above-water scene confidence, the lower the fusion weight of the first white balance data and the higher the fusion weight of the second white balance data.
  32. The apparatus according to claim 29, wherein the underwater scene confidence is determined by one or more of the following parameters:
    an environment parameter of the shooting scene corresponding to the image to be processed, or a first white balance statistics parameter obtained statistically for the image to be processed in the first white balance statistics area.
  33. The apparatus according to claim 29, wherein the above-water scene confidence is determined by one or more of the following parameters:
    an environment parameter of the shooting scene corresponding to the image to be processed, or a second white balance statistics parameter obtained statistically for the image to be processed in the second white balance statistics area.
  34. The apparatus according to claim 32, wherein the greater the probability of an underwater scene represented by the environment parameter, or the higher the first white balance statistics parameter, the higher the underwater scene confidence.
  35. The apparatus according to claim 33, wherein the lower the probability of an underwater scene represented by the environment parameter, or the higher the second white balance statistics parameter, the higher the above-water scene confidence.
  36. The apparatus according to claim 32 or 33, wherein the environment parameter is determined in one or more of the following ways:
    a distance measured by a distance sensor, a pressure value measured by a pressure sensor, or a depth measured by a depth gauge.
  37. The apparatus according to claim 36, wherein the distance sensor comprises: a distance sensor using infrared light as its medium.
  38. The apparatus according to claim 37, wherein the distance sensor using infrared light as its medium comprises: a 3D ToF module or an infrared ranging sensor.
  39. The apparatus according to claim 38, wherein the apparatus further comprises a 3D ToF module.
  40. The apparatus according to claim 32, wherein the first white balance statistics parameter comprises: parameters related to white points of the image to be processed that belong to the underwater scene.
  41. The apparatus according to claim 32, wherein the second white balance statistics parameter comprises: parameters related to white points of the image to be processed that belong to the above-water scene.
  42. The apparatus according to claim 40 or 41, wherein the white-point-related parameters comprise: a white point ratio.
  43. The apparatus according to claim 40 or 41, wherein the first white balance statistics area comprises: a gray gain interval of white points;
    the second white balance statistics area comprises: a gray gain interval of white points.
  44. The apparatus according to claim 43, wherein the gray gain interval of white points comprises: a red-channel gray gain interval and a blue-channel gray gain interval of white points.
  45. The apparatus according to claim 44, wherein the red-channel gray gain interval of white points in the first white balance statistics area is greater than the red-channel gray gain interval of white points in the second white balance statistics area.
  46. The apparatus according to claim 42, wherein the first white balance data is determined using white points of the image to be processed that belong to the underwater scene.
  47. The apparatus according to claim 42, wherein the second white balance data is determined using white points of the image to be processed that belong to the above-water scene.
  48. The apparatus according to claim 29, wherein the processor is further configured to:
    select, according to the underwater scene confidence/the above-water scene confidence, an image correction matrix corresponding to the underwater scene or an image correction matrix corresponding to the above-water scene to correct the image to be processed.
  49. The apparatus according to claim 48, wherein the step, handled by the processor, of selecting, according to the underwater scene confidence/the above-water scene confidence, the image correction matrix corresponding to the underwater scene or the image correction matrix corresponding to the above-water scene to correct the image to be processed comprises:
    if the underwater scene confidence is higher than a first threshold, selecting the image correction matrix corresponding to the underwater scene to correct the image to be processed; or,
    if the above-water scene confidence is higher than a second threshold, selecting the image correction matrix corresponding to the above-water scene to correct the image to be processed.
  50. The apparatus according to claim 48, wherein the step, handled by the processor, of selecting, according to the underwater scene confidence/the above-water scene confidence, the image correction matrix corresponding to the underwater scene or the image correction matrix corresponding to the above-water scene to correct the image to be processed comprises:
    if the color temperature of the image to be processed is greater than a color temperature threshold and the above-water scene confidence is greater than a first threshold, selecting the image correction matrix corresponding to the underwater scene, otherwise selecting the image correction matrix corresponding to the above-water scene.
  51. The apparatus according to claim 47, wherein the image correction matrix comprises any of the following: a uniformity correction matrix, a color correction matrix, or a contrast correction matrix.
  52. The apparatus according to claim 49, wherein the color temperature is determined using the fused white balance data.
  53. A photographing device, comprising:
    a housing;
    a lens assembly arranged inside the housing;
    a sensor assembly arranged inside the housing and configured to sense light passing through the lens assembly and generate an electrical signal; and,
    the image processing apparatus according to any one of claims 27 to 52.
  54. A movable platform, comprising:
    a body;
    a power system mounted in the body and configured to provide power for the movable platform; and,
    the image processing apparatus according to any one of claims 27 to 52.
  55. The movable platform according to claim 54, wherein the movable platform is a vehicle, an unmanned aerial vehicle, or a movable robot.
  56. A computer-readable storage medium, wherein several computer instructions are stored on the readable storage medium, and when the computer instructions are executed, the steps of the method according to any one of claims 1 to 26 are implemented.
PCT/CN2020/119657 2020-09-30 2020-09-30 Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium WO2022067762A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/119657 WO2022067762A1 (zh) 2020-09-30 2020-09-30 Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/119657 WO2022067762A1 (zh) 2020-09-30 2020-09-30 Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022067762A1 true WO2022067762A1 (zh) 2022-04-07

Family

ID=80949433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119657 WO2022067762A1 (zh) 2020-09-30 2020-09-30 Image processing method and apparatus, photographing device, movable platform, and computer-readable storage medium

Country Status (1)

Country Link
WO (1) WO2022067762A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120188362A1 (en) * 2011-01-21 2012-07-26 Panasonic Corporation Electronic device and imaging device
CN110460826A (zh) * 2019-08-12 2019-11-15 Oppo广东移动通信有限公司 White balance processing method, processing apparatus and mobile terminal
CN110889812A (zh) * 2019-10-11 2020-03-17 大连海事大学 Underwater image enhancement method based on multi-scale fusion of image feature information
CN111047530A (zh) * 2019-11-29 2020-04-21 大连海事大学 Underwater image color correction and contrast enhancement method based on multi-feature fusion
CN111260543A (zh) * 2020-01-19 2020-06-09 浙江大学 Underwater image stitching method based on multi-scale image fusion and SIFT features

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022610A (zh) * 2022-05-26 2022-09-06 秦皇岛威卡威佛吉亚汽车内饰件有限公司 Flat-field correction method for a line-scan camera
CN116055699A (zh) * 2022-07-28 2023-05-02 荣耀终端有限公司 Image processing method and related electronic device
CN116055699B (zh) * 2022-07-28 2023-10-20 荣耀终端有限公司 Image processing method and related electronic device
WO2024056014A1 (zh) * 2022-09-14 2024-03-21 影石创新科技股份有限公司 Image white balance processing method and apparatus, computer device and storage medium
CN117082362A (zh) * 2023-08-25 2023-11-17 山东中清智能科技股份有限公司 Underwater imaging method and device
CN117082362B (zh) * 2023-08-25 2024-05-28 山东中清智能科技股份有限公司 Underwater imaging method and device

Similar Documents

Publication Publication Date Title
WO2022067762A1 (zh) 图像处理方法、装置、拍摄设备、可移动平台及计算机可读存储介质
AU2016200002B2 (en) High dynamic range transition
US10949958B2 (en) Fast fourier color constancy
US8803994B2 (en) Adaptive spatial sampling using an imaging assembly having a tunable spectral response
US7551797B2 (en) White balance adjustment
US8629919B2 (en) Image capture with identification of illuminant
CN104363434B (zh) 图像处理设备
CN111028190A (zh) 图像处理方法、装置、存储介质及电子设备
WO2019019904A1 (zh) 白平衡处理方法、装置和终端
CN101322153A (zh) 调节数字图像的曝光和色标
US9307213B2 (en) Robust selection and weighting for gray patch automatic white balancing
WO2022067761A1 (zh) 图像处理方法、装置、拍摄设备、可移动平台及计算机可读存储介质
Vazquez-Corral et al. Color stabilization along time and across shots of the same scene, for one or several cameras of unknown specifications
CN102067582A (zh) 颜色调整
US20140071264A1 (en) Image capture apparatus and control method thereof
US20180025476A1 (en) Apparatus and method for processing image, and storage medium
US20200228770A1 (en) Lens rolloff assisted auto white balance
JP2012134625A (ja) 光源推定装置及び光源推定方法
JP7277158B2 (ja) 設定装置及び方法、プログラム、記憶媒体
JP2013143593A (ja) 撮像装置、その制御方法およびプログラム
Brown Color processing for digital cameras
JP4752381B2 (ja) 撮像装置
JP2018182700A (ja) 画像処理装置およびその制御方法、プログラム、並びに記憶媒体
US20200228769A1 (en) Lens rolloff assisted auto white balance
JP2020182179A (ja) 画像処理装置およびその制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20955785

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20955785

Country of ref document: EP

Kind code of ref document: A1