CN114449243A - White balance method and terminal equipment - Google Patents


Info

Publication number: CN114449243A
Application number: CN202210104395.4A
Authority: CN (China)
Prior art keywords: color, point, color temperature, target image, mapping
Legal status: Granted (Active)
Other languages: Chinese (zh)
Other versions: CN114449243B
Inventor: 闫三锋
Assignee: Hisense Mobile Communications Technology Co Ltd
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN202210104395.4A; publication of CN114449243A; application granted; publication of CN114449243B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/64: Circuits for processing colour signals
    • H04N 9/73: Colour balance circuits, e.g. white balance circuits or colour temperature control

Abstract

The application relates to the technical field of image processing and provides a white balance method and a terminal device, which are used to solve the problem in the related art that, when a misleading color region exists in an image, the white balance operation easily causes color cast. The method guides the user to select a target image block, so that color blocks in the misleading color region are effectively excluded. Based on the distribution of the user-selected color blocks in a two-dimensional coordinate system, mapping points whose color temperature values are close to the actual color temperature can be screened out, improving the accuracy of the calculated color temperature. This alleviates the deviation that pixel points in the misleading color region introduce into the calculated color temperature of the image; the accurate color temperature calculation effectively improves image quality and alleviates the color cast problem.

Description

White balance method and terminal equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a white balance method and a terminal device.
Background
In the field of image processing, existing Automatic White Balance (AWB) technology obtains the red (R), green (G), and blue (B) component values of each pixel point of an image and determines the position of each pixel point in a two-dimensional coordinate system whose abscissa is R/G (the ratio of R to G) and whose ordinate is B/G (the ratio of B to G). Because different positions in the two-dimensional coordinate system correspond to different color temperature values, the related art determines the color temperature value of each pixel point from its position in the coordinate system, calculates the average color temperature over all pixel points, and then performs white balance according to that average color temperature.
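The coordinate conversion described above can be sketched as follows; this is a minimal illustration, not code from the patent, and the function name is a placeholder:

```python
from typing import List, Tuple

def to_mapping_points(pixels: List[Tuple[float, float, float]]) -> List[Tuple[float, float]]:
    """Map each (R, G, B) pixel to a point (R/G, B/G) in the
    two-dimensional coordinate system described above. Pixels with
    G == 0 carry no usable ratio and are skipped."""
    points = []
    for r, g, b in pixels:
        if g > 0:
            points.append((r / g, b / g))
    return points

# A neutral gray pixel maps to (1.0, 1.0); a reddish pixel sits at R/G > 1.
print(to_mapping_points([(128, 128, 128), (200, 100, 50)]))  # [(1.0, 1.0), (2.0, 0.5)]
```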
However, in practical applications, it is found that the white balance operation in the related art easily causes color cast of an image when a misleading color region exists in the image.
Disclosure of Invention
The application discloses a white balance method and a terminal device, which are used to solve the problem in the related art that, when a misleading color region exists in an image, the white balance operation easily causes color cast.
In a first aspect, the present application provides a white balance method, including:
displaying the collected image;
guiding a user to select a target image block for determining a target color temperature in the image;
converting each pixel point in the target image block into a two-dimensional coordinate system to obtain each mapping point of the target image block in the two-dimensional coordinate system; a first coordinate axis in the two-dimensional coordinate system is used for expressing the ratio of the red component to the green component, and a second coordinate axis is used for expressing the ratio of the blue component to the green component;
screening out mapping points for calculating color temperature based on the distribution condition of each mapping point of the target image block in the two-dimensional coordinate system to obtain an effective point set;
obtaining a target color temperature based on the color temperature of each mapping point in the effective point set;
and correcting the image based on the target color temperature.
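The five steps above can be sketched end to end as follows. All helper logic here is an illustrative stand-in (the screening and color-temperature lookup are simplified placeholders, not the patent's method):

```python
def white_balance_color_temperature(image, target_block_coords):
    """image: rows of (R, G, B) tuples; target_block_coords: the (x, y)
    positions of the user-selected target image block."""
    # Steps 1-2: the displayed image and the user-selected target block.
    block = [image[y][x] for x, y in target_block_coords]
    # Step 3: convert each pixel (R, G, B) to its mapping point (R/G, B/G).
    points = [(r / g, b / g) for r, g, b in block if g > 0]
    # Step 4: screen the mapping points (stand-in: keep all; the patent
    # screens by the points' distribution in the coordinate system).
    valid = points
    # Step 5: average per-point color temperatures (stand-in: a toy
    # lookup that grows with B/G, since bluer light reads as a higher
    # correlated color temperature).
    temps = [4000 + 2000 * bg for _, bg in valid]
    return sum(temps) / len(temps)
```

For a one-row image `[[(100, 100, 100), (100, 100, 200)]]` with both pixels selected, the sketch returns the mean of 6000 and 8000, i.e. 7000.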
Optionally, the guiding the user to select a target image block for determining a target color temperature in the image specifically includes:
outputting color block types for selection, wherein the color block types comprise effective point color blocks and ineffective point color blocks, the effective point color blocks are used for indicating image blocks participating in color temperature calculation, and the ineffective point color blocks are used for indicating image blocks not participating in color temperature calculation;
determining a selected color block type in response to a selection operation of the color block type by a user;
and responding to the selection operation of the image blocks, and determining the target image blocks corresponding to the color block types.
Optionally, when the color block type is the effective point color block, screening out the mapping points used for calculating the color temperature based on the distribution of the mapping points of the target image block in the two-dimensional coordinate system to obtain the effective point set specifically includes:
determining a mean value and a variance of coordinate values of each mapping point in the target image block, wherein the mean value of the coordinate values is a central point of each mapping point in the target image block;
constructing a normal distribution function based on the mean and the variance;
and screening the mapping points in the designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function by taking the central point as a reference to obtain an effective point set.
Optionally, when the color block type is the failure point color block, screening out the mapping points used for calculating the color temperature based on the distribution of the mapping points of the target image block in the two-dimensional coordinate system to obtain the effective point set specifically includes:
determining a mean value and a variance of coordinate values of each mapping point in the target image block, wherein the mean value of the coordinate values is a central point of each mapping point in the target image block;
constructing a normal distribution function based on the mean and the variance;
screening mapping points in a designated probability interval from the mapping points of the target image block according to a probability distribution table of the normal distribution function by taking the central point as a reference to obtain a failure point set;
and screening out points in the failure point set from the mapping points of the image to obtain an effective point set.
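The distinctive last step of this failure-point branch is a set complement: the effective set is everything in the image that was not screened into the failure set. A minimal sketch (names are illustrative, not from the patent):

```python
def effective_from_failure(all_points, failure_points):
    """Failure-point branch: the user marked misleading color blocks, so
    the effective point set is every mapping point of the image that is
    NOT in the screened-out failure point set."""
    failure = set(failure_points)
    return [p for p in all_points if p not in failure]

# The misleading point (2.0, 0.5) is excluded from the effective set.
print(effective_from_failure([(1.0, 1.0), (2.0, 0.5), (1.1, 0.9)], [(2.0, 0.5)]))
```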
Optionally, assigning a weight to each mapping point in the effective point set specifically includes:
for each mapping point, acquiring the color temperature of the mapping point;
determining a color temperature calibration point corresponding to the mapping point on the color temperature curve according to the color temperature;
calculating the distance between the mapping point and the color temperature calibration point;
and assigning a weight to the mapping point based on an inverse correlation of the weight and the distance.
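The weighting step can be sketched as below. The patent only requires the weight to be inversely correlated with the distance to the calibration point on the color temperature curve; the specific form 1 / (1 + d) is an assumption for illustration:

```python
import math

def assign_weights(points, calibration_points):
    """For each mapping point, take its corresponding color-temperature
    calibration point on the (pre-calibrated) color temperature curve
    and weight the mapping point by an inverse function of the Euclidean
    distance between the two. The form 1 / (1 + d) is an assumed choice;
    any monotonically decreasing function of d would satisfy the claim."""
    weights = []
    for (x, y), (cx, cy) in zip(points, calibration_points):
        d = math.hypot(x - cx, y - cy)
        weights.append(1.0 / (1.0 + d))
    return weights
```

A point sitting exactly on its calibration point gets weight 1.0, and weights shrink as points drift away from the curve.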
Optionally, the screening, with the central point as a reference, mapping points within a specified probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function to obtain an effective point set specifically includes:
determining the designated probability interval in the probability distribution table according to the mean and the variance;
determining the number of mapping points in the effective point set according to the designated probability interval;
and selecting the mapping points belonging to the designated probability interval in order of increasing Euclidean distance from the central point, obtaining the effective point set.
Optionally, the screening, with the central point as a reference, mapping points within a specified probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function to obtain a failure point set specifically includes:
determining the designated probability interval in the probability distribution table according to the mean and the variance;
determining the number of mapping points in the failure point set according to the designated probability interval;
and selecting the mapping points belonging to the designated probability interval in order of increasing Euclidean distance from the central point, obtaining the failure point set.
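The screening steps above can be sketched as follows. One simplifying assumption is made: the "designated probability interval" of the normal distribution is treated as a fixed fraction of points nearest the centroid (0.68, roughly one standard deviation), rather than a lookup in a probability distribution table:

```python
import math

def screen_points(points, keep_fraction=0.68):
    """Screen mapping points around their centroid (the mean of the
    coordinate values, i.e. the claims' central point). The claims derive
    a point count from a normal-distribution probability interval; here
    that interval is assumed to correspond to keep_fraction of the
    points, kept in order of increasing Euclidean distance from the
    centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    ranked = sorted(points, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    keep = max(1, round(n * keep_fraction))
    return ranked[:keep]
```

With a cluster near (1, 1) and one outlier at (5, 5), the outlier ranks farthest from the centroid and is screened out first.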
Optionally, the target image block includes one image sub-block or a plurality of image sub-blocks.
In a second aspect, the present application provides a terminal device, including:
a memory for storing processor-executable instructions;
a processor configured to execute the instructions to implement any of the methods as provided in the first aspect of the application.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium, where instructions, when executed by a processor of a terminal device, enable the terminal device to perform any one of the methods as provided in the first aspect of the present application.
In a fourth aspect, a computer program product is provided in an embodiment of the present application, comprising a computer program that, when executed by a processor, performs any of the methods as provided in the first aspect of the present application.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
according to the method and the device, after the image is collected, the target image block selected by the user in the image according to the requirement is obtained, the effective point set is screened out according to the mapping point of the target pixel block in the two-dimensional coordinate system, the target color temperature is calculated based on the mapping point color temperature in the effective point set, and finally the white balance operation is carried out on the image according to the target color temperature. The method and the device guide the user to select the target image block so as to effectively remove the color blocks in the misleading color area. The mapping points with the color temperature values close to the actual color temperature can be screened out through the distribution condition of the color blocks selected by the user in the two-dimensional coordinate system, and the accuracy of the color temperature is improved when the color temperature is calculated. Therefore, the problem that the color temperature of the image is calculated to generate deviation by the pixel points in the misleading color zone is relieved, the image quality is effectively improved through the accurate calculation of the color temperature, and the problem of color cast of the image is relieved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 1b is a schematic diagram of a hardware structure provided in the embodiment of the present application;
FIG. 1c is a schematic diagram of a software architecture provided in an embodiment of the present application;
fig. 2a is a schematic flow chart of white balance according to an embodiment of the present disclosure;
fig. 2b is a schematic diagram illustrating a method for guiding a user to select a target image block in an image through a voice prompt according to an embodiment of the present application;
fig. 2c is a schematic diagram illustrating an embodiment of the present application, which guides a user to select a target image block in an image through animation;
fig. 2d is a schematic diagram illustrating a static interface guiding a user to select a target image block in an image according to an embodiment of the present application;
fig. 3a is a second schematic flow chart of white balance according to an embodiment of the present application;
FIG. 3b is a schematic diagram of a white balance method selection provided in an embodiment of the present application;
fig. 4a is a third schematic flow chart of white balance according to an embodiment of the present application;
fig. 4b is a schematic diagram of a minimum circumscribed rectangle of a plurality of target image blocks according to an embodiment of the present application;
fig. 5 is a fourth schematic flowchart of white balance according to an embodiment of the present application;
FIG. 6a is a schematic diagram of a color temperature curve provided in an embodiment of the present application;
FIG. 6b is a schematic diagram of a gray area provided in an embodiment of the present application;
fig. 6c is a schematic diagram of a relationship between an image and a two-dimensional coordinate system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the description of the embodiments of the present application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, "a plurality" means two or more in the description of the embodiments of the present application.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the embodiments of the application, "a plurality" means two or more unless stated otherwise.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
Automatic White Balance (AWB): the terminal equipment automatically calculates the color temperature value of the image according to the R/G, B/G and the color temperature in the image data collected by the camera, and corrects the image.
A two-dimensional coordinate system: in the two-dimensional coordinate system in the embodiment of the application, the ratio of the R to the G value is used as an abscissa, and the ratio of the B to the G value is used as an ordinate to establish the two-dimensional coordinate system.
Misleading color zone (MLC): after the image is transformed into the two-dimensional coordinate system, a region of the image that distorts the AWB color temperature calculation.
Target color temperature: the embodiment of the present application refers to a color temperature used for white balance operation.
In the field of image processing, when terminal equipment adjusts the color temperature of an image by using an AWB function, the position of each pixel point in the image in a two-dimensional coordinate system is determined by obtaining R, G, B numerical values and the color temperature of the pixel points of the image. Because different positions in the two-dimensional coordinate system correspond to different color temperature values, the color temperature value of each pixel point can be determined according to the positions of the pixel points in the two-dimensional coordinate system, and white balance operation is performed according to the color temperature value of each pixel point.
In a complex environment, an image may contain regions whose color temperature differs greatly from that of the other pixels in the image; such regions are called misleading color regions. When the color temperature is calculated, the pixel points in a misleading color region exert a large influence, causing the calculated color temperature to deviate greatly from the true color temperature and, in turn, causing color cast after the white balance operation.
In view of the above, in order to solve the above problem, embodiments of the present application provide a white balance method and a terminal device.
According to the embodiments of the application, after the image is collected, the target image block selected by the user in the image as needed is obtained, the effective point set is screened out according to the mapping points of the target image block in the two-dimensional coordinate system, the target color temperature is calculated based on the color temperatures of the mapping points in the effective point set, and finally the white balance operation is performed on the image according to the target color temperature. Guiding the user to select the target image block effectively excludes color blocks in the misleading color region. Based on the distribution of the user-selected color blocks in the two-dimensional coordinate system, mapping points whose color temperature values are close to the actual color temperature can be screened out, improving the accuracy of the calculated color temperature. This alleviates the deviation that pixel points in the misleading color region introduce into the calculated color temperature of the image; the accurate color temperature calculation effectively improves image quality and alleviates the color cast problem.
After introducing the design concept of the embodiments of the present application, some brief descriptions are provided below of application scenarios to which the technical solution of the embodiments can be applied. It should be noted that the application scenarios described below are used only to describe the embodiments and are not limiting. In specific implementation, the technical scheme provided by the embodiments of the application can be applied flexibly according to actual needs.
In the page shown in fig. 1a, a user may start a camera through a camera function provided by the terminal device 101, for example, by clicking a camera icon in fig. 1a, and capture an image with the camera. The user operates the terminal device 101, selects a target image block on the collected image, determines the color temperature according to the pixel points in the target image block, and finally performs the white balance operation on the image according to that color temperature. Of course, it should be noted that any information about the user in the embodiments of the present application is obtained only after the user's authorization.
Fig. 1b is a schematic structural diagram of a terminal device 101.
The following specifically describes the embodiment by taking the terminal apparatus 101 as an example. It should be understood that the terminal apparatus 101 shown in fig. 1b is only an example; the terminal apparatus 101 may have more or fewer components than those shown in fig. 1b, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of the hardware configuration of the terminal device 101 according to an exemplary embodiment is shown in fig. 1b. As shown in fig. 1b, the terminal apparatus 101 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then give the downlink data to the processor 180 for processing and may transmit uplink data to the base station. In general, RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal apparatus 101 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. The memory 120 stores an operating system that enables the terminal apparatus 101 to operate. The memory 120 may store an operating system and various application programs, and may also store code for performing the white balance method according to the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal apparatus 101, and specifically, the display unit 130 may include a touch screen 131 disposed on the front surface of the terminal apparatus 101 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal 101. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal apparatus 101. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces described herein.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal device 101, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The terminal device 101 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal device 101 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between the user and the terminal device 101. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The terminal device 101 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another terminal or outputs the audio data to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 170, the terminal device 101 can help a user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access.
The processor 180 is a control center of the terminal apparatus 101, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal apparatus 101 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. The processor 180 may run an operating system, an application program, a user interface display, a touch response, and the white balance method described in the embodiments of the present application. Further, the processor 180 is coupled with the display unit 130.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal device 101 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) also equipped with a bluetooth module through the bluetooth module 181, so as to perform data interaction.
Terminal equipment 101 also includes a power supply 190 (such as a battery) that powers the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The terminal device 101 may also be configured with a power button for powering on and off the terminal, and locking the screen.
Fig. 1c is a block diagram of the software configuration of the terminal apparatus 101 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 1c, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 1c, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephone manager is used to provide the communication function of the terminal apparatus 101. Such as management of call status (including on, off, etc.).
The resource manager provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to the application.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the terminal device vibrates, an indicator light flickers, and the like.
The Android Runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing the Android system.
The core library comprises two parts: the functions that need to be called by the Java language, and the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library can comprise a plurality of functional modules, daemon processes in user space spawned by the init process, the HAL layer, the boot animation, and the like; the processes mentioned in this application run at this layer. For example: surface managers (surface manager), media libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.
The terminal device 101 in the embodiment of the present application may be an electronic device including, but not limited to, a mobile terminal, a desktop computer, a mobile computer, a tablet computer, and the like.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the operational steps of the methods described in the following embodiments or shown in the drawings, more or fewer operational steps may be included in the methods based on conventional or non-inventive effort. For steps with no necessary logical causal relationship, the order of execution is not limited to that provided by the embodiments of the present application.
Fig. 2a is a schematic flow chart of a white balance method according to an embodiment of the present application, which includes the following steps.
Step 201, displaying the collected image.
Step 202, the user is guided to select a target image block in the image for determining the target color temperature.
In some embodiments, one image block or a plurality of image blocks may be selected as the target image block. The user may be guided to select the target image block by a voice prompt, or through a static interface or an animation.
As shown in fig. 2b, if the user is guided to select the target image block by using the voice prompt, the user is guided to select one image block or a plurality of image blocks as the target image block by using the voice prompt.
As shown in fig. 2c, if the animation mode is used to guide the user to select the target image block, the animation is played on the interface to guide the user to select one image block or a plurality of image blocks as the target image block.
As shown in fig. 2d, if the static interface is used to guide the user to select the target image block, the user is prompted to select one image block or a plurality of image blocks as the target image block on the target image block selection interface.
In another embodiment, for convenience of user operation, different color patch types may be provided to help the user select the target image block. The color block types in the embodiment of the application comprise effective point color blocks and failure point color blocks. The effective point color blocks are used for selecting image blocks participating in color temperature calculation, and the invalid point color blocks are used for screening image blocks not participating in target color temperature calculation. The scheme for selecting the target image block based on the type of the color block is shown in fig. 3a, and can be implemented as follows:
and step 301, outputting the color block types for selection.
And step 302, responding to the selection operation of the user on the color block type, and determining the selected color block type.
As shown in fig. 3b, before the terminal device executes the AWB operation, the user may select either the effective point color block or the failure point color block as required to perform the AWB processing on the image. When the user wants to select an area close to the color of the standard gray card to correct the image, the area close to the standard gray card color in the image can be taken as the target image block by selecting the effective point color block.
When the user wants to remove an area with a large color difference from the standard gray card, the area with the large color difference from the standard gray card in the image can be taken as the target image block by selecting the failure point color block, and the white balance operation is performed on the image according to the target color temperature of the color temperature regions other than the target image block.
And 303, responding to the selection operation of the image blocks, and determining the target image block corresponding to the color lump type.
The operation of selecting image blocks under the selected color block type is the same as the manner shown in fig. 2 b-2 d, and is not described here again.
After the target image block is determined, in step 203, each pixel point in the target image block is converted into a two-dimensional coordinate system, so as to obtain each mapping point of the target image block in the two-dimensional coordinate system.
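As a hedged illustration of step 203, the conversion of a pixel into the (R/G, B/G) coordinate system can be sketched as follows. The function names, the zero-green guard, and the sample pixel values are illustrative assumptions, not part of the patent:

```python
# Illustrative sketch: map each pixel of the target image block into the
# two-dimensional (R/G, B/G) coordinate system used for color temperature.
# Names and sample pixels are assumptions for illustration.

def to_mapping_point(r, g, b):
    """Map one R, G, B pixel to its (R/G, B/G) coordinate."""
    if g == 0:          # guard against division by zero in very dark pixels
        g = 1
    return (r / g, b / g)

def map_image_block(pixels):
    """Convert every pixel of a target image block into a mapping point."""
    return [to_mapping_point(r, g, b) for (r, g, b) in pixels]

block = [(128, 128, 128), (200, 100, 50), (60, 120, 180)]
points = map_image_block(block)
```

Note that a neutral gray pixel (equal R, G, B) lands at the coordinate (1.0, 1.0) regardless of its brightness, which is what makes this ratio space convenient for gray-card-based white balance.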
And 204, screening out mapping points for calculating color temperature based on the distribution condition of each mapping point of the target image block in the two-dimensional coordinate system to obtain an effective point set.
In the embodiment of the present application, since the color block types of the target image block include an effective point color block and a failure point color block, different operation methods are executed to determine the effective point set according to different color block types of the target image block selected by the user. The following description is made for the cases of valid dot patches and invalid dot patches, respectively:
1) selecting effective point color blocks:
when the target image block selected by the user is the valid point color block, a method for determining the valid point set based on the mapping points in the target image block is shown in fig. 4 a:
step 401, determine the mean and variance of the coordinate values of each mapping point in the target image block. Wherein the mean value of the coordinate values is the central point of each mapping point in the target image block.
In the embodiment of the present application, the range of the valid point set is related to the coordinate values. The coordinate value of a pixel point is formed from its R, G, B values: the abscissa of the mapping point is R/G and the ordinate is B/G, and together they determine the location of the mapping point in the two-dimensional coordinate system. After the positions of the mapping points in the two-dimensional coordinate system are determined, the means of R/G and B/G are obtained from the R/G and B/G coordinate values of each pixel point in the target image block:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N}\frac{R_i}{G_i}, \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N}\frac{B_i}{G_i}$$

and the variances σx and σy of R/G and B/G are computed, with the coordinates of the center point marked as $(\bar{x}, \bar{y})$.
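The mean and variance computation of step 401 can be sketched as follows, under the assumption that mapping points are plain (R/G, B/G) tuples; the helper name and sample data are invented for illustration:

```python
# Illustrative sketch of step 401: the center point is the mean of the
# mapping coordinates, and the spread is their standard deviation per axis.
# The helper name and the sample points are assumptions for illustration.
import math

def center_and_sigma(points):
    n = len(points)
    x_bar = sum(x for x, _ in points) / n           # mean of R/G
    y_bar = sum(y for _, y in points) / n           # mean of B/G
    sigma_x = math.sqrt(sum((x - x_bar) ** 2 for x, _ in points) / n)
    sigma_y = math.sqrt(sum((y - y_bar) ** 2 for _, y in points) / n)
    return (x_bar, y_bar), (sigma_x, sigma_y)

pts = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9)]
center, (sx, sy) = center_and_sigma(pts)
```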
Step 402, a normal distribution function is constructed based on the mean and variance.
In the embodiments of the present application, after the center point $(\bar{x}, \bar{y})$ is determined from the means of the coordinate values, a normal distribution function is constructed from the mean $(\bar{x}, \bar{y})$ and the variances σx and σy. In the normal distribution function, according to

$$P\left(\bar{x} - n\sigma_x < x \le \bar{x} + n\sigma_x\right), \qquad P\left(\bar{y} - n\sigma_y < y \le \bar{y} + n\sigma_y\right)$$

the ratio of the number of mapping points in the valid point set to the number of mapping points in the target image block can be determined.
And step 403, screening the mapping points in the designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function by taking the central point as a reference to obtain an effective point set.
In the embodiment of the present application, mapping points of the target image block in the two-dimensional coordinate system are distributed discretely, and are distributed most at the center position of the target image block, so that it is necessary to determine the effective point set according to the center point of the target image block.
In the embodiments of the present application, for different values of n, the interval $(\bar{x} \pm n\sigma_x,\ \bar{y} \pm n\sigma_y)$ determines different specified probability intervals, and the ratio of the number of mapping points in the valid point set to the number of mapping points in the target image block differs accordingly. After the value of n is determined according to the specified probability interval, the number of mapping points in the valid point set is obtained. The mapping points belonging to the specified probability interval are then screened out in order of increasing Euclidean distance from the center point, yielding the valid point set.

For example, let the coordinates of the center point of all mapping points in the target image block be $(\bar{x}, \bar{y})$. The number of mapping points in the valid point set is determined according to the specified probability interval and taken as a reference value; the Euclidean distance between each mapping point and the center point is then calculated until a number of mapping points equal to the reference value has been screened out, and the set formed by the screened mapping points is the valid point set.
Assuming that n is 1, the number of mapping points in the valid point set is 68.26% of the number of mapping points in the target image block; if n is 2, the number of the mapping points in the effective point set is 95.44% of the number of the mapping points in the target image block; similarly, if n is 3, the number of mapping points in the valid point set is 99.74% of the number of mapping points in the target image block. In the present application, the larger the value of n is, the larger the number of mapping points in the effective point set is.
If an area with more concentrated mapping point distribution is required to be selected as an effective point set, taking 1 as the value of n to calculate the effective point set; if it is required that the mapping points in the target image block are included as many as possible in the valid point set, the valid point set is calculated by taking the value of n to 3.
If the color block type of the target image block selected by the user is an effective point color block, the value of n is 1, so that the calculation result is more accurate; if the color block type of the target image block selected by the user is a failure point color block, the influence of the failure point on the target color temperature can be better eliminated if the value of n is 3.
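The screening of step 403 can be sketched as below: the n-sigma ratios quoted above (68.26%, 95.44%, 99.74%) fix how many points to keep, and points are selected in order of increasing Euclidean distance from the center point. The function names and the uniform sample points are assumptions for illustration:

```python
# Hedged sketch of step 403: keep the fraction of mapping points implied by
# the n-sigma interval, chosen in order of increasing Euclidean distance
# from the center point. Names and sample data are illustrative assumptions.
import math

SIGMA_RATIO = {1: 0.6826, 2: 0.9544, 3: 0.9974}   # 1-, 2-, 3-sigma coverage

def valid_point_set(points, center, n=1):
    quota = round(len(points) * SIGMA_RATIO[n])    # the reference value
    cx, cy = center
    by_distance = sorted(points,
                         key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return by_distance[:quota]

pts = [(0.01 * i, 0.0) for i in range(100)]        # 100 sample mapping points
valid = valid_point_set(pts, (0.0, 0.0), n=1)
```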
In another embodiment of the present application, the center point of the mapping points may also be determined by a clustering algorithm, and the valid point set determined from it. First, the radius R of a sliding window is determined, and any mapping point in the target image block is selected as the center of a circle to establish the sliding window; the mapping point density in the sliding window is calculated from the area of the sliding window and the number of mapping points inside it. After the density is calculated, the sliding window is moved within the target image block and the density is calculated again, until the mapping point density in the sliding window no longer increases. The mean of the mapping point coordinates in the sliding window corresponding to the maximum density is taken as the center point of the mapping points in the target image block. After the cluster center is determined, the valid point set can likewise be determined according to the radius of the sliding window.
There are various clustering algorithms, and one of the clustering algorithms is exemplarily given in the present application, and the present application does not limit what clustering algorithm is adopted to calculate the central point and the effective point set of the target image block in the present application.
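A minimal mean-shift-style sketch of the sliding-window clustering described above, with all names and sample data invented for illustration:

```python
# Minimal mean-shift-style sketch of the sliding-window clustering above:
# a circular window of radius R is shifted toward the mean of the points it
# covers until the covered set stops changing, and the densest window's mean
# is taken as the center point. All names are illustrative assumptions.
import math

def points_in_window(points, center, radius):
    cx, cy = center
    return [p for p in points if math.hypot(p[0] - cx, p[1] - cy) <= radius]

def density_peak_center(points, radius, iterations=20):
    best = points[0]                   # start the window at any mapping point
    for _ in range(iterations):
        inside = points_in_window(points, best, radius)
        # shift the window to the mean of the points it currently covers
        shifted = (sum(x for x, _ in inside) / len(inside),
                   sum(y for _, y in inside) / len(inside))
        if shifted == best:            # density no longer increases
            break
        best = shifted
    return best

pts = [(1.0, 1.0), (1.02, 0.98), (0.98, 1.02), (2.5, 2.5)]
center = density_peak_center(pts, radius=0.2)
```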
In an embodiment of the present application, as shown in fig. 4b, if there are multiple target image blocks selected by the user (for example, there are image block 1, image block 2, and image block 3 in fig. 4 b), the union of all selected image blocks is used as the target image block.
2) Selecting a failure point color block: when a user selects a target image block of the failure point color block type, a method for determining the valid point set based on the pixel points in the target image block is shown in fig. 5.
Step 501, determining the mean and variance of the coordinate values of each mapping point in the target image block.
Step 502, a normal distribution function is constructed based on the mean and variance.
And 503, screening the mapping points in the designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function by taking the central point as a reference to obtain a failure point set.
And step 504, screening out points in the failure point set from the mapping points of the image to obtain an effective point set.
In the embodiment of the application, when a pixel point with a low color temperature exists in an image, in order to ensure that the color temperature of the pixel point does not affect the white balance operation of the image, an area where the pixel point is located is used as a target image block by selecting a color block of a failure point, namely, the color block of a misleading color area is used as the target image block, and a failure point set in the target image block is calculated. And calculating the target color temperature by taking the points in the image from which the failure point set is removed as an effective point set, thereby reducing the deviation in calculating the color temperature.
In the embodiment of the present application, the method for determining the failure point set is the same as the method for determining the valid point set. After the failure point set is determined, the points outside the failure point set are taken as the valid point set, and the image is corrected according to the color temperature of the pixel points in the valid point set.
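Steps 503 and 504 can be sketched as a simple set complement; the sample mapping points and names are illustrative assumptions:

```python
# Illustrative sketch of steps 503-504: mapping points that fall in the
# failure point set are removed from the image's mapping points, and the
# remainder forms the valid point set used to compute the target color
# temperature. Sample data and names are assumptions, not from the patent.

def complement_valid_set(image_points, failure_points):
    failure = set(failure_points)
    return [p for p in image_points if p not in failure]

image_pts = [(1.0, 1.0), (2.0, 0.5), (0.5, 1.5), (1.1, 0.9)]
failure_pts = [(2.0, 0.5)]             # e.g. a misleading color region
valid = complement_valid_set(image_pts, failure_pts)
```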
In summary, when the user wishes to select an area close to the color of the standard gray card to correct the image, the image can be corrected by selecting the color blocks of the effective dots; when the user wants to remove the area with larger color difference from the standard gray card, the image can be corrected by selecting the failure point color block.
In some embodiments, if the target image block includes pixel points of a misleading color region, the color temperature of those pixel points differs greatly from the actual color temperature and distorts the calculation of the target color temperature, so the pixel points of the misleading color region need to be excluded as failure points.
After determining the valid point set, a target color temperature is obtained in step 205 based on the color temperature of each mapping point in the valid point set.
In the embodiment of the present application, the color temperature curve in the two-dimensional coordinate system can be established with a standard gray card of 18% reflectivity (which has only one gray color). Because different light sources have different color temperature values, the R, G, B values of the same standard gray card differ under different light sources. Therefore, by taking R/G as the horizontal axis and B/G as the vertical axis, and acquiring the R, G, B values of the standard gray card under different light sources, the coordinates corresponding to the standard gray card under each light source can be determined in the two-dimensional coordinate system. Connecting all the coordinates with a smooth curve yields the color temperature curve.
That is, for the same standard gray card, the R, G, B values differ under different color temperatures, and therefore the R/G and B/G values of the standard gray card differ as well. Since the R/G and B/G values are the coordinates of the standard gray card in the two-dimensional coordinate system, the coordinates of the same standard gray card differ under different color temperatures. Accordingly, different coordinate points of the same standard gray card are obtained under different color temperatures, and these coordinate points are then fitted to obtain the color temperature curve.
For example, under a light source with a color temperature value of 2500k, point E is calculated from the R, G, B values of a standard gray card (as shown in fig. 6a). Similarly, point A is calculated from the R, G, B values of the standard gray card under a light source with a color temperature value of 6500k. By analogy, different points of the same standard gray card in the two-dimensional coordinate system are obtained under a plurality of color temperatures. As shown in fig. 6a, the higher the color temperature, the closer the point of the standard gray card is to the ordinate axis and the farther it is from the abscissa axis; the lower the color temperature, the closer the point is to the abscissa axis and the farther it is from the ordinate axis. Thus, at the low color temperature of 2500k, point E lies close to the abscissa axis and away from the ordinate axis on the color temperature curve, and at the high color temperature of 6500k, point A lies close to the ordinate axis and away from the abscissa axis. After the points of the same standard gray card under a plurality of color temperatures are obtained, each point on the color temperature curve corresponds to one color temperature. The correspondence between each point on the color temperature curve and its color temperature is established while the curve is constructed, so that the color temperature corresponding to any point on the curve can be found through the color temperature curve.
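Under the assumption that calibration has already produced a handful of (R/G, B/G) points with known color temperatures, the lookup described above can be sketched as a nearest-point search. The coordinate values in `CURVE` are invented for illustration; real values would come from sensor calibration:

```python
# Hedged sketch of the color temperature curve lookup: (R/G, B/G) points of
# the same standard gray card under several illuminants are paired with
# their color temperatures, and an unknown mapping point is assigned the
# color temperature of the nearest calibration point. The coordinates below
# are invented for illustration, not real calibration data.
import math

# (R/G, B/G) -> color temperature in kelvin; low color temperatures sit
# nearer the abscissa (R/G) axis, high ones nearer the ordinate (B/G) axis
CURVE = [((1.6, 0.5), 2500),
         ((1.2, 0.8), 4000),
         ((0.9, 1.1), 5000),
         ((0.7, 1.4), 6500)]

def nearest_color_temperature(point):
    px, py = point
    _, cct = min(CURVE,
                 key=lambda c: math.hypot(c[0][0] - px, c[0][1] - py))
    return cct

cct = nearest_color_temperature((0.95, 1.05))
```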
As shown in fig. 6b, since the color temperatures of similar colors are close, in order to be compatible with more colors, the bounded region around the color temperature curve is called the gray region, and the range of the gray region can be determined by adjusting the light source, the contour parameters of the gray region, and the like. The gray region covers the color temperatures of almost all pixel points in the image; if the mapping point corresponding to a pixel point is not in the gray region, the color temperature of that pixel point is abnormal, and such pixel points can be excluded during the calculation. For example, point C in fig. 6b lies outside the gray region; if point C is within the valid point set, it is excluded, i.e., it does not participate in determining the final target color temperature.
If the mapping point is in the gray area, the deviation between the color temperature of the mapping point and the color temperature of the corresponding color temperature calibration point is small, and the closer the mapping point is to the point on the color temperature curve under the condition of the same color temperature in the gray area, the more accurate the color temperature value of the mapping point is.
In the embodiment of the present application, the sets of mapping points of different image blocks in the same image may overlap. As shown in fig. 6c, in the failure point set 1 and the failure point set 2, since the color temperature values of the pixel points in the failure point set 1 and the failure point set 2 are similar, the sets of the mapping points in the two-dimensional coordinate system of the failure point set 1 and the failure point set 2 are the same, and the valid point sets obtained by the user by selecting the failure point set 1 and/or the failure point set 2 are also the same. Therefore, the selection of some image blocks in the embodiment of the present application can basically cover the case of similar or identical image blocks in the whole image, without having to select each adjacent or similar image block once.
In the embodiment of the present application, after the valid point set is obtained, in order to facilitate determination of the target color temperature, a weight may be assigned to each point in the valid point set (i.e., the valid point set excluding the points that are not in the gray region). When the method is implemented, a color temperature calibration point is determined, then a weight is distributed to each mapping point in an effective point set based on the color temperature calibration point, and then the target color temperature is determined by adopting the distributed weight and a mode of weighted summation and averaging for the color temperature value corresponding to each mapping point in the effective point set. In practice, there are many ways to calculate the color temperature calibration point in the AWB, and this application is not limited thereto. The embodiment of the application provides a method for determining a color temperature calibration point, which comprises the following steps:
and calculating a color temperature calibration point according to the color temperature values of pixel points with the same or similar colors to the standard gray card in the image, and distributing weights to the mapping points in the effective point set according to the color temperature calibration point. For example, if there are 100 pixels in the image that have the same or similar color as the standard gray card, wherein 80 pixels are distributed near the point on the color temperature curve with the color temperature value of 4500k, the color temperature calibration point of the image is the point on the color temperature curve corresponding to the color temperature value of 4500 k.
After the color temperature calibration point is obtained, when weights are assigned, the Euclidean distance between each mapping point in the valid point set and the color temperature calibration point can be calculated from their coordinate values, and a weight is assigned to the color temperature of that mapping point accordingly. The Euclidean distance is inversely related to the weight: the larger the Euclidean distance, the smaller the weight of the mapping point; the smaller the Euclidean distance, the larger the weight.
As shown in fig. 6b, point F is the color temperature calibration point, and the euclidean distance between point C and point F is further than the euclidean distance between point D and point F, the weight of point C is less than the weight of point D.
After the weight of each mapping point in the valid point set is determined, the AWB computes the weighted average of the mapping point color temperatures in the valid point set according to these weights to obtain the target color temperature. Then, the image is corrected based on the target color temperature in step 206.
Since the weights of the mapping point color temperatures differ, the larger a point's deviation from the color temperature calibration point, the smaller the proportion of its color temperature in the target color temperature; the closer it is to the calibration point, the larger the proportion. The target color temperature obtained by weighted averaging is therefore closer to the standard color temperature under the light source in the image.
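The inverse-distance weighting and the weighted averaging of steps 205 and 206 can be sketched as follows; the epsilon guard against a zero distance and all names are assumptions for illustration:

```python
# Hedged sketch of the weighted target color temperature: each valid mapping
# point's color temperature is weighted by the inverse of its Euclidean
# distance to the color temperature calibration point, and the weighted
# average is the target color temperature. Names and the epsilon guard are
# illustrative assumptions.
import math

def target_color_temperature(points, temps, calibration_point, eps=1e-6):
    cx, cy = calibration_point
    # inverse-distance weights: closer to the calibration point -> larger
    weights = [1.0 / (math.hypot(x - cx, y - cy) + eps) for x, y in points]
    return sum(w * t for w, t in zip(weights, temps)) / sum(weights)

pts = [(1.0, 1.0), (1.5, 0.6)]     # second point lies far from calibration
temps = [5000.0, 3000.0]
cct = target_color_temperature(pts, temps, calibration_point=(1.0, 1.0))
```

Because the first point coincides with the calibration point, its weight dominates and the result stays very close to 5000k, matching the behavior described above.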
In this application, the color temperature of the image is determined from the target image block selected by the user, which prevents the color temperature from deviating under the influence of misleading color regions during the AWB operation.
In an exemplary embodiment, the present application also provides a computer-readable storage medium comprising instructions, such as the memory 120 comprising instructions, executable by the processor 180 of the terminal device 101 to perform the above-described white balancing method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by the processor 180, implements the white balancing method as provided herein.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of white balancing, the method comprising:
displaying the collected image;
guiding a user to select a target image block for determining a target color temperature in the image;
converting each pixel point in the target image block into a two-dimensional coordinate system to obtain each mapping point of the target image block in the two-dimensional coordinate system; a first coordinate axis in the two-dimensional coordinate system is used for expressing the ratio of the red component to the green component, and a second coordinate axis is used for expressing the ratio of the blue component to the green component;
screening out mapping points for calculating color temperature based on the distribution condition of each mapping point of the target image block in the two-dimensional coordinate system to obtain an effective point set;
obtaining a target color temperature based on the color temperature of each mapping point in the effective point set;
and correcting the image based on the target color temperature.
2. The method according to claim 1, wherein the guiding the user to select a target image block in the image for determining a target color temperature comprises:
outputting color block types for selection, wherein the color block types comprise effective point color blocks and ineffective point color blocks, the effective point color blocks are used for indicating image blocks participating in color temperature calculation, and the ineffective point color blocks are used for indicating image blocks not participating in color temperature calculation;
determining a selected color block type in response to a selection operation of the color block type by a user;
and responding to the selection operation of the image blocks, and determining the target image blocks corresponding to the color block types.
3. The method according to claim 2, wherein, when the color block type is the valid-point color block, screening out the mapping points used for calculating the color temperature based on the distribution of the mapping points of the target image block in the two-dimensional coordinate system to obtain the valid point set specifically comprises:
determining the mean and the variance of the coordinate values of the mapping points in the target image block, the mean of the coordinate values being the center point of the mapping points of the target image block;
constructing a normal distribution function based on the mean and the variance;
and screening, with the center point as a reference, the mapping points within a designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function, to obtain the valid point set.
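The screening in claim 3 can be sketched, under assumptions, as keeping the points that fall inside a designated probability interval of a normal distribution fitted to the mapping points. The ±1-sigma interval (roughly 68% per axis) and all names below are illustrative choices, not specified by the claim.

```python
import numpy as np

def screen_valid_points(points, n_sigma=1.0):
    """Keep the mapping points within n_sigma of the per-axis mean."""
    mean = points.mean(axis=0)               # center point of the mapping points
    std = points.std(axis=0)
    std = np.where(std == 0.0, 1e-12, std)   # degenerate axis: keep everything
    z = np.abs(points - mean) / std          # per-axis z-scores
    return points[np.all(z <= n_sigma, axis=1)]

# Four clustered chromaticity points plus one outlier; the outlier lies
# more than one sigma from the center on both axes and is dropped.
pts = np.array([[1.0, 1.0], [1.1, 1.0], [0.9, 1.0], [1.0, 1.1], [5.0, 5.0]])
valid = screen_valid_points(pts)
print(len(valid))
```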
4. The method according to claim 2, wherein, when the color block type is the invalid-point color block, screening out the mapping points used for calculating the color temperature based on the distribution of the mapping points of the target image block in the two-dimensional coordinate system to obtain the valid point set specifically comprises:
determining the mean and the variance of the coordinate values of the mapping points in the target image block, the mean of the coordinate values being the center point of the mapping points of the target image block;
constructing a normal distribution function based on the mean and the variance;
screening, with the center point as a reference, the mapping points within a designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function, to obtain an invalid point set;
and excluding the points of the invalid point set from the mapping points of the image, to obtain the valid point set.
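The final step of claim 4 is a set difference: every point of the invalid point set is removed from the mapping points of the whole image, and the remainder forms the valid point set. A minimal sketch, with illustrative names:

```python
import numpy as np

def exclude_invalid(all_points, invalid_points):
    """Keep the rows of all_points that do not appear in invalid_points."""
    invalid = {tuple(p) for p in invalid_points}
    keep = np.array([tuple(p) not in invalid for p in all_points])
    return all_points[keep]

# Three mapping points of the image; one was screened into the invalid set.
all_pts = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
bad = np.array([[2.0, 2.0]])
print(exclude_invalid(all_pts, bad))
```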
5. The method according to claim 3, wherein screening, with the center point as a reference, the mapping points within the designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function to obtain the valid point set specifically comprises:
determining the designated probability interval in the probability distribution table according to the mean and the variance;
determining, according to the designated probability interval, the number of mapping points in the valid point set;
and selecting, in order of increasing Euclidean distance from the center point, the mapping points belonging to the designated probability interval, to obtain the valid point set.
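One way to realize the ordering in claim 5, as a hedged sketch: the designated probability interval fixes how many points to keep, and candidates are taken in order of increasing Euclidean distance from the center point. The 68% keep fraction (a ±1-sigma interval) and the names are illustrative assumptions.

```python
import numpy as np

def select_by_distance(points, keep_fraction=0.68):
    """Keep the keep_fraction of points nearest the center, near to far."""
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)   # Euclidean distances
    order = np.argsort(dist)                         # near -> far
    n_keep = max(1, int(round(len(points) * keep_fraction)))
    return points[order[:n_keep]]

# Nine points near (1, 1) plus one far point; keeping 68% of 10 points
# retains the 7 nearest, so the far point is always excluded.
rng = np.random.default_rng(0)
near = rng.normal(loc=[1.0, 1.0], scale=0.01, size=(9, 2))
sel = select_by_distance(np.vstack([near, [[10.0, 10.0]]]))
print(len(sel))
```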
6. The method according to claim 4, wherein screening, with the center point as a reference, the mapping points within the designated probability interval from the mapping points of the target image block according to the probability distribution table of the normal distribution function to obtain the invalid point set specifically comprises:
determining the designated probability interval in the probability distribution table according to the mean and the variance;
determining, according to the designated probability interval, the number of mapping points in the invalid point set;
and selecting, in order of increasing Euclidean distance from the center point, the mapping points belonging to the designated probability interval, to obtain the invalid point set.
7. The method according to any one of claims 1-6, wherein the target image block comprises one image sub-block or a plurality of image sub-blocks.
8. A terminal device, comprising:
a memory for storing processor-executable instructions;
a processor configured to execute the instructions to implement the method of any one of claims 1-7.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of a terminal device, enable the terminal device to perform the method of any one of claims 1-7.
10. A computer program product, comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-7.
CN202210104395.4A 2022-01-28 2022-01-28 White balance method and terminal equipment Active CN114449243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210104395.4A CN114449243B (en) 2022-01-28 2022-01-28 White balance method and terminal equipment

Publications (2)

Publication Number Publication Date
CN114449243A true CN114449243A (en) 2022-05-06
CN114449243B CN114449243B (en) 2023-12-12

Family

ID=81370394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210104395.4A Active CN114449243B (en) 2022-01-28 2022-01-28 White balance method and terminal equipment

Country Status (1)

Country Link
CN (1) CN114449243B (en)

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPN743096A0 (en) * 1996-01-05 1996-02-01 Canon Kabushiki Kaisha Force field halftoning
CA2348325A1 (en) * 2000-05-23 2001-11-23 Jonathan Martin Shekter System for manipulating noise in digital images
CN101072365A (en) * 2006-05-11 2007-11-14 奥林巴斯映像株式会社 White balance control method, imaging apparatus and storage medium storing white balance control program
CN101079953A (en) * 2006-05-24 2007-11-28 索尼株式会社 Information processing system, information processing device, information processing method, and program
CN101079954A (en) * 2007-06-26 2007-11-28 北京中星微电子有限公司 Method and device for realizing white balance correction
WO2008115547A1 (en) * 2007-03-19 2008-09-25 Sti Medical Systems, Llc A method of automated image color calibration
JP2008271096A (en) * 2007-04-19 2008-11-06 Mitsubishi Denki Micom Kiki Software Kk Method and device for correcting gray balance of image data, and storage medium
CN101819626A (en) * 2009-02-26 2010-09-01 何玉青 Image fusion-based iris spot elimination method
US20130050236A1 (en) * 2011-08-23 2013-02-28 Novatek Microelectronics Corp. White balance method for display image
CN103546732A (en) * 2013-10-18 2014-01-29 广州市浩云安防科技股份有限公司 Image processing method and system
CN103841390A (en) * 2014-02-22 2014-06-04 深圳市中兴移动通信有限公司 Shooting method and device
US20140218540A1 (en) * 2013-02-05 2014-08-07 Google Inc. Noise Models for Image Processing
US20170078636A1 (en) * 2015-09-10 2017-03-16 Samsung Electronics Co., Ltd. Image processing device and auto white balancing method
WO2017206400A1 (en) * 2016-05-30 2017-12-07 乐视控股(北京)有限公司 Image processing method, apparatus, and electronic device
CN110351537A (en) * 2019-07-31 2019-10-18 深圳前海达闼云端智能科技有限公司 White balance method, device, storage medium and the electronic equipment of Image Acquisition
WO2020142871A1 (en) * 2019-01-07 2020-07-16 华为技术有限公司 White balance processing method and device for image
WO2020172888A1 (en) * 2019-02-28 2020-09-03 华为技术有限公司 Image processing method and device
CN112655193A (en) * 2020-05-07 2021-04-13 深圳市大疆创新科技有限公司 Camera, alignment method and device thereof and cradle head
CN112788322A (en) * 2019-11-07 2021-05-11 浙江宇视科技有限公司 Adaptive white balance processing method, device, medium, and electronic apparatus
CN113301318A (en) * 2021-05-24 2021-08-24 展讯半导体(南京)有限公司 Image white balance processing method and device, storage medium and terminal
CN113542711A (en) * 2020-04-14 2021-10-22 青岛海信移动通信技术股份有限公司 Image display method and terminal
CN113676716A (en) * 2021-08-23 2021-11-19 深圳创维-Rgb电子有限公司 White balance control method, white balance control device, terminal equipment and storage medium
CN113766203A (en) * 2020-06-03 2021-12-07 杭州海康威视数字技术股份有限公司 Image white balance processing method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866755A (en) * 2022-05-17 2022-08-05 北京奕斯伟计算技术有限公司 Automatic white balance method and device, computer storage medium and electronic equipment
CN114866755B (en) * 2022-05-17 2023-12-19 北京奕斯伟计算技术股份有限公司 Automatic white balance method and device, computer storage medium and electronic equipment
CN115190283A (en) * 2022-07-05 2022-10-14 北京地平线信息技术有限公司 White balance adjusting method and device
CN115190283B (en) * 2022-07-05 2023-09-19 北京地平线信息技术有限公司 White balance adjustment method and device

Also Published As

Publication number Publication date
CN114449243B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
US11385857B2 (en) Method for displaying UI component and electronic device
CN111343339B (en) Mobile terminal and image display method thereof
CN114449243B (en) White balance method and terminal equipment
CN112925596B (en) Mobile terminal and display method of display object thereof
CN112184595B (en) Mobile terminal and image display method thereof
CN113038141B (en) Video frame processing method and electronic equipment
CN111031377B (en) Mobile terminal and video production method
CN115460355B (en) Image acquisition method and device
CN114449171B (en) Method for controlling camera, terminal device, storage medium and program product
CN114063945B (en) Mobile terminal and image display method thereof
CN113360122B (en) Mobile terminal and text display method thereof
CN112799557B (en) Ink screen display control method, terminal and computer readable storage medium
CN114666497A (en) Imaging method, terminal device, storage medium, and program product
CN113254132B (en) Application display method and related device
CN111399955B (en) Mobile terminal and interface display method of application program thereof
CN115033199A (en) Mobile terminal and image display method thereof
CN114067758A (en) Mobile terminal and image display method thereof
CN113838437A (en) Method, terminal device and medium for adjusting screen brightness
CN113760164A (en) Display device and response method of control operation thereof
CN111479075B (en) Photographing terminal and image processing method thereof
CN113253905B (en) Touch method based on multi-finger operation and intelligent terminal
WO2024088225A1 (en) Bluetooth ranging method and system, and electronic device
CN111988530B (en) Mobile terminal and photographing method thereof
CN111913772A (en) Terminal and desktop display method
CN117978308A (en) Bluetooth ranging method, electronic equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Applicant after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Applicant before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.

GR01 Patent grant