CN115088253A - Image color processing method and device - Google Patents

Image color processing method and device

Info

Publication number
CN115088253A
Authority
CN
China
Prior art keywords
value
color
lookup table
component
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080096708.5A
Other languages
Chinese (zh)
Inventor
李蒙
陈海
王海军
张秀峰
郑成林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN115088253A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/02 Digital function generators
    • G06F1/03 Digital function generators working, at least partly, by table look-up
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/544 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
    • G06F7/556 Logarithmic or exponential functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Optimization (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Color Image Communication Systems (AREA)

Abstract

An image color processing method and device. The method includes: determining color values of a plurality of color components of a pixel of an image to be processed (S301); determining ratios of the luminance value of the pixel to the color values of the plurality of color components, respectively (S302); determining a first color adjustment coefficient of the pixel according to the ratios and a first lookup table (S303); and performing color processing on the pixel according to the first color adjustment coefficient to obtain a target image (S304). The method can reduce color deviation in the color-processed image and improve its quality. Each step can be implemented by a hardware circuit of the terminal device; for example, determining the first color adjustment coefficient through the first lookup table allows the adjustment data to be processed in hardware, so that the image color processing flow can be executed by a hardware circuit, which improves the practicality of the image color processing method.

Description

Image color processing method and device
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image color processing method and apparatus.
Background
Optical digital imaging converts the light radiation of a real scene into an electrical signal through an image sensor and stores it as a digital image. The purpose of image display is to reproduce, by means of a display device, the real scene depicted by the digital image, so that the user obtains the same visual perception as when directly observing the real scene.
In the field of image processing, the color of an image needs to be adjusted in order to address the problem that the color space is not uniform. How to adjust the color of an image is a problem to be solved.
Disclosure of Invention
The application provides an image color processing method and device, which are used for realizing the adjustment of image colors and improving the image quality.
In a first aspect, an image color processing method is provided, where the execution subject of the method may be a terminal device. The method specifically includes: determining color values of a plurality of color components of a pixel of an image to be processed; determining ratios of the luminance value of the pixel to the color values of the plurality of color components, respectively; determining a first color adjustment coefficient of the pixel according to the ratio and a first lookup table; and performing color processing on the pixel according to the first color adjustment coefficient to obtain a target image. Because the first color adjustment coefficient is determined according to the ratio of the luminance value of the pixel to the color value of the color component, performing color processing on the image to be processed in this way can reduce the color deviation of the processed image and improve its quality. Each step may be implemented by a hardware circuit of the terminal device; for example, determining the first color adjustment coefficient through the first lookup table allows the adjustment data to be processed in hardware, so that the image color processing flow can be executed by a hardware circuit, which improves the practicality of the image color processing method.
In one possible design, the first lookup table includes a mapping relationship between color adjustment coefficients and preset ratios.
In one possible design, the determining the first color adjustment coefficient of the pixel according to the ratio and the first lookup table may include the following cases. When the preset ratio includes the ratio: determining the first color adjustment coefficient corresponding to the ratio according to the mapping relationship. When the preset ratio does not include the ratio: determining a first preset ratio and a second preset ratio in the first lookup table; respectively determining a first coefficient and a second coefficient corresponding to the first preset ratio and the second preset ratio according to the mapping relationship; and performing an interpolation operation on the first coefficient and the second coefficient to obtain the first color adjustment coefficient corresponding to the ratio. Determining the first color adjustment coefficient by interpolation in this way can reduce the number of entry values of the first lookup table, reduce the space occupied by the first lookup table, and reduce the complexity of the hardware circuit.
Optionally, the interpolation operation includes any one of the following types of operations: linear interpolation, nearest-neighbor interpolation, bilinear (quadratic) interpolation, cubic interpolation, or Lanczos interpolation.
In one possible design, the mapping relationship between a first entry value of the first lookup table and a first lookup table value satisfies a first power function. The first entry value may correspond to the color adjustment coefficient, and the first lookup table value may correspond to the preset ratio. For example, the first power function may be represented as f(x) = x^b.
Optionally, the exponent b of the first power function is a function coefficient. The coefficient b of the first power function may be determined by means of a look-up table using image or image sequence statistics, which may include the maximum, minimum, mean, standard deviation, and histogram distribution information of the image or image sequence.
In one possible design, the first lookup table value is a fixed-point value, and the first entry value of the first lookup table may be determined as follows: performing inverse quantization on the fixed-point value according to the maximum value of the value range determined by the bit width of the color value to obtain a floating-point value; determining the function value of the floating-point value based on the first power function; and quantizing the function value according to a preset quantization coefficient to obtain the first entry value of the first lookup table. When the first entry value is determined in this way, both the lookup table value and the first entry value of the first lookup table can be fixed-point values whose mapping relationship conforms to the first power function, so the input data and the output data of the first lookup table can be fixed-point values, which makes a hardware implementation feasible. The table generation itself can be implemented in software, so the practicality of the image color processing flow is improved through the separation of software and hardware. Moreover, because the generation is implemented in software, the software flow can be updated at any time according to the image color processing effect, giving high adaptability and good adjustability of the effect.
In one possible design, the first lookup table value is determined based on the index values of the first lookup table and the step size between the index values of the first lookup table. The step size may be an integer equal to 1 or greater than 1.
In one possible design, the method further includes: determining the first lookup table corresponding to the first value range in which the ratio is located; and the value range determined by the bit width of the color value comprises the first value range and a second value range corresponding to a second lookup table.
In one possible design, a mapping relationship between a second entry value of the second lookup table and a second lookup table value satisfies the first power function. The second lookup table is generated in a similar manner to the first lookup table and may be cross-referenced.
In one possible design, the minimum value of the first value range is greater than the maximum value of the second value range;
correspondingly, the first lookup table value of the first lookup table is determined based on the index value of the first lookup table, the step length between the index values of the first lookup table, and the maximum value of the second value range.
Similarly, if the minimum value of the second value range is greater than the maximum value of the first value range, the first lookup table value of the first lookup table is determined based on the index values of the first lookup table and the step size between the index values of the first lookup table.
In one possible design, the step size between the index values of the first lookup table and the step size between the index values of the second lookup table may be different.
In one possible design, the color processing on the pixel according to the first color adjustment coefficient may be implemented by: determining a second color adjustment coefficient for the pixel; multiplying the first color adjustment coefficient by the second color adjustment coefficient; and carrying out color processing on the pixel according to the multiplied product.
In one possible design, the image to be processed is an image subjected to dynamic range adjustment processing; the determining of the second color adjustment coefficient of the pixel comprises the following steps: determining an electrical signal ratio of the pixel after the dynamic range adjustment process and before the dynamic range adjustment process; and determining the second color adjustment coefficient according to the ratio of the electric signals. The color deviation phenomenon of the image to be processed caused by dynamic range adjustment processing can be reduced, and the quality of the image subjected to color processing is improved.
In one possible design, the second color adjustment coefficient corresponding to the ratio of the electrical signals is determined by a look-up table. For example, determining the second color adjustment factor according to the ratio of the electrical signals and a third lookup table; the third lookup table includes a mapping relationship between the color adjustment coefficient and a preset ratio, and the mapping relationship conforms to the second power function.
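As a concrete illustration of multiplying the two coefficients, the sketch below combines a first and a second color adjustment coefficient in fixed point and renormalizes the product. The power-of-two quantization, the helper name, and the rounding are assumptions for illustration; they are not the output formula of this application, which is given only as an image further below.

    /* A minimal sketch of combining the first and second color adjustment
     * coefficients as fixed-point values. Assumes the second coefficient is
     * quantized by 1 << q1_bits with q1_bits >= 1; names are illustrative. */
    #include <stdint.h>

    /* alphy0: first coefficient (fixed point)
     * alphy1: second coefficient, quantized by 1 << q1_bits
     * The returned product keeps the quantization of the first coefficient. */
    static inline int32_t combine_adjustment_coeffs(int32_t alphy0, int32_t alphy1, int q1_bits)
    {
        int64_t prod = (int64_t)alphy0 * (int64_t)alphy1;             /* combined gain */
        return (int32_t)((prod + (1LL << (q1_bits - 1))) >> q1_bits); /* round, drop the q1 factor */
    }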
In one possible design, the plurality of color components includes an R component, a G component, and a B component in an RGB space, and the target image is obtained according to the following formula:
[Formula reproduced in the original publication only as an image (PCTCN2020088476-APPB-000001): computation of R', G', and B' from R, G, B, Y and the adjustment coefficients]
where R, G, and B represent the color values of the R, G, and B components of the pixel, respectively; R', G', and B' represent the color values of the R, G, and B components of the corresponding pixel in the target image; Y represents the luminance value of the pixel; AlphyR0, AlphyG0, and AlphyB0 represent the first color adjustment coefficients corresponding to the R, G, and B components of the pixel, respectively; Alphy1 represents the second color adjustment coefficient; A1 is the quantization coefficient of Alphy1, A2 is the quantization coefficient of AlphyR0, A3 is the quantization coefficient of AlphyG0, and A4 is the quantization coefficient of AlphyB0.
In one possible design, the plurality of color components includes a U component, a V component in YUV space, and the target image is obtained according to the following formula:
[Formula reproduced in the original publication only as an image (PCTCN2020088476-APPB-000002): computation of U' and V' from U, V and the adjustment coefficients]
where U and V represent the color values of the U and V components of the pixel, respectively; U' and V' represent the color values of the U and V components of the corresponding pixel in the target image, respectively; AlphyU0 represents the first color adjustment coefficient corresponding to the U component of the pixel; AlphyV0 represents the first color adjustment coefficient corresponding to the V component of the pixel; Alphy1 represents the second color adjustment coefficient; A1 is the quantization coefficient of Alphy1, A2 is the quantization coefficient of AlphyU0, and A3 is the quantization coefficient of AlphyV0.
In a second aspect, an image color processing apparatus is provided, where the apparatus may be a terminal device, or an apparatus (e.g., a chip, or a system of chips, or a circuit) in the terminal device, or an apparatus capable of being used with the terminal device. In one design, the apparatus may include a module corresponding to one or more of the methods/operations/steps/actions described in the first aspect, where the module may be implemented by hardware circuit, software, or a combination of hardware circuit and software. In one design, the apparatus may include a determination module and a processing module. By way of example:
a determining module for determining color values of a plurality of color components of pixels of an image to be processed; determining ratios of luminance values of the pixels to color values of the plurality of color components, respectively; determining a first color adjustment coefficient of the pixel according to the ratio and a first lookup table; and the processing module is used for carrying out color processing on the pixels according to the first color adjustment coefficient so as to obtain a target image.
In one possible design, the first lookup table includes a mapping relationship between color adjustment coefficients and preset ratios.
In a possible design, when determining the first color adjustment coefficient of the pixel according to the ratio and the first lookup table, the determining module is specifically configured to: when the preset ratio comprises the ratio: determining a first color adjustment coefficient corresponding to the ratio according to the mapping relation; when the preset ratio does not include the ratio: determining a first preset ratio and a second preset ratio in the first lookup table; respectively determining a first coefficient and a second coefficient corresponding to the first preset ratio and the second preset ratio according to the mapping relation; and carrying out interpolation operation on the first coefficient and the second coefficient to obtain a first color adjustment coefficient corresponding to the ratio.
In one possible design, the interpolation operation includes any one of the following types of operations: linear interpolation, nearest-neighbor interpolation, bilinear (quadratic) interpolation, cubic interpolation, or Lanczos interpolation.
In one possible design, the mapping relationship between a first entry value of the first lookup table and a first lookup table value satisfies a first power function. The first entry value may correspond to the color adjustment coefficient, and the first lookup table value may correspond to the preset ratio. For example, the first power function may be expressed as f(x) = x^b.
Optionally, the exponent b of the first power function is a function coefficient. The coefficient b of the first power function may be determined by means of a look-up table using image or image sequence statistics, which may include the maximum, minimum, mean, standard deviation, and histogram distribution information of the image or image sequence.
In one possible design, the first look-up table value is a fixed-point value;
when determining the first entry value of the first lookup table, the determining module is specifically configured to: performing inverse quantization on the fixed point numerical value according to the maximum value of the value range determined by the bit width of the color value to obtain a floating point numerical value; determining a function value of the floating-point number based on the first power function; and quantizing the function value according to a preset quantization coefficient to obtain a first table item numerical value of the first lookup table.
In one possible design, the first lookup table value is determined based on the index values of the first lookup table and the step size between the index values of the first lookup table.
In one possible design, the determining module is further configured to: determining the first lookup table corresponding to the first value range in which the ratio is located; and the value range determined by the bit width of the color value comprises the first value range and a second value range corresponding to a second lookup table.
In one possible design, a mapping relationship between a second entry value of the second lookup table and a second lookup table value satisfies the first power function.
In one possible design, the minimum value of the first value range is greater than the maximum value of the second value range;
correspondingly, the first lookup table value of the first lookup table is determined based on the index value of the first lookup table, the step length between the index values of the first lookup table, and the maximum value of the second value range.
In one possible design, the step size between the index values of the first lookup table is different from the step size between the index values of the second lookup table.
In a possible design, when performing color processing on the pixel according to the first color adjustment coefficient, the processing module is specifically configured to: determining a second color adjustment coefficient for the pixel; multiplying the first color adjustment coefficient by the second color adjustment coefficient; and carrying out color processing on the pixel according to the multiplied product.
In one possible design, the image to be processed is an image subjected to dynamic range adjustment processing;
when determining the second color adjustment coefficient of the pixel, the determining module is specifically configured to: determining a ratio of electrical signals of the pixel after the dynamic range adjustment process and before the dynamic range adjustment process; and determining the second color adjustment coefficient according to the ratio of the electric signals.
In one possible design, the second color adjustment coefficient corresponding to the ratio of the electrical signals is determined by a look-up table.
In one possible design, the plurality of color components includes an R component, a G component, and a B component in an RGB space, and the target image is obtained according to the following formula:
[Formula reproduced in the original publication only as an image (PCTCN2020088476-APPB-000003): computation of R', G', and B' from R, G, B, Y and the adjustment coefficients]
where R, G, and B represent the color values of the R, G, and B components of the pixel, respectively; R', G', and B' represent the color values of the R, G, and B components of the corresponding pixel in the target image; Y represents the luminance value of the pixel; AlphyR0, AlphyG0, and AlphyB0 represent the first color adjustment coefficients corresponding to the R, G, and B components of the pixel, respectively; Alphy1 represents the second color adjustment coefficient; A1 is the quantization coefficient of Alphy1, A2 is the quantization coefficient of AlphyR0, A3 is the quantization coefficient of AlphyG0, and A4 is the quantization coefficient of AlphyB0.
In one possible design, the plurality of color components includes a U component, a V component in YUV space, and the target image is obtained according to the following formula:
[Formula reproduced in the original publication only as an image (PCTCN2020088476-APPB-000004): computation of U' and V' from U, V and the adjustment coefficients]
where U and V represent the color values of the U and V components of the pixel, respectively; U' and V' represent the color values of the U and V components of the corresponding pixel in the target image, respectively; AlphyU0 represents the first color adjustment coefficient corresponding to the U component of the pixel; AlphyV0 represents the first color adjustment coefficient corresponding to the V component of the pixel; Alphy1 represents the second color adjustment coefficient; A1 is the quantization coefficient of Alphy1, A2 is the quantization coefficient of AlphyU0, and A3 is the quantization coefficient of AlphyV0.
For the second aspect and the beneficial effects of each possible design, reference may be made to the corresponding effects of the first aspect, which are not described herein again.
In a third aspect, embodiments of the present application provide an image color processing apparatus, the apparatus comprising a processor configured to invoke a set of programs, instructions or data to perform the method described in the first aspect or any of the possible designs of the first aspect. The apparatus may also include a memory for storing programs, instructions or data called by the processor. The memory is coupled to the processor, and the processor, when executing instructions or data stored in the memory, may implement the method of the first aspect or any possible design description above.
In a fourth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor and may further include a memory, and is configured to implement the method described in the first aspect or any one of the possible designs of the first aspect. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
In a fifth aspect, this application further provides a computer-readable storage medium having stored thereon computer-readable instructions that, when executed on a computer, cause a method as described in the first aspect or any one of the possible designs of the first aspect to be performed.
In a sixth aspect, this embodiment also provides a computer program product including instructions that, when executed on a computer, cause the computer to perform the method described in the first aspect or any possible design of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of a terminal device in an embodiment of the present application;
FIG. 2 is a schematic diagram of a terminal device for processing image colors in an embodiment of the present application;
FIG. 3 is a schematic flowchart illustrating an image color processing method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating a color processing method for an RGB format image according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image color processing apparatus according to an embodiment of the present application;
FIG. 6 is a second schematic structural diagram of an image color processing apparatus according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image color processing method and device, aiming at realizing the adjustment of image colors and improving the image quality. The method and the device are based on the same or similar technical conception, and because the principle of solving the problems of the method and the device is similar, the implementation of the device and the method can be mutually referred, and repeated parts are not repeated.
It should be noted that, in the description of the embodiments of the present application, "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one" in this application means one or more; "plural" means two or more. In addition, it is to be understood that the terms "first", "second", "third", and the like in the description of the present application are used for distinguishing between descriptions and are not to be construed as indicating or implying relative importance or order. Reference throughout this specification to "one embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", or the like in various places throughout this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising", "including", "having", and variations thereof mean "including but not limited to", unless expressly specified otherwise.
The image color processing method and device of the present application can be applied to an electronic device. The electronic device may be a mobile device such as a mobile terminal (mobile terminal), a mobile station (MS), or user equipment (UE), a fixed device such as a fixed phone or a desktop computer, or a video monitor. The electronic device has an image color processing function. The electronic device may also optionally have wireless connectivity to provide voice and/or data connectivity to a user, as a handheld device or another processing device connected to a wireless modem. For example, the electronic device may be a mobile phone (or "cellular" phone), a computer with a mobile terminal, or a portable, pocket-sized, handheld, computer-embedded, or vehicle-mounted mobile device, or it may be a wearable device (such as a smart watch or a smart bracelet), a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a point of sale (POS) terminal, or the like. In the embodiments of the present application, a terminal device is taken as an example for description, but this does not constitute a limitation.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram illustrating an alternative hardware structure of a terminal device 100 according to an embodiment of the present application.
As shown in fig. 1, the terminal device 100 mainly includes a chip set, wherein the chip set may be used for processing image colors, for example, the chip set includes an Image Signal Processor (ISP) and the ISP processes image colors. Optionally, the chipset in the terminal device 100 further includes other modules, and the terminal device 100 may further include a peripheral device. The details are as follows. The Power Management Unit (PMU), the voice data codec (codec), the short-range module and the Radio Frequency (RF), the arithmetic processor, the random-access memory (RAM), the input/output (I/O), the display interface, the Sensor interface (Sensor hub), the baseband communication module, and other components in the solid-line frame in fig. 1 constitute a chip or a chip set. USB interface, memory, display screen, battery/mains, headset/speaker, antenna, Sensor etc. may be understood as peripheral devices. The components of the chipset, such as the arithmetic processor, the RAM, the I/O, the display interface, the ISP, the Sensor hub, and the baseband, may constitute a system-on-a-chip (SOC), which is a main part of the chipset. All components in the SOC can be integrated into a complete chip, or parts of the components in the SOC can be integrated, and another part of the components is not integrated, for example, a baseband communication module in the SOC can be not integrated with other parts and becomes an independent part. The components in the SOC may be interconnected by a bus or other connection line. PMUs, voice codecs, RF, etc. external to the SOC typically include analog circuit portions and are therefore often not integrated with each other outside of the SOC.
In fig. 1, the PMU is externally connected to a commercial power or a battery to supply power to the SOC, and the commercial power may be used to charge the battery. The voice codec is used as a voice coding and decoding unit and is externally connected with an earphone or a loudspeaker, so that conversion between a natural analog voice signal and a digital voice signal which can be processed by the SOC is realized. The short-range module may include wireless fidelity (WiFi) and bluetooth, and may optionally include an infrared, Near Field Communication (NFC), radio (FM), or Global Positioning System (GPS) module. The RF is connected to a baseband communication module in the SOC for converting, i.e., mixing, the air interface RF signal and the baseband signal. For a handset, reception is down conversion and transmission is up conversion. Both the short-range module and the RF may have one or more antennas for signal transmission or reception. The baseband is used for baseband communication, including one or more of multiple communication modes, and is configured to perform processing of a wireless communication protocol, which may include processing of each protocol layer, such as a physical layer (layer 1), a Medium Access Control (MAC) (layer 2), a Radio Resource Control (RRC) (layer 3), and may support various cellular communication systems, such as Long Term Evolution (LTE) communication or 5G new radio, NR (new radio, NR) communication. The Sensor hub is an interface of the SOC and an external Sensor, such as an accelerometer, a gyroscope, a control Sensor, an image Sensor, etc., for collecting and processing data of at least one external Sensor. The arithmetic processor may be a general-purpose processor, such as a Central Processing Unit (CPU), or may be one or more integrated circuits, such as: one or more Application Specific Integrated Circuits (ASICs), or one or more Digital Signal Processors (DSPs), or microprocessors, or one or more Field Programmable Gate Arrays (FPGAs), etc. The arithmetic processor may include one or more cores, and may selectively schedule other units. The RAM may store intermediate data during some calculations or processing, such as intermediate calculation data for the CPU and baseband. The ISP is used for processing the data collected by the image sensor. The I/O is used for the SOC to interact with various external interfaces, such as a Universal Serial Bus (USB) interface for data transmission. The memory may be one or a group of chips. The display screen may be a touch screen, and is connected to the bus through a display interface, and the display interface may perform data processing before image display, such as aliasing of a plurality of layers to be displayed, buffering of display data, or control and adjustment of screen brightness.
It is understood that the image signal processor referred to in the embodiments of the present application may be one chip or a group of chips, and may be integrated or independent. For example, the image signal processor included in the terminal device 100 may be an integrated ISP chip integrated in an arithmetic processor.
Fig. 2 is a schematic diagram of a terminal device for processing image colors, the terminal device may perform image processing on an input image to be processed, the image processing process may include image color processing, and may also include other processing processes such as dynamic range adjustment processing, and the terminal device outputs a processed target image. With reference to the terminal device shown in fig. 1, an ISP in the terminal device may process the image color to obtain a processed target image.
In order to better understand the scheme of the embodiments of the present application, the conceptual terms referred to in the embodiments of the present application are explained first.
1) Image format:
In the embodiments of the present application, the format of the image may be a red-green-blue (RGB) format, a luminance-chrominance (YUV) format, or a Bayer (bayer) format.
2) Color value:
In the embodiments of the present application, an image may include one or more pixels, and each pixel includes image color components in one or more dimensions. The color in the embodiments of the present application may include hue and saturation. A component may also be referred to as a channel, a signal, a color component, and so on; the color components may also be referred to as channels of a color space, color components of a color space, and so on. A color value is the numerical representation of an image color component of an image pixel. For example, for a YUV-format image, the color values of the color components may include the U component and the V component of the YUV space. For an RGB-format image, the color values may include the R component, the G component, and the B component of the RGB space. The color value of a color component may also be understood as the corresponding color component value, color channel value, and so on.
3) Brightness value:
A numerical value representing the luminance component of an image pixel. For the YUV space, it corresponds to the Y component; for the RGB space, it can be derived from the R, G, and B components.
4) Lookup table (LUT):
The lookup table may be any form of lookup table understood by those skilled in the art. Optionally, a one-dimensional (1D) lookup table is used in the embodiments of the present application. Optionally, the lookup table includes a series of input data and output data in a one-to-one correspondence. The output data of the lookup table may be embodied as entry values, and the input data may be represented as lookup table values. The lookup table values may not be stored explicitly; they may instead be represented in the form of entry indices or entry subscripts. That is, the lookup table includes one or more entry values, each entry value corresponds to one lookup table value, and the entry value corresponding to a lookup table value can be obtained by inputting that lookup table value.
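For readers more comfortable with code, the following minimal C sketch shows one way such a 1D lookup table can be represented, with the entry values as stored output data and the lookup table value derived from the index and the step size. The structure and names are illustrative and are not taken from this application.

    /* Illustrative 1D lookup table: entries[] holds the entry values (output
     * data); the lookup table value (input data) for index i is i * step. */
    #include <stdint.h>

    typedef struct {
        const uint32_t *entries;   /* entry values (output data)                  */
        int             num;       /* number of entries                           */
        int             step;      /* step size between consecutive lookup values */
    } Lut1D;

    /* Return the entry value for a lookup table value that is an exact
     * multiple of the step size; out-of-range inputs are clamped. */
    static inline uint32_t lut1d_get(const Lut1D *lut, int lookup_value)
    {
        int index = lookup_value / lut->step;      /* index value */
        if (index < 0) index = 0;
        if (index > lut->num - 1) index = lut->num - 1;
        return lut->entries[index];
    }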
It is understood that the lookup table may be embodied in a table form, or other forms capable of representing the corresponding relationship between the input data and the output data.
In the embodiment of the present application, for differentiation, the first lookup table, the second lookup table, or the third lookup table is used to represent a plurality of lookup tables, and the concept of each lookup table may refer to the description of point 4).
Based on the above description, as shown in fig. 3, the image color processing method provided by the embodiment of the present application is as follows. The method can be executed by the terminal device shown in fig. 1, and can also be executed by other devices with image color processing functions.
S301, determining color values of a plurality of color components of pixels of the image to be processed.
S302, determining the ratio of the brightness value of the pixel of the image to be processed to the color values of the plurality of color components.
The method for determining the luminance value of the pixel is not limited in the embodiments of the present application. For the YUV space, the luminance value may be the value of the Y component. For the RGB space, the luminance value of the Y component can be calculated from the color values of the R, G, and B components, for example according to the formula Y = a11*R + a12*G + a13*B, where a11, a12, and a13 are fixed coefficients. As will be understood by those skilled in the art, the values of a11, a12, and a13 can be selected in various ways, which is not limited in the embodiments of the present application. For example, Y = 0.2126*R + 0.7152*G + 0.0722*B, or Y = 0.2627*R + 0.6780*G + 0.0593*B.
For the YUV space, which includes 2 color components, the ratios of the luminance value Y of a pixel to the color values of the 2 color components can be represented as Y/U and Y/V, respectively. For the RGB space, which includes 3 color components, the ratios of the luminance value Y of a pixel to the color values of the 3 color components can be represented as Y/R, Y/G, and Y/B, respectively.
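As a sketch of how the luminance value and the ratios might be computed for an RGB pixel (using one of the example coefficient sets above), consider the following. The scaling of the ratio into a fixed-point range and the handling of zero-valued components are assumptions for illustration, not requirements of this application.

    /* Sketch: compute Y = 0.2126*R + 0.7152*G + 0.0722*B, then form the
     * ratios Y/R, Y/G, Y/B. The ratio is scaled by ratio_scale (an assumption)
     * so it can serve as a fixed-point lookup table input; zero components
     * are clamped to 1 to avoid division by zero. */
    #include <stdint.h>

    static void luma_and_ratios(uint16_t r, uint16_t g, uint16_t b,
                                uint32_t ratio_scale, uint32_t max_ratio,
                                uint32_t *y_out, uint32_t ratio[3])
    {
        double y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
        uint32_t yi = (uint32_t)(y + 0.5);
        *y_out = yi;

        uint16_t rgb[3] = { r, g, b };
        for (int i = 0; i < 3; i++) {
            uint32_t c = rgb[i] ? rgb[i] : 1u;                 /* avoid division by zero */
            uint64_t scaled = (uint64_t)yi * ratio_scale / c;  /* scaled ratio Y/C        */
            ratio[i] = scaled > max_ratio ? max_ratio : (uint32_t)scaled;
        }
    }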
And S303, determining a first color adjustment coefficient of the pixel of the image to be processed according to the ratio obtained in the S302 and the first lookup table.
Optionally, the first lookup table includes or indicates a mapping relationship between the color adjustment coefficient and a preset ratio. The preset ratio may be used as input data, and the color adjustment coefficient may be used as output data.
S304, according to the first color adjustment coefficient, performing color processing on pixels of the image to be processed to obtain a target image.
It will be appreciated that the image to be processed may include a plurality of pixels, and each pixel may be processed according to the flow shown in fig. 3 to obtain the target image.
In the embodiment of fig. 3, the first color adjustment coefficient is determined according to the ratio of the luminance value of the pixel of the image to be processed to the color value of the color component, so as to perform color processing on the image to be processed, thereby reducing the color deviation phenomenon of the image subjected to color processing and improving the quality of the image subjected to color processing. The steps in the embodiment of fig. 3 may be implemented by a hardware circuit of the terminal device, for example, the processing of the adjustment data may be implemented by determining the first color adjustment coefficient through the first lookup table, so that the execution process of the image color processing flow may be implemented in the hardware circuit, and the practical application possibility of the image color processing method may be improved.
Some alternative implementations of the embodiment of fig. 3 are described further below.
The input data of the first lookup table may be a floating-point value, or an integer or fixed-point value. In the following, the input data of the first lookup table is taken to be a fixed-point value as an example.
First, a possible implementation of the generation process of the lookup table is described.
The value range of the input data of the lookup table can be determined according to the bit width of the color value, and it can be smaller than or equal to the value range determined by the bit width of the color value. When the color value is an integer value, the color value generally has N bits, where N is a positive integer; for example, the color value is 8 bits, 10 bits, 12 bits, 14 bits, or 16 bits. The color value then ranges from 0 to 2^N - 1 (or from 1 to 2^N). For example, when an RGB image color value takes 10 bits, the color value ranges from 0 to 2^10 - 1.
The lookup table value of the lookup table is used as input data, and the output data is the table item value of the lookup table. In the embodiment of the present application, the mapping relationship between the table entry value of the lookup table and the table lookup value satisfies the first power function. That is, the table lookup value is input into the first power function for operation to obtain the corresponding table entry value.
When the lookup table value is a fixed-point value, the lookup table may be generated in the following manner.
First, according to the maximum value of the value range determined by the bit width of the color value, the lookup table value (i.e., the fixed-point value) is inverse-quantized to obtain a floating-point value. For example, if the maximum value of the value range determined by the bit width of the color value is 2^N - 1 and the lookup table value is M, the floating-point value M1 is obtained as M1 = M / (2^N - 1). Next, the function value of the floating-point value is determined based on the first power function; for example, M1 is substituted into the first power function to obtain M2, which is a floating-point number. Finally, the function value is quantized according to a preset quantization coefficient to obtain the entry value of the lookup table; the data obtained after quantizing M2 is a fixed-point value.
The lookup table values of the lookup table are traversed, and the entry value corresponding to each lookup table value is obtained according to the above method, thereby generating the lookup table.
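A minimal C sketch of this table generation (inverse quantization, power function, quantization) is given below. The exponent b and the quantization coefficient are parameters whose concrete values would be chosen as described above; the round-to-nearest step is an assumption for illustration.

    /* Sketch: generate the entry values of a lookup table from the first power
     * function f(x) = x^b. Each lookup table value M = i * step is inverse-
     * quantized by the maximum color value to M1, mapped to M2 = M1^b, and
     * quantized by quant_coeff to a fixed-point entry value. */
    #include <math.h>
    #include <stdint.h>

    void generate_lut(uint32_t *entries, int num_entries, int step,
                      uint32_t max_color_value,   /* e.g. (1 << N) - 1               */
                      double b,                    /* exponent of the power function  */
                      double quant_coeff)          /* preset quantization coefficient */
    {
        for (int i = 0; i < num_entries; i++) {
            uint32_t m = (uint32_t)(i * step);                 /* lookup table value M    */
            double m1 = (double)m / (double)max_color_value;   /* inverse quantization    */
            double m2 = pow(m1, b);                            /* first power function    */
            entries[i] = (uint32_t)(m2 * quant_coeff + 0.5);   /* quantize to entry value */
        }
    }

For example, calling generate_lut with max_color_value = 4095, step = 4, and num_entries = 1024 corresponds to the single-table 12-bit example below.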
The lookup table value of the lookup table is the input data of the lookup table and is used to obtain the corresponding entry value. The index value of the lookup table is the sequence number of an entry value, generally natural numbers arranged in increasing or decreasing order. The lookup table value may be determined based on the index value and the step size between the index values.
In one possible embodiment, the value range of the first lookup table value of the first lookup table described in the embodiment of fig. 3 is determined by the bit width of the color value. For example, if the color value is N bits, the color value ranges from 0 to 2^N - 1, with a maximum value of 2^N - 1. The first lookup table value can then be set to a value in 0 to 2^N - 1. The sequence number of a first entry value in the first lookup table may be referred to as an index value of the first lookup table; for example, if the first lookup table includes L entry values, where L is a positive integer, the sequence numbers of the entry values are 0 to L-1 (or 1 to L), and the index values of the first lookup table are 0 to L-1 (or 1 to L). The step size between every two index values in the lookup table may be 1 or an integer greater than 1. The first lookup table value is determined based on the index value of the first lookup table and the step size between the index values. When the number of first lookup table values is 2^N, the first lookup table values correspond to the index values one to one, and the step size is 1. The step size may also be greater than 1, in which case the first lookup table value is the index value × the step size.
For example, if the color value is 12 bits, the color value ranges from 0 to 4095 and the maximum value is 4095. The index value can be 0 to 4095 (or 1 to 4096). Assuming instead that the index value is 0 to 1023 and the step size is 4, the lookup table values M are (0, 4, 8, 12, 16, 20, …, 4092) in turn.
Each fixed-point value in (0, 4, 8, 12, 16, 20, …, 4092) is divided by 4095 to obtain the corresponding floating-point value M1.
Possible implementations of the generation process of the lookup table described above are applicable to the first lookup table.
In another possible embodiment, the value range determined by the bit width of the color value includes a first value range and at least one second value range. That is, the value range determined by the bit width of the color value may include a plurality of subsets. Each subset is a value range, and the union of the subsets is the value range determined by the bit width of the color value, or the union may be smaller than that value range. Generally, two subsets are taken as an example, that is, the value range determined by the bit width of the color value includes a first value range and a second value range. The value range of the lookup table values of the first lookup table in the embodiment of fig. 3 is the first value range. For example, the color value is N bits, the color value ranges from 0 to 2^N - 1, and the maximum value is 2^N - 1. The second value range is (0 to N1), and the first value range is (N1+1 to 2^N - 1); the minimum value of the first value range is greater than the maximum value of the second value range. The value range of the lookup table values of the first lookup table is (N1+1 to 2^N - 1), so the first lookup table value may be set to a value in (N1+1 to 2^N - 1). The sequence number of a first entry value in the first lookup table may be referred to as an index value of the first lookup table; for example, if the first lookup table includes L entry values, where L is a positive integer, the sequence numbers of the entry values are 0 to L-1 (or 1 to L), and the index values of the first lookup table are 0 to L-1 (or 1 to L). The step size between every two index values in the first lookup table may be 1 or an integer greater than 1. The first lookup table value is determined based on the index value of the first lookup table and the step size between the index values. The first lookup table values may correspond to the index values one to one, i.e., the step size is 1. The step size may also be greater than 1, in which case the first lookup table value of the first lookup table is determined based on the index value of the first lookup table, the step size between the index values of the first lookup table, and the maximum value of the second value range; for example, first lookup table value = index value × step size + N1, where N1 is the maximum value of the second value range. If the value range determined by the bit width of the color value includes a first value range and a plurality of second value ranges, the value range of the lookup table values of the first lookup table in the embodiment of fig. 3 is the first value range, and first lookup table value = index value × step size + N1, where N1 is the maximum value of all the value ranges preceding the first value range.
Similar to the first lookup table, a second lookup table may be generated according to the second value range, and the value range of the lookup table values of the second lookup table corresponds to the second value range. The lookup table values of the second lookup table range from 0 to N1, so the second lookup table value can be set to a value in (0 to N1). The sequence number of a second entry value in the second lookup table may be referred to as an index value of the second lookup table; for example, if the second lookup table includes L1 entry values, where L1 is a positive integer, the sequence numbers of the entry values are 0 to L1-1 (or 1 to L1), and the index values of the second lookup table are 0 to L1-1 (or 1 to L1). The step size between every two index values in the second lookup table may be 1 or an integer greater than 1. The second lookup table value is determined based on the index value of the second lookup table and the step size between the index values. The second lookup table values may correspond to the index values one to one, i.e., the step size is 1. The step size may also be greater than 1, in which case the second lookup table value of the second lookup table is determined based on the index values of the second lookup table and the step size between the index values; for example, second lookup table value = index value × step size.
Optionally, the step size between the index values of the first lookup table and the step size between the index values of the second lookup table may be the same or different.
For example, the color value is 12 bits, the color value ranges from 0 to 4095, and the maximum value is 4095. The second value range is (0 to 255), and the first value range is (256 to 4095). The number of entries in the second lookup table is 64, the step size between the index values of the second lookup table is 256/64 = 4, and the lookup table values of the second lookup table are (0, 4, 8, 12, 16, 20, …, 252). The number of entries in the first lookup table is 128, the step size between the index values of the first lookup table is (4095 - 255)/128 = 30, and the lookup table values of the first lookup table are (256, 286, 316, …, 4066).
The index values of the second lookup table can be 0 to 63 (or 1 to 64). The index values of the first lookup table can be 0 to 127 (or 1 to 128).
Each fixed-point value in the first lookup table (256, 286, 316, …, 4066) is divided by 4095 to obtain the corresponding floating-point value M1. Each fixed-point value in the second lookup table (0, 4, 8, 12, 16, 20, …, 252) is divided by 4095 to obtain the corresponding floating-point value.
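The two-table 12-bit example above can be reproduced with the sketch below. It differs from generate_lut above only in that the lookup values of the first lookup table start at the minimum of the first value range (256); the exponent and quantization coefficient remain illustrative parameters.

    /* Sketch: build the two tables of the 12-bit example. The second lookup
     * table covers 0..255 with step 4 (64 entries, lookup values 0,4,...,252);
     * the first lookup table covers 256..4095 with step 30 (128 entries,
     * lookup values 256,286,...,4066). */
    #include <math.h>
    #include <stdint.h>

    static void fill_table(uint32_t *entries, int num, int step, uint32_t range_min,
                           uint32_t max_color_value, double b, double quant_coeff)
    {
        for (int i = 0; i < num; i++) {
            uint32_t lookup_value = range_min + (uint32_t)(i * step);
            double m1 = (double)lookup_value / (double)max_color_value; /* inverse quantization          */
            entries[i] = (uint32_t)(pow(m1, b) * quant_coeff + 0.5);    /* power function + quantization */
        }
    }

    void build_example_tables(uint32_t lut2[64], uint32_t lut1[128], double b, double quant_coeff)
    {
        fill_table(lut2, 64, 4, 0, 4095, b, quant_coeff);      /* second lookup table, range 0..255   */
        fill_table(lut1, 128, 30, 256, 4095, b, quant_coeff);  /* first lookup table, range 256..4095 */
    }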
The above possible implementations of the generation process of the lookup table may be applied to the first lookup table as well as to the second lookup table.
When the value range determined by the bit width of the color value comprises a first value range and at least one second value range, the first lookup table corresponds to the first value range. In this case, before S303, a first lookup table corresponding to the first value range in which the ratio is located may also be determined. For example, a threshold may be set, and according to a comparison result between the ratio and the threshold, a value range in which the ratio is located is determined, and a lookup table corresponding to the value range is further determined. And when the ratio is smaller than the threshold value, determining a second value range in which the ratio is positioned, determining a second lookup table corresponding to the second value range, and determining the color adjustment coefficient corresponding to the ratio according to the ratio and the second lookup table. When the ratio is larger than or equal to the threshold value, determining a first value range in which the ratio is positioned, determining a first lookup table corresponding to the first value range, and determining a first color adjustment coefficient corresponding to the ratio according to the ratio and the first lookup table. Based on the above example, the color value is 12 bits, the color value ranges from 0 to 4095, and the maximum value is 4095. The second value range is (0-255), and the first value range is (256-4095). The threshold may be set to 256.
Of course, the color adjustment coefficient may also be determined according to the following comparison manner. And when the ratio is smaller than or equal to the threshold, determining a second value range in which the ratio is positioned, determining a second lookup table corresponding to the second value range, and determining the color adjustment coefficient corresponding to the ratio according to the ratio and the second lookup table. When the ratio is larger than the threshold value, a first value range where the ratio is located is determined, a first lookup table corresponding to the first value range is determined, and a first color adjustment coefficient corresponding to the ratio is determined according to the ratio and the first lookup table. Based on the above example, the color value is 12 bits, the color value ranges from 0 to 4095, and the maximum value is 4095. The second value range is (0-255), and the first value range is (256-4095). The threshold may be set at 255.
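The comparison described above can be sketched as follows, using the first comparison convention (ratio below the threshold selects the second lookup table). The enum names and helper are illustrative; the returned offset is the value subtracted from the ratio before indexing the selected table, anticipating the re-basing step in the second interpolation example further below.

    /* Sketch of threshold-based table selection: ratios below thres use the
     * second lookup table, ratios at or above thres use the first lookup table. */
    #include <stdint.h>

    typedef enum { USE_SECOND_LUT = 0, USE_FIRST_LUT = 1 } LutChoice;

    static LutChoice select_lut(uint32_t ratio, uint32_t thres, uint32_t *offset)
    {
        if (ratio < thres) {        /* second value range, e.g. 0..255   */
            *offset = 0;
            return USE_SECOND_LUT;
        }
        *offset = thres;            /* first value range, e.g. 256..4095 */
        return USE_FIRST_LUT;
    }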
A possible implementation of determining the first color adjustment coefficient of the pixel according to the ratio and the first lookup table in S303 is described below.
The first lookup table includes a mapping relationship between the color adjustment coefficient and a preset ratio, and it can be understood that the preset ratio is a lookup table value of the first lookup table, and the color adjustment coefficient is table entry data corresponding to the lookup table value in the first lookup table.
In S303, a first color adjustment coefficient of the pixel is determined according to the ratio and the first lookup table. If the preset ratio of the first lookup table may include the ratio, the first color adjustment coefficient corresponding to the ratio may be determined according to a mapping relationship between the color adjustment coefficient and the preset ratio.
However, the preset ratio of the first lookup table may not include the ratio, and optionally, the embodiment of the application may determine the first color adjustment coefficient corresponding to the ratio by interpolation.
The interpolation method is a method in numerical analysis for estimating unknown data from known discrete data. In the embodiments of the present application, the interpolation method used to determine the first color adjustment coefficient corresponding to the ratio may be interpolation, extrapolation, linear interpolation, nonlinear interpolation, nearest-neighbor interpolation, bilinear (quadratic) interpolation, cubic interpolation, or Lanczos interpolation, and the specific interpolation method may be selected according to the actual situation.
Optionally, a first preset ratio and a second preset ratio may be determined in the first lookup table, a first coefficient and a second coefficient corresponding to the first preset ratio and the second preset ratio are respectively determined according to a mapping relationship between the color adjustment coefficient and the preset ratio, and interpolation operation is performed on the first coefficient and the second coefficient to obtain a first color adjustment coefficient corresponding to the ratio.
The first preset ratio and the second preset ratio may be two preset ratios adjacent to the ratio. For example, if interpolation is used, the ratio lies between the first preset ratio and the second preset ratio, and among the preset ratios the ratio is adjacent to both of them. For another example, if extrapolation is used, the first preset ratio and the second preset ratio are both smaller than the ratio, and the ratio, the first preset ratio, and the second preset ratio are adjacent among the preset ratios; or the first preset ratio and the second preset ratio are both larger than the ratio, and the ratio, the first preset ratio, and the second preset ratio are adjacent among the preset ratios.
The following is an example of a linear interpolation method.
The step size of the first lookup table (LUT1) is 2^step, the number of entries of the first lookup table is NUM, and max is the input value of the lookup-table interpolation.
First, calculate the interpolation index: i_int = max >> step;
second, calculate the interpolation weight: i_dec = max & ((1 << step) - 1);
third, calculate the final quantized interpolation result: C1 = (LUT1[i_int] * ((1 << step) - i_dec) + LUT1[iClip(i_int + 1, 0, NUM - 1)] * i_dec + (1 << (step - 1))) >> step.
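The three steps above can be gathered into one fixed-point routine. The sketch below is only an illustration of that calculation under the reconstruction given above; the iClip helper is assumed to clip its argument to the given range, and the clipping of the first index is a defensive addition.

    #include <stdint.h>

    /* Clip x to the closed interval [lo, hi], as assumed for iClip(). */
    static int32_t iClip(int32_t x, int32_t lo, int32_t hi)
    {
        return (x < lo) ? lo : (x > hi) ? hi : x;
    }

    /* Fixed-point linear interpolation of the first lookup table LUT1.
     * The table step size is 2^step, NUM is the number of entries and
     * max is the input value of the lookup-table interpolation. */
    static int32_t lut_interp_fixed(const int32_t *LUT1, int32_t NUM,
                                    int32_t step, int32_t max)
    {
        int32_t i_int = max >> step;                          /* interpolation index  */
        int32_t i_dec = max & ((1 << step) - 1);              /* interpolation weight */
        int32_t c1 = (LUT1[iClip(i_int, 0, NUM - 1)] * ((1 << step) - i_dec)
                   +  LUT1[iClip(i_int + 1, 0, NUM - 1)] * i_dec
                   +  (1 << (step - 1))) >> step;             /* rounded result       */
        return c1;
    }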
The following describes how to perform linear interpolation on the first look-up table (LUT) to obtain the color adjustment coefficient corresponding to the ratio.
1. The ratio (Ratio) is divided by the step size (step) and the result is rounded down to obtain the index value A: A = Ratio / step;
2. the table-entry value (data1) of the first lookup table corresponding to the index value A, i.e. the entry at the first preset ratio: data1 = LUT[A];
3. the table-entry value (data2) of the first lookup table corresponding to the index value A + 1, i.e. the entry at the second preset ratio: data2 = LUT[A + 1];
4. the remainder (dec) of the ratio (Ratio) modulo the step size (step): dec = Ratio % step;
5. the final interpolated value data3 is data3 = (data1 * (step - dec) + data2 * dec) / step. data3 is the color adjustment coefficient corresponding to the ratio.
The following example describes how to perform linear interpolation on a first lookup table (LUT) to obtain the color adjustment coefficient corresponding to the ratio when the value range determined by the bit width of the color value includes a first value range and at least one second value range (a code sketch covering this calculation follows the list).
1. Assume the threshold is thres. The ratio (Ratio) minus thres gives Ratio1: Ratio1 = Ratio - thres.
2. The integer part of Ratio1 divided by the step size step is the index value A: A = Ratio1 / step.
3. The table-entry value data1 corresponding to the index value A is the entry at the first preset ratio: data1 = LUT[A].
4. The table-entry value data2 corresponding to the index value A + 1 is the entry at the second preset ratio: data2 = LUT[A + 1].
5. The remainder dec of Ratio1 modulo the step size: dec = Ratio1 % step.
6. The final interpolated value data3: data3 = (data1 * (step - dec) + data2 * dec) / step. data3 is the color adjustment coefficient corresponding to the ratio.
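A sketch of this division-based calculation is given below. It covers both the procedure of the previous example (by passing thres = 0) and the offset procedure of this example; the function name, the integer types, and the defensive bound check on A are assumptions for illustration.

    #include <stdint.h>

    /* Linear interpolation of a lookup table whose lookup-table values start
     * at the threshold thres and are spaced step apart.  Passing thres = 0
     * gives the plain five-step procedure; the caller is assumed to pass a
     * ratio that is not smaller than thres. */
    static int32_t lut_interp_div(const int32_t *LUT, int32_t num_entries,
                                  int32_t step, int32_t thres, int32_t ratio)
    {
        int32_t r = ratio - thres;                   /* Ratio1 = Ratio - thres        */
        int32_t A = r / step;                        /* index value                   */
        if (A >= num_entries - 1)                    /* keep A + 1 in range           */
            A = num_entries - 2;                     /* (defensive, assumed)          */
        int32_t data1 = LUT[A];                      /* entry at the first preset ratio  */
        int32_t data2 = LUT[A + 1];                  /* entry at the second preset ratio */
        int32_t dec = r % step;                      /* remainder of Ratio1 modulo step  */
        return (data1 * (step - dec) + data2 * dec) / step;   /* data3 */
    }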
In the embodiment of the present application, the process of generating the first lookup table is a software process, and may be implemented by software. The software flow may be packaged as firmware/software (firmware). The steps of the embodiment in fig. 3 are a hardware flow, and may be implemented by a hardware circuit.
Based on this, for example, when the terminal device performs color processing on a plurality of continuous frames of images to be processed, the hardware circuit of the terminal device may process each frame of image to be processed according to each step of the embodiment in fig. 3, so as to obtain a target image corresponding to each frame of image to be processed. In the interval between every two consecutive frames, the firmware/software of the terminal device may generate a first look-up table. Of course, the terminal device performs software and hardware separation on the color processing process of the image, and may not be limited to the implementation manner illustrated in this paragraph.
The first power function is explained below.
The first power function may be expressed as f(x) = x^b, where the exponent b of the first power function is a function coefficient. The coefficient b of the first power function may be determined by means of a lookup table using statistics of the image or image sequence, which may include the maximum, minimum, mean, standard deviation, and histogram distribution information of the image or image sequence.
For example, as a specific embodiment, a person skilled in the art may establish a correspondence between the exponent of the first power function and the average luminance value of the image to be processed based on experimental data or experience. Here, the average luminance value of the image to be processed may refer to an average value of luminance of the image to be processed or a sequence of images to be processed. As an example, the correspondence may be as shown in table 1 or table 2.
The average luminance values in Table 1 and Table 2 range over [0, 1].
TABLE 1
Average brightness value            0.1    0.25   0.3    0.55   0.6
Exponent of first power function    1.2    1.0    0.8    0.6    0.2
TABLE 2
Average brightness value            0.1    0.3    0.5
Exponent of first power function    0.0    0.1    0.2
Taking Table 1 as an example, the average luminance value of the image to be processed may be the average value of the Y component of the image to be processed, or the average value of another component of the image to be processed. When the average luminance value is less than 0.1, the exponent of the first power function may take 1.2; when the average luminance value is greater than 0.6, the exponent of the first power function may take 0.2. When the average luminance value lies between two table values, the exponent value of the first power function may be obtained by interpolation. The interpolation method is not limited in the embodiments of the present application; for example, linear interpolation or quadratic interpolation may be employed. For example, when the average luminance value is between 0.55 and 0.6, the exponent value of the first power function may be obtained by linear interpolation as follows:
output=0.6+(0.2-0.6)*(input-0.55)/(0.6-0.55)。
the output represents an exponent value of a first power function, and the input represents an average brightness value of the image to be processed or the image sequence to be processed.
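As an illustration of this lookup, the sketch below interpolates the exponent of the first power function from the entries of Table 1; the function name is an assumption, and the clamping at the table boundaries follows the behaviour described above.

    /* Exponent of the first power function from the average luminance value,
     * using the entries of Table 1 with linear interpolation in between.
     * Values below the first entry or above the last entry are clamped to
     * the boundary exponents (1.2 and 0.2), as described in the text. */
    static double exponent_from_table1(double avg_luma)
    {
        static const double luma[] = {0.1, 0.25, 0.3, 0.55, 0.6};
        static const double expo[] = {1.2, 1.0, 0.8, 0.6, 0.2};
        const int n = 5;

        if (avg_luma <= luma[0])     return expo[0];
        if (avg_luma >= luma[n - 1]) return expo[n - 1];
        for (int i = 0; i < n - 1; ++i) {
            if (avg_luma <= luma[i + 1]) {
                /* output = e_i + (e_{i+1} - e_i) * (input - l_i) / (l_{i+1} - l_i) */
                return expo[i] + (expo[i + 1] - expo[i])
                       * (avg_luma - luma[i]) / (luma[i + 1] - luma[i]);
            }
        }
        return expo[n - 1];  /* not reached */
    }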
The power function in the embodiment of the present application may also be replaced with a linear function, for example f(x) = cx + d.
It can be understood that, in the embodiment of the present application, a pixel of the image to be processed has a plurality of color components, so one ratio can be obtained for the color value of each color component in S302, that is, a plurality of ratios are obtained. Then, for the plurality of ratios in S303, each ratio corresponds to a first lookup table. The first lookup tables corresponding to the plurality of ratios may be different, that is, the mapping relationships between the color adjustment coefficients and the preset ratios included in or indicated by these first lookup tables may be different. The mapping relationship between the table-entry value of a first lookup table and the lookup-table value satisfies a first power function, and the exponent values of the first power functions for the different ratios may be different, although they may also be the same.
An alternative implementation of S304 is described below.
In S304, color processing is performed on pixels of the image to be processed according to the first color adjustment coefficient to obtain a target image.
Optional implementation mode 1:
for the RGB space, the image to be processed may be color processed using the following equation (1):
[Formula (1) is presented as an image in the original publication.]
where Y denotes the luminance value of the image to be processed, R, G, and B denote the color value of the R component, the color value of the G component, and the color value of the B component of the image to be processed, respectively, R', G', and B' denote the color value of the R component, the color value of the G component, and the color value of the B component of the target image, respectively, a1 represents the first color adjustment coefficient corresponding to the R component, a2 represents the first color adjustment coefficient corresponding to the G component, and a3 represents the first color adjustment coefficient corresponding to the B component. a1, a2, or a3 may be a floating-point value or a fixed-point value.
For the YUV space, the color processing can be performed on the image to be processed by using the following formula (2):
[Formula (2) is presented as an image in the original publication.]
where U and V represent the color value of the U component and the color value of the V component of the image to be processed, respectively, U' and V' represent the color value of the U component and the color value of the V component of the target image, respectively, a4 represents the first color adjustment coefficient corresponding to the U component, and a5 represents the first color adjustment coefficient corresponding to the V component.
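Formulas (1) and (2) appear only as images in the original publication. The sketch below is therefore an assumption: it applies the first color adjustment coefficients to the difference between each color component and the luminance value, mirroring the structure of formula (5) and steps S406 to S409 described later, and is not the literal formula of the application. Only the RGB case of formula (1) is sketched.

    /* Sketch of optional implementation 1 in RGB space.  Formula (1) itself
     * is only available as an image in the source text; this code ASSUMES
     * the adjustment scales the color difference (component minus luminance),
     * which mirrors the structure of formula (5) described later. */
    typedef struct { double r, g, b; } RgbPixel;

    static RgbPixel apply_first_coeffs_rgb(RgbPixel in, double y,
                                           double a1, double a2, double a3)
    {
        RgbPixel out;
        out.r = a1 * (in.r - y) + y;   /* assumed form of formula (1) */
        out.g = a2 * (in.g - y) + y;
        out.b = a3 * (in.b - y) + y;
        return out;
    }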
In alternative implementation 1, the first color adjustment coefficient may be preset. For example, the first color adjustment coefficient may be obtained by calibration against experimental data: a mapping relationship between the first color adjustment coefficient and the color values of the color components of the pixel may be derived statistically from experimental data, and the first color adjustment coefficient may then be determined based on that mapping relationship.
Based on the analysis of the experimental data, the first color adjustment coefficient corresponding to the pixel of the image to be processed is determined so as to perform color processing on the image to be processed, and the quality of the image subjected to color processing can be improved.
Optional implementation 2:
and determining a second color adjustment coefficient of the pixel of the image to be processed, multiplying the first color adjustment coefficient and the second color adjustment coefficient, and performing color processing on the pixel of the image to be processed according to the multiplied product.
The second color adjustment coefficient may be a given color adjustment coefficient of the image to be processed, or may be determined in another manner. For example, the image to be processed may be an image subjected to dynamic range adjustment processing, where dynamic range adjustment refers to compression or stretching of the electrical signal values of the image (e.g., the Y component, R component, G component, or B component). Dynamic range adjustment processing of an image may cause a color deviation in the image. The second color adjustment coefficient may be determined based on the ratio of the electrical signals, that is, the ratio between the electrical signal value of each pixel after the dynamic range adjustment processing and its electrical signal value before the dynamic range adjustment processing. For example, converting an image between a High Dynamic Range (HDR) and a Standard Dynamic Range (SDR) involves dynamic range adjustment of the image. The electrical signal value may be the Y component in YUV space, or the R component, G component, or B component in RGB space.
For example, in the YUV color space, the dynamic range adjustment processing is performed on the electrical signal value of the image to be processed, as shown in the following formula (3):
Y2 = c * Y1        Formula (3)
where Y1 is the electrical signal value before the dynamic range adjustment processing, Y2 is the electrical signal value after the dynamic range adjustment processing, and c is the ratio of the electrical signals before and after the dynamic range adjustment processing.
For another example, in the RGB color space, the dynamic range adjustment processing is performed on the color values of the image to be processed, as shown in the following formula (4):
[Formula (4) is presented as an image in the original publication.]
where R1, G1, and B1 are the color values before the dynamic range adjustment processing, R2, G2, and B2 are the color values after the dynamic range adjustment processing, and f is the ratio of the electrical signals before and after the dynamic range adjustment processing; optionally, f may be the ratio of the maximum component in RGB.
Alternatively, the second color adjustment coefficient may be determined from the ratio of the electrical signals in various ways. For example, the ratio of the electrical signals may be used directly as the second color adjustment coefficient. For another example, the second color adjustment coefficient corresponding to the ratio of the electrical signals may be determined through a lookup table; specifically, the second color adjustment coefficient is determined according to the ratio of the electrical signals and a lookup table (denoted as the third lookup table). The third lookup table may be generated in the manner described above for the first lookup table. The third lookup table includes or indicates a mapping relationship between the color adjustment coefficient and the preset ratio, and this mapping relationship conforms to a second power function. The second color adjustment coefficient may also be determined directly from the ratio of the electrical signals and the second power function, i.e., as the value obtained by substituting the ratio of the electrical signals into the second power function. The process of determining the second color adjustment coefficient according to the ratio of the electrical signals and the third lookup table may be implemented by a hardware circuit, while the process of generating the third lookup table may be implemented by firmware/software.
The second power function may be expressed as f(x) = x^d, where the exponent d of the second power function is a function coefficient. The value of d may be a fixed value selected by a person skilled in the art based on experimental data and experience, or may be determined by way of a lookup table using statistical information of the image or image sequence; the statistics may include the maximum, minimum, mean, standard deviation, and histogram distribution information of the image or image sequence.
For example, as a specific embodiment, a person skilled in the art may establish a correspondence between the exponent of the second power function and the average luminance value of the image to be processed based on experimental data or experience. Here, the average luminance value of the image to be processed may refer to the average luminance of the image to be processed or of the image sequence to be processed. As an example, the correspondence may be as shown in Table 3 or Table 4. The average luminance values in Table 3 and Table 4 are expressed in a normalized manner and range over [0, 1], where 1 represents the maximum luminance value and 0 represents the minimum luminance value.
TABLE 3
Average brightness value               0.1    0.25   0.3    0.55   0.6
Exponent of the second power function  0.1    0.15   0.2    0.25   0.3
TABLE 4
Average brightness value               0.1    0.5
Exponent of the second power function  -0.1   -0.3
For convenience and simplicity of description, in the manner corresponding to Table 2, the method for looking up the exponent of the second power function may refer to the detailed description related to Table 1, and will not be described here again.
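Since generating a lookup table from a power function is the software/firmware part of the flow, a generic sketch of such generation is given below. It follows the description elsewhere in this application of inverse quantization by the maximum of the value range determined by the bit width, evaluation of the power function, and quantization by a preset quantization coefficient; the parameter names, the rounding, and the guard for a zero input are assumptions.

    #include <math.h>
    #include <stdint.h>

    /* Populate a lookup table whose entries follow f(x) = x^d.
     * Each lookup-table value (index * step) is inverse-quantized by the
     * maximum of the value range determined by the bit width of the color
     * value (e.g. 4095 for 12 bits), the power function is evaluated on the
     * resulting floating-point value, and the function value is quantized
     * by a preset quantization coefficient.  All parameters are examples. */
    static void build_power_lut(int32_t *lut, int32_t num_entries, int32_t step,
                                double d, double range_max, double quant_coeff)
    {
        for (int32_t i = 0; i < num_entries; ++i) {
            double x = (double)(i * step) / range_max;   /* inverse quantization */
            if (x <= 0.0)
                x = 1.0 / range_max;    /* guard against pow(0, d) for negative d (assumed) */
            double f = pow(x, d);                        /* power function value */
            lut[i] = (int32_t)(f * quant_coeff + 0.5);   /* quantized entry      */
        }
    }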
On the basis of the alternative implementation 2, the target image in S304 may be obtained in the following manner.
If the plurality of color components include R, G, and B components in RGB space, the target image is obtained according to the following formula (5):
[Formula (5) is presented as an image in the original publication.]
r, G, B, R ', G ', and B ' respectively represent the color value of the R component, the color value of the G component, and the color value of the B component of a corresponding pixel in a target image, Y represents the luminance value of the pixel, Alphy R0 represents the first color adjustment coefficient corresponding to the R component of the pixel, Alphy G0 represents the first color adjustment coefficient corresponding to the G component of the pixel, Alphy B0 represents the first color adjustment coefficient corresponding to the B component of the pixel, Alphy1 represents the second color adjustment coefficient, a1 is the quantization coefficient of Alphy1, a2 is the quantization coefficient of Alphy R0, A3 is the quantization coefficient of Alphy G0, and a4 is the quantization coefficient of Alphy B0.
If the plurality of color components include U component and V component in YUV space, the target image is obtained according to the following formula (6):
[Formula (6) is presented as an image in the original publication.]
u, V represents the color value of the U component and the color value of the V component of a pixel, U 'and V' represent the color value of the U component and the color value of the V component of a corresponding pixel in a target image, respectively, Alphy U0 represents the first color adjustment coefficient corresponding to the U component of a pixel, Alphy V0 represents the first color adjustment coefficient corresponding to the V component of a pixel, Alphy1 represents the second color adjustment coefficient, a1 is the quantization coefficient of Alphy1, a2 is the quantization coefficient of Alphy U0, and A3 is the quantization coefficient of Alphy V0.
Based on the above embodiments, to further understand the image color processing method provided in the embodiments of the present application, as shown in fig. 4, an alternative embodiment of a specific scene is described by taking an image in RGB format as an example. In the embodiment of fig. 4, a processing procedure of any pixel of the image to be processed is described, and each pixel of the plurality of pixels included in the image to be processed may operate with reference to the method shown in fig. 4, so as to finally obtain a target image corresponding to the image to be processed.
S401, color values R, G, B of 3 color components of a pixel of the image to be processed and an electrical signal ratio a of the image to be processed are obtained.
The electrical signal ratio a may be an electrical signal ratio between an electrical signal value of each pixel of the image to be processed after the dynamic range adjustment processing and an electrical signal value before the dynamic range adjustment processing.
S402, the luminance value Y of the pixel of the image to be processed is calculated from the color values R, G, B.
For example, Y may be determined according to the formula described above: Y = a11 * R + a12 * G + a13 * B.
S403, substituting the electrical signal ratio a into lookup table 1 to obtain the second color adjustment coefficient Alphy1.
The mapping relationship between the color adjustment coefficient and the preset ratio in lookup table 1 satisfies the second power function f(x) = x^d.
Optionally, if the value range determined by the bit width of the color value includes multiple value ranges, and each value range correspondingly generates a lookup table, before substituting the electrical signal ratio a into the lookup table 1, the value range where the electrical signal ratio a is located needs to be determined, and the lookup table corresponding to the value range where a is located is selected. This optional step is illustrated in fig. 4 by a dashed box.
Wherein the coefficient d can be determined by means of a look-up table using image or image sequence statistics. For example, the coefficient d may be determined according to table 2. Alternatively, d may be a fixed value chosen empirically, such as 0.2.
S404, respectively calculating the ratio of the luminance value Y to the color values R, G, B of the 3 color components: Y/R, Y/G, Y/B.
S405, substituting Y/R, Y/G, Y/B into a lookup table 2, a lookup table 3 and a lookup table 4 respectively to obtain first color adjustment coefficients AlphyR0, AlphyG0 and AlphyB0 corresponding to the 3 color components respectively.
The mapping relationship between the color adjustment coefficients and the preset ratios in lookup tables 2, 3, and 4 satisfies the first power function, i.e., f(x) = x^b. It is to be understood that the exponents of the first power functions satisfied by the mapping relationships in lookup tables 2, 3, and 4 may be different.
The exponent b can be determined by means of a lookup table using the statistical information of the image or the image sequence. For example, the exponent b may be determined according to Table 1.
The execution order of steps S403, S404, and S405 is not limited, and the order may be exchanged or performed simultaneously.
Optionally, if the value range determined by the bit width of the color value includes multiple value ranges, and each value range has a corresponding lookup table, then before substituting Y/R into lookup table 2, the value range in which Y/R is located needs to be determined and the lookup table corresponding to that value range selected. Similarly, before substituting Y/G and Y/B into lookup tables 3 and 4, the value ranges in which Y/G and Y/B are located need to be respectively determined, and the corresponding lookup tables selected. Of course, the numbers of lookup tables corresponding to Y/R, Y/G, and Y/B may be different; for example, Y/R may correspond to a plurality of lookup tables while Y/G and Y/B each correspond to one lookup table. Only when a component corresponds to a plurality of lookup tables is the comparison with the threshold required before substituting into the interpolation table. In addition, the thresholds used for the Y/R, Y/G, and Y/B comparisons may be the same or different. This optional step is not illustrated in fig. 4.
Preset quantization coefficients A1, A2, A3, and A4, together with Alphy1, AlphyR0, AlphyG0, and AlphyB0, are substituted into formula (5) to obtain the color values R', G', and B' of the 3 color channels of the pixel after color processing. Specifically, the following steps S406 to S409 may be used.
And S406, multiplying Alphy1 with AlphyR0, AlphyG0 and AlphyB0 respectively to obtain 3 products of BetaR, BetaG and BetaB.
S407, multiplying BetaR, BetaG, and BetaB by (R-Y), (G-Y), and (B-Y), respectively, to obtain (R-Y)', (G-Y)', and (B-Y)'.
S408, dividing or shifting (R-Y)' by A1 and A2, dividing or shifting (G-Y)' by A1 and A3, and dividing or shifting (B-Y)' by A1 and A4, to obtain (R-Y)'', (G-Y)'', and (B-Y)'', respectively.
Here, the shift operation means a shift by the exponent of the divisor. For example, if A = 2^R1 and Z = 2^R2, performing the shift operation on the value Z with A means shifting Z to the right by R1 bits, and the result of the operation is 2^(R2-R1). The shift operation and the division operation give the same result.
S409, adding Y to (R-Y)'', (G-Y)'', and (B-Y)'', respectively, to obtain R', G', and B'.
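Putting S401 to S409 together, the per-pixel flow of fig. 4 can be sketched in floating point as below. The luminance weights, the lut_eval abstraction, and the absence of guards for zero color values are assumptions for illustration; the real flow uses the quantized lookup tables 1 to 4 and the quantization coefficients A1 to A4 as described above.

    /* Floating-point sketch of the per-pixel flow of fig. 4 (S401-S409).
     * LutEval stands in for the quantized lookup-table interpolation of
     * tables 1-4; each function is assumed to return the (already
     * de-quantized) coefficient for a given input value. */
    typedef double (*LutEval)(double input);

    typedef struct { double r, g, b; } Rgb;

    static Rgb color_process_pixel(Rgb in, double a /* electrical signal ratio */,
                                   LutEval lut1, LutEval lut2,
                                   LutEval lut3, LutEval lut4)
    {
        /* S402: luminance from the color values; BT.709-style weights are an
         * example only, the application uses coefficients a11, a12, a13
         * defined earlier in the description. */
        double y = 0.2126 * in.r + 0.7152 * in.g + 0.0722 * in.b;

        double alphy1  = lut1(a);          /* S403: second color adjustment coefficient */
        double alphyR0 = lut2(y / in.r);   /* S404-S405: first coefficients from Y/R,   */
        double alphyG0 = lut3(y / in.g);   /*            Y/G and Y/B (zero components   */
        double alphyB0 = lut4(y / in.b);   /*            would need guarding)           */

        Rgb out;                           /* S406-S409: scale the color differences    */
        out.r = alphy1 * alphyR0 * (in.r - y) + y;
        out.g = alphy1 * alphyG0 * (in.g - y) + y;
        out.b = alphy1 * alphyB0 * (in.b - y) + y;
        return out;
    }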
It should be noted that the examples in the application scenarios of the present application only show some possible implementations and are intended to aid understanding and description of the method in the present application. A person skilled in the art can derive further evolved examples from the image color processing methods provided by the application.
In order to implement the functions in the method provided by the embodiment of the present application, the terminal device may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a hardware structure and a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
Based on the same technical concept, as shown in fig. 5, the embodiment of the present application further provides an image color processing apparatus 500, where the image color processing apparatus 500 may be a mobile terminal or any device having an image processing function. In one design, the image color processing apparatus 500 may include a module corresponding to one or more of the methods/operations/steps/actions in the foregoing method embodiments, where the module may be a hardware circuit, a software circuit, or a combination of a hardware circuit and a software circuit. In one design, the image color processing apparatus 500 may include a determination module 501 and a processing module 502. The hardware circuits are referred to as hardware (hardware) or c-pipe (cpipe).
The determining module 501 is used for determining color values of a plurality of color components of pixels of an image to be processed; determining ratios of luminance values of the pixels to color values of the plurality of color components, respectively; and determining a first color adjustment coefficient of the pixel according to the ratio and a first lookup table. The processing module 502 is configured to perform color processing on the pixel according to the first color adjustment coefficient to obtain a target image. At this time, the determining module 501 and the processing module 502 may be hardware circuits.
Optionally, when determining the first color adjustment coefficient of the pixel according to the ratio and the first lookup table, the determining module 501 is specifically configured to: when the preset ratio comprises the ratio: determining a first color adjustment coefficient corresponding to the ratio according to the mapping relation; when the preset ratio does not include the ratio: determining a first preset ratio and a second preset ratio in the first lookup table; respectively determining a first coefficient and a second coefficient corresponding to the first preset ratio and the second preset ratio according to the mapping relation; and carrying out interpolation operation on the first coefficient and the second coefficient to obtain a first color adjustment coefficient corresponding to the ratio. The determination module 501 may be a hardware circuit.
Optionally, the first lookup table value is a fixed-point value; when determining the first entry value of the first lookup table, the determining module 501 is specifically configured to: performing inverse quantization on the fixed point numerical value according to the maximum value of the value range determined by the bit width of the color value to obtain a floating point numerical value; determining a function value of the floating-point number based on the first power function; and quantizing the function value according to a preset quantization coefficient to obtain a first table item numerical value of the first lookup table. Here the determination module 501 may be software.
Optionally, the determining module 501 is further configured to: determining the first lookup table corresponding to the first value range in which the ratio is located; and the value range determined by the bit width of the color value comprises the first value range and a second value range corresponding to a second lookup table. The determination module 501 may be a hardware circuit.
The determining module 501 and the processing module 502 may also be configured to perform other corresponding steps or operations of the foregoing method embodiments, which are not described in detail herein.
The division of modules in the embodiments of the present application is schematic and is merely a logical function division; in actual implementation, there may be other division manners. In addition, functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Based on the same technical concept, as shown in fig. 6, the embodiment of the present application further provides an image color processing apparatus 600. The image color processing apparatus 600 includes a processor 601. The processor 601 is configured to invoke a set of programs to cause the above-described method embodiments to be performed. The image color processing apparatus 600 further comprises a memory 602, the memory 602 being used to store program instructions and/or data for execution by the processor 601. A memory 602 is coupled to the processor 601. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, and may be an electrical, mechanical or other form for information interaction between the devices, units or modules. The processor 601 may cooperate with the memory 602. Processor 601 may execute program instructions stored in memory 602. The memory 602 may be included in the processor 601.
The image color processing apparatus 600 may be a system on a chip. In the embodiment of the present application, the chip system may be formed by a chip, and may also include a chip and other discrete devices. For example, the system-on-chip is an Application Specific Integrated Circuit (ASIC) chip, and the hardware portion of the image color processing apparatus 600 is a c-model (cmode) for simulating an ASIC chip, where the cmode can be bit-aligned with the ASIC chip effect.
The processor 601 is configured to input the image to be processed into a first neural network for operation to obtain a first image, where the first image is a first component image of the image to be processed after processing by the first neural network; perform vector concatenation (concat) on the first image and the image to be processed to obtain a first matrix of the image to be processed; input the first matrix of the image to be processed into a second neural network for operation to obtain a second image, where the second image is a second component image of the image to be processed after processing by the second neural network; and obtain a processed image based on the second image.
The processor 601 may also be configured to perform other corresponding steps or operations of the above method embodiments, which are not described herein again.
The processor 601 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like that implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 602 may be a nonvolatile memory, such as a Hard Disk Drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, for example a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other apparatus capable of implementing a storage function, for storing program instructions and/or data.
Some or all of the various operations and functions described in the method embodiments described above may be performed by a chip or an integrated circuit.
The embodiment of the present application further provides a chip, which includes a processor, and is configured to support the image color processing apparatus to implement the functions related to the foregoing method embodiments. In one possible design, the chip is connected to or includes a memory for storing the necessary program instructions and data of the communication device.
The embodiment of the application provides a computer readable storage medium, which stores a computer program, wherein the computer program comprises instructions for executing the method embodiment.
Embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the above-described method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (33)

  1. An image color processing method, comprising:
    determining color values of a plurality of color components of pixels of an image to be processed;
    determining ratios of luminance values of the pixels to color values of the plurality of color components, respectively;
    determining a first color adjustment coefficient of the pixel according to the ratio and a first lookup table;
    and carrying out color processing on the pixels according to the first color adjustment coefficient to obtain a target image.
  2. The method of claim 1, wherein the first lookup table comprises a mapping relationship between color adjustment coefficients and preset ratio values.
  3. The method of claim 2, wherein determining the first color adjustment coefficient for the pixel based on the ratio and a first lookup table comprises:
    when the preset ratio comprises the ratio: determining a first color adjustment coefficient corresponding to the ratio according to the mapping relation;
    when the preset ratio does not include the ratio:
    determining a first preset ratio and a second preset ratio in the first lookup table;
    respectively determining a first coefficient and a second coefficient corresponding to the first preset ratio and the second preset ratio according to the mapping relation;
    and carrying out interpolation operation on the first coefficient and the second coefficient to obtain a first color adjustment coefficient corresponding to the ratio.
  4. The method of claim 3, wherein the interpolation operation comprises any one of the following types of operation: linear interpolation, nearest-neighbor interpolation, bilinear interpolation, cubic interpolation, or Lanczos interpolation.
  5. The method of any one of claims 1 to 4, wherein a mapping relationship between the first entry value of the first lookup table and the first lookup table value satisfies a first power function.
  6. The method of any one of claims 1 to 5, wherein the first look-up table value is a fixed-point value;
    the determining of the first entry value of the first lookup table comprises:
    performing inverse quantization on the fixed point numerical value according to the maximum value of the value range determined by the bit width of the color value to obtain a floating point numerical value;
    determining a function value of the floating-point value based on the first power function;
    and quantizing the function value according to a preset quantization coefficient to obtain a first table item numerical value of the first lookup table.
  7. The method of claim 5 or 6, wherein the first lookup table value is determined based on a step size between an index value of the first lookup table and an index value of the first lookup table.
  8. The method of any one of claims 1 to 7, further comprising: determining the first lookup table corresponding to the first value range in which the ratio is located;
    and the value range determined by the bit width of the color value comprises the first value range and a second value range corresponding to a second lookup table.
  9. The method of claim 8, wherein a mapping relationship between a second entry value of the second lookup table and a second lookup table value satisfies the first power function.
  10. The method of claim 8 or 9, wherein a minimum value of the first range of values is greater than a maximum value of the second range of values;
    correspondingly, the first lookup table value of the first lookup table is determined based on the index value of the first lookup table, the step length between the index values of the first lookup table, and the maximum value of the second value range.
  11. A method according to any of claims 8 to 10, wherein the step size between index values of the first look-up table is different to the step size between index values of the second look-up table.
  12. The method according to any one of claims 1 to 11, wherein the color processing the pixel according to the first color adjustment coefficient comprises:
    determining a second color adjustment coefficient for the pixel;
    multiplying the first color adjustment coefficient by the second color adjustment coefficient;
    and carrying out color processing on the pixel according to the multiplied product.
  13. The method according to claim 12, wherein the image to be processed is an image subjected to dynamic range adjustment processing;
    the determining a second color adjustment coefficient for the pixel comprises:
    determining an electrical signal ratio of the pixel after the dynamic range adjustment process and before the dynamic range adjustment process;
    and determining the second color adjustment coefficient according to the ratio of the electric signals.
  14. The method of claim 13, wherein the second color adjustment factor corresponding to the ratio of the electrical signals is determined by a look-up table.
  15. The method according to any one of claims 12 to 14, wherein the plurality of color components include an R component, a G component, and a B component in an RGB space, and the target image is obtained according to the following formula:
    [The formula is presented as an image in the original publication.]
    wherein R, G, B represents a color value of an R component, a color value of a G component, and a color value of a B component of the pixel, respectively, R ', G ', B ' represent a color value of an R component, a color value of a G component, and a color value of a B component of a corresponding pixel in the target image, respectively, Y represents a luminance value of the pixel, Alphy R0 represents a first color adjustment coefficient corresponding to an R component of the pixel, Alphy G0 represents a first color adjustment coefficient corresponding to a G component of the pixel, Alphy B0 represents a first color adjustment coefficient corresponding to a B component of the pixel, Alphy1 represents the second color adjustment coefficient, a1 is a quantization coefficient of the Alphy1, a2 is a quantization coefficient of the Alphy R0, A3 is a quantization coefficient of the Alphy G0, and a4 is a quantization coefficient of the Alphy B0.
  16. The method of any one of claims 12 to 14, wherein the plurality of color components comprises a U component, a V component in YUV space, and the target image is obtained according to the following formula:
    [The formula is presented as an image in the original publication.]
    u, V represents the color value of the U component and the color value of the V component of the pixel, U 'and V' represent the color value of the U component and the color value of the V component of the corresponding pixel in the target image, respectively, Alphy U0 represents the first color adjustment coefficient corresponding to the U component of the pixel, Alphy V0 represents the first color adjustment coefficient corresponding to the V component of the pixel, Alphy1 represents the second color adjustment coefficient, a1 is the quantization coefficient of Alphy1, a2 is the quantization coefficient of Alphy U0, and A3 is the quantization coefficient of Alphy V0.
  17. An image color processing apparatus, comprising:
    a determining module for determining color values of a plurality of color components of pixels of an image to be processed; determining ratios of luminance values of the pixels to color values of the plurality of color components, respectively; determining a first color adjustment coefficient of the pixel according to the ratio and a first lookup table;
    and the processing module is used for carrying out color processing on the pixels according to the first color adjustment coefficient so as to obtain a target image.
  18. The apparatus of claim 17, wherein the first lookup table comprises a mapping of color adjustment coefficients to preset ratios.
  19. The apparatus of claim 18, wherein in determining the first color adjustment coefficient for the pixel based on the ratio and a first lookup table, the determining module is specifically configured to:
    when the preset ratio comprises the ratio: determining a first color adjustment coefficient corresponding to the ratio according to the mapping relation;
    when the preset ratio does not include the ratio:
    determining a first preset ratio and a second preset ratio in the first lookup table;
    respectively determining a first coefficient and a second coefficient corresponding to the first preset ratio and the second preset ratio according to the mapping relation;
    and carrying out interpolation operation on the first coefficient and the second coefficient to obtain a first color adjustment coefficient corresponding to the ratio.
  20. The apparatus of claim 19, wherein the interpolation operation comprises any one of the following types of operation: linear interpolation, nearest-neighbor interpolation, bilinear interpolation, cubic interpolation, or Lanczos interpolation.
  21. The apparatus of any one of claims 17 to 20, wherein a mapping relationship between the first entry value of the first lookup table and the first lookup table value satisfies a first power function.
  22. The apparatus of any one of claims 17 to 21, wherein the first look-up table value is a fixed-point value;
    the determining module is specifically configured to:
    performing inverse quantization on the fixed point numerical value according to the maximum value of the value range determined by the bit width of the color value to obtain a floating point numerical value;
    determining a function value of the floating-point number based on the first power function;
    and quantizing the function value according to a preset quantization coefficient to obtain a first table item numerical value of the first lookup table.
  23. The apparatus of claim 21 or 22, wherein the first lookup table value is determined based on a step size between an index value of the first lookup table and an index value of the first lookup table.
  24. The apparatus of any of claims 17-23, wherein the determination module is further configured to: determining the first lookup table corresponding to the first value range in which the ratio is located;
    and the value range determined by the bit width of the color value comprises the first value range and a second value range corresponding to a second lookup table.
  25. The apparatus of claim 24, wherein a mapping relationship between the second entry value of the second lookup table and the second lookup table value satisfies the first power function.
  26. The apparatus of claim 24 or 25, wherein a minimum value of the first range of values is greater than a maximum value of the second range of values;
    correspondingly, the first lookup table value of the first lookup table is determined based on the index value of the first lookup table, the step length between the index values of the first lookup table, and the maximum value of the second value range.
  27. The apparatus of any one of claims 24 to 26, wherein a step size between index values of the first lookup table is different from a step size between index values of the second lookup table.
  28. The apparatus according to any one of claims 17 to 27, wherein when performing color processing on the pixel according to the first color adjustment coefficient, the processing module is specifically configured to:
    determining a second color adjustment coefficient for the pixel;
    multiplying the first color adjustment coefficient by the second color adjustment coefficient;
    and carrying out color processing on the pixel according to the multiplied product.
  29. The apparatus according to claim 28, wherein the image to be processed is an image subjected to dynamic range adjustment processing;
    when determining the second color adjustment coefficient of the pixel, the determining module is specifically configured to:
    determining an electrical signal ratio of the pixel after the dynamic range adjustment process and before the dynamic range adjustment process;
    and determining the second color adjustment coefficient according to the ratio of the electric signals.
  30. The apparatus of claim 29, wherein the second color adjustment factor corresponding to the ratio of the electrical signals is determined by a look-up table.
  31. The apparatus of any one of claims 28 to 30, wherein the plurality of color components include an R component, a G component, and a B component in an RGB space, and the target image is obtained according to the following formula:
    [The formula is presented as an image in the original publication.]
    r, G, B represents the color value of the R component, the color value of the G component, and the color value of the B component of the pixel, respectively, R ', G ', B ' represents the color value of the R component, the color value of the G component, and the color value of the B component of the corresponding pixel in the target image, Y represents the luminance value of the pixel, Alphy R0 represents the first color adjustment coefficient corresponding to the R component of the pixel, Alphy G0 represents the first color adjustment coefficient corresponding to the G component of the pixel, Alphy B0 represents the first color adjustment coefficient corresponding to the B component of the pixel, Alphy1 represents the second color adjustment coefficient, a1 is the quantization coefficient of Alphy1, a2 is the quantization coefficient of Alphy R0, A3 is the quantization coefficient of Alphy G0, and a4 is the quantization coefficient of Alphy B0.
  32. The apparatus of any one of claims 28 to 30, wherein the plurality of color components comprises a U component, a V component in YUV space, the target image being obtained according to the following formula:
    [The formula is presented as an image in the original publication.]
    u, V represents the color value of the U component and the color value of the V component of the pixel, U 'and V' represent the color value of the U component and the color value of the V component of the corresponding pixel in the target image, respectively, Alphy U0 represents the first color adjustment coefficient corresponding to the U component of the pixel, Alphy V0 represents the first color adjustment coefficient corresponding to the V component of the pixel, Alphy1 represents the second color adjustment coefficient, a1 is the quantization coefficient of Alphy1, a2 is the quantization coefficient of Alphy U0, and A3 is the quantization coefficient of Alphy V0.
  33. A computer-readable storage medium having stored therein computer-readable instructions which, when run on a neural network-based image processing apparatus, cause the apparatus to perform the method of any one of claims 1-16.
CN202080096708.5A 2020-04-30 2020-04-30 Image color processing method and device Pending CN115088253A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/088476 WO2021217647A1 (en) 2020-04-30 2020-04-30 Image color processing method and apparatus

Publications (1)

Publication Number Publication Date
CN115088253A true CN115088253A (en) 2022-09-20

Family

ID=78373136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080096708.5A Pending CN115088253A (en) 2020-04-30 2020-04-30 Image color processing method and device

Country Status (2)

Country Link
CN (1) CN115088253A (en)
WO (1) WO2021217647A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9207910B2 (en) * 2009-01-30 2015-12-08 Intel Corporation Digital signal processor having instruction set with an xK function using reduced look-up table
CN102110078A (en) * 2009-12-23 2011-06-29 富士通株式会社 Method and system for acquiring approximate operation result of power function X<p>
CN111667418B (en) * 2016-08-22 2024-06-28 华为技术有限公司 Method and apparatus for image processing
CN108090879B (en) * 2017-12-12 2020-11-10 上海顺久电子科技有限公司 Method for processing input high dynamic range image and display equipment
CN110473502A (en) * 2018-05-09 2019-11-19 华为技术有限公司 Control method, device and the terminal device of screen intensity

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118118797A (en) * 2024-03-29 2024-05-31 摩尔线程智能科技(北京)有限责任公司 Image processing method and device, apparatus, chip, storage medium, and program product

Also Published As

Publication number Publication date
WO2021217647A1 (en) 2021-11-04


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination