WO2021226769A1 - An image processing method and device - Google Patents

An image processing method and device (一种图像处理方法及装置)

Info

Publication number
WO2021226769A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
image
processed
primary color
range
Prior art date
Application number
PCT/CN2020/089496
Other languages
English (en)
French (fr)
Inventor
李蒙
陈海
王海军
张秀峰
郑成林
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2020/089496 (WO2021226769A1)
Priority to CN202080099931.5A (CN115428007A)
Publication of WO2021226769A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20208 High dynamic range [HDR] image processing

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method and device.
  • the optical digital imaging process converts the light radiation of the real scene into electrical signals through the image sensor, and saves them in the form of digital images.
  • the purpose of image display is to reproduce the real scene described by a digital image through the display device. In this way, the user obtains the same visual perception as he directly observes the real scene.
  • the dynamic range is the brightness ratio between the brightest object and the darkest object in the scene, that is, the number of grayscale divisions between the "brightest” and “darkest” objects in the image.
  • the larger the dynamic range, the richer the levels that can be represented and the wider the color space covered.
  • when the dynamic range of the image does not match the dynamic range supported by the display device, the dynamic range needs to be adjusted; how to adjust it is a problem that needs to be solved.
  • the present application provides an image processing method and device for realizing dynamic range adjustment of an image and improving image quality.
  • an image processing method is provided.
  • the execution subject of the method may be a terminal device.
  • the method specifically includes the following steps: determining the maximum value among the primary color values of the multiple components of a pixel of the image to be processed; determining, according to a first look-up table, the ratio that has a mapping relationship with the maximum value, where the first look-up table includes the mapping relationship between preset ratios and preset primary color values; and performing dynamic range adjustment on the primary color values of the multiple components of the pixel according to the determined ratio, to obtain a target image. The mapping relationship can be determined through the following steps: obtaining the conversion value of a preset primary color value according to a first conversion function, and using the ratio of the conversion value to the preset primary color value as the preset ratio.
  • the preset conversion curve or the first conversion function is used to realize the image conversion, so that images with different dynamic ranges are better compatible with display devices with different display capabilities. For example, compatible display of images on SDR display devices and on HDR display devices with different display capabilities can be achieved, which effectively ensures a consistent image display effect, helps keep the contrast constant, avoids loss of detail, and improves or maintains the display effect of the image.
  • each step can be implemented by the hardware circuit of the terminal device. For example, determining the ratio that has a mapping relationship with the maximum value through the first look-up table allows integer data to be processed, so that the execution of the image processing flow can be implemented in the hardware circuit, which improves the practical applicability of the image color processing method.
  • the determination of the ratio that has a mapping relationship with the maximum value according to the first look-up table may include the following cases: when the preset primary color values include the maximum value, the first ratio corresponding to the maximum value is determined according to the mapping relationship; when the preset primary color values do not include the maximum value, a first preset primary color value and a second preset primary color value are determined in the first look-up table, the first ratio and the second ratio respectively corresponding to the first and second preset primary color values are determined according to the mapping relationship, and an interpolation operation is performed on the first ratio and the second ratio to obtain the ratio corresponding to the maximum value.
  • determining the ratio through interpolation can reduce the number of entry values in the first look-up table, so that the space occupied by the first look-up table and the complexity of the hardware circuit are reduced.
  • the interpolation operation includes any of the following types of operations: linear interpolation, nearest-neighbor interpolation, bilinear interpolation, cubic interpolation, or Lanczos interpolation.
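  • For illustration only, the following Python sketch shows one way the ratio could be looked up when the maximum value is not itself one of the preset primary color values; the uniform-step table layout and the use of linear interpolation are assumptions for the sketch, not the claimed implementation.

      def lookup_ratio(lut_entries, step, max_value):
          """Return the ratio that has a mapping relationship with max_value.

          lut_entries[i] is assumed to hold the preset ratio for the preset
          primary color value i * step (a uniform-step layout)."""
          idx = max_value // step
          if idx >= len(lut_entries) - 1:            # at or past the last preset value
              return lut_entries[-1]
          lo_val = idx * step
          if max_value == lo_val:                    # maximum value is a preset value
              return lut_entries[idx]
          # otherwise interpolate linearly between the two neighbouring ratios
          w = (max_value - lo_val) / step
          return (1 - w) * lut_entries[idx] + w * lut_entries[idx + 1]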
  • the value of the first entry of the first look-up table is: the ratio of the conversion value of the first look-up table value, obtained through the first conversion function, to the first look-up table value.
  • the first look-up table value is a fixed-point value; the first entry value can be determined in the following way: determine the dynamic parameters of the first conversion function; based on the value range determined by the bit width of the primary color value, dequantize the fixed-point value to obtain a floating-point value; convert the floating-point value into a conversion value based on the first conversion function whose dynamic parameters have been determined; and quantize the ratio of the conversion value to the floating-point value according to a preset quantization coefficient to obtain the first entry value.
  • determining the value of the first entry of the first look-up table in the above manner makes both the look-up value and the entry value of the first look-up table fixed-point values, which makes a hardware-circuit implementation feasible.
  • the above determination can be implemented by software, so that the separation of software and hardware improves the practical usability of the image processing flow; since this part of the method is implemented in software, the software process can be updated at any time according to the image processing effect, giving high adaptability and a good effect.
  • the first look-up table value is determined based on the index value of the first look-up table and the step size between the index values of the first look-up table.
  • the step size can be an integer equal to or greater than one.
  • the method further includes: determining the first look-up table corresponding to the first value range in which the maximum value is located; the value range determined by the bit width of the primary color value includes the first value range and a second value range corresponding to a second look-up table.
  • the second entry value of the second look-up table is: the ratio of the conversion value of the second look-up table value, obtained through the first conversion function, to the second look-up table value.
  • the method for generating the second lookup table is similar to that of the first lookup table, and can be cross-referenced.
  • the minimum value of the first value range is greater than the maximum value of the second value range; correspondingly, the first look-up table value is determined based on the index value of the first look-up table, the step size between the index values of the first look-up table, and the maximum value of the second value range.
  • the step size between the index values of the first look-up table and the step size between the index values of the second look-up table are different.
  • when the primary color values of the multiple components of the pixel are respectively adjusted for dynamic range according to the first ratio, the following applies: when the dynamic range of the image to be processed is greater than the dynamic range of the target image, the primary color values of the multiple components of the pixel are adjusted according to the first ratio to reduce the dynamic range; or, when the dynamic range of the image to be processed is smaller than the dynamic range of the target image, the primary color values of the multiple components of the pixel are adjusted according to the first ratio to expand the dynamic range.
  • the dynamic range adjustment of the primary color values of the multiple components of the pixel according to the first ratio includes the following step: respectively calculating the product of the first ratio and the primary color value of each of the multiple components of the pixel to obtain the adjusted primary color values of the multiple components of the pixel.
  • other dynamic compression processing methods can also be applied according to the first ratio, as long as the dynamic range of the multiple components of the pixels of the image to be processed can be reduced or expanded so that the target image is better compatible with the display device; the details are not limited here.
  • the image to be processed is located in the image sequence to be processed
  • the target image is located in the target image sequence
  • the determination of the dynamic parameters of the first conversion function includes determining the dynamic parameters according to at least one of the following: the statistical information of the image to be processed or of the image sequence to be processed; the first reference value of the range of the image to be processed or of the image sequence to be processed; the second reference value of the range of the image to be processed or of the image sequence to be processed; the first reference value of the range of the target image or of the target image sequence; the second reference value of the range of the target image or of the target image sequence.
  • the statistical information of the image to be processed or of the image sequence to be processed includes at least one of the following: the maximum, minimum, average, standard deviation, and histogram distribution information of the primary color values of at least one component of the pixels of the image to be processed or of the image sequence to be processed.
  • the first reference value of the range of the image to be processed or of the image sequence to be processed may include any one of the following: the maximum brightness of the display device used to display the image to be processed; or, a value obtained by searching a first preset list according to the statistical information of the image to be processed or of the image sequence to be processed; or, a first preset value.
  • the second reference value of the range of the image to be processed or of the image sequence to be processed may include any one of the following: the minimum brightness of the display device used to display the image to be processed; or, a value obtained by searching a second preset list according to the statistical information of the image to be processed or of the image sequence to be processed; or, a second preset value.
  • the first reference value of the range of the target image or of the target image sequence may include any one of the following: the maximum brightness of the display device used to display the target image; or, a third preset value.
  • the second reference value of the range of the target image or of the target image sequence may include any one of the following: the minimum brightness of the display device used to display the target image; or, a fourth preset value.
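  • As a non-authoritative illustration of how the inputs listed above might be gathered, the sketch below computes simple statistics of the image to be processed and falls back to preset reference values when no display brightness is available; the function name and the fallback numbers are hypothetical.

      import numpy as np

      def gather_curve_inputs(img, display_max_nits=None, display_min_nits=None):
          """Collect statistics and range reference values for the dynamic parameters.

          img holds primary color values of the image to be processed; the preset
          fallbacks (1000.0 and 0.0) are illustrative assumptions only."""
          stats = {
              "max": float(img.max()),
              "min": float(img.min()),
              "avg": float(img.mean()),
              "std": float(img.std()),
              "hist": np.histogram(img, bins=32)[0],
          }
          first_ref = display_max_nits if display_max_nits is not None else 1000.0
          second_ref = display_min_nits if display_min_nits is not None else 0.0
          return stats, first_ref, second_ref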
  • the first conversion function includes an S-shaped conversion curve or an inverse S-shaped conversion curve.
  • the S-shaped conversion curve is a curve whose slope first rises and then falls.
  • the S-shaped conversion curve includes one or more segments.
  • the S-shaped conversion curve conforms to the following formula:
  • the L is the maximum value
  • the L′ is the conversion value
  • the a, b, p, and m are dynamic parameters of the S-shaped conversion curve.
  • the p and the m are obtained by searching a first preset list according to the statistical information of the image to be processed or the image sequence in which the image to be processed is located;
  • the a and the b are calculated from the reference values, where L1 is the first reference value of the range of the image to be processed or of the image sequence in which it is located and L2 is the second reference value of that range; or, L1 is the first reference value of the range of the target image or of the target image sequence and L2 is the second reference value of that range.
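  • The formula of the S-shaped conversion curve is not reproduced in this text. Purely as an assumed example of a curve with dynamic parameters a, b, p, and m (a form commonly used for tone mapping, not necessarily the formula claimed here), the sketch below evaluates L' = a * (p*L / ((p-1)*L + 1))^m + b and chooses a and b so that the source reference values map onto the target reference values; L1_t and L2_t denote the target-side first and second reference values.

      def s_curve(L, a, b, p, m):
          """Assumed S-shaped conversion curve: L' = a * (p*L / ((p-1)*L + 1))**m + b."""
          g = (p * L) / ((p - 1.0) * L + 1.0)
          return a * g ** m + b

      def solve_a_b(p, m, L1, L2, L1_t, L2_t):
          """Pick a and b so the curve maps L1 -> L1_t and L2 -> L2_t.

          This boundary condition is an assumption; the application's own
          formula for a and b is not shown in this extract."""
          g1 = ((p * L1) / ((p - 1.0) * L1 + 1.0)) ** m
          g2 = ((p * L2) / ((p - 1.0) * L2 + 1.0)) ** m
          a = (L1_t - L2_t) / (g1 - g2)
          b = L1_t - a * g1
          return a, b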
  • the reverse S-shaped conversion curve is a curve whose slope first drops and then rises.
  • the reverse S-shaped conversion curve includes one or more segments.
  • the inverse S-shaped conversion curve conforms to the following formula:
  • the L is the maximum value among the primary color values of the multiple components of the pixels of the target image
  • the L′ is the conversion value of the maximum value among the primary color values of the multiple components of the pixels of the target image
  • the parameters a, b, p, and m are dynamic parameters of the inverse S-shaped conversion curve.
  • the p and m parameters are obtained by searching the second preset list; the a and b parameters are calculated by the following formula:
  • L1 is the first reference value of the range of the image to be processed or of the image sequence in which it is located, and L2 is the second reference value of that range; or, L1 is the first reference value of the range of the target image or of the target image sequence, and L2 is the second reference value of that range.
  • an image processing device may be a terminal device, a device in a terminal device (for example, a chip, or a chip system, or a circuit), or a device that can be matched with the terminal device.
  • the device may include modules that correspond one-to-one to the methods/operations/steps/actions described in the first aspect.
  • the modules may be hardware circuits, software, or hardware circuits combined with software.
  • the device may include a determination module and a processing module. Illustratively:
  • the determining module is used to determine the maximum value among the primary color values of the multiple components of a pixel of the image to be processed, and to determine, according to the first look-up table, the ratio that has a mapping relationship with the maximum value, where the first look-up table includes the mapping relationship between preset ratios and preset primary color values; the processing module is configured to perform dynamic range adjustment on the primary color values of the multiple components of the pixel according to the ratio that has a mapping relationship with the maximum value, to obtain the target image; the determining module is further configured to determine the mapping relationship through the following steps: obtaining the conversion value of a preset primary color value according to the first conversion function, and using the ratio of the conversion value to the preset primary color value as the preset ratio.
  • the determining module is specifically configured to: when the preset primary color values include the maximum value, determine the first ratio corresponding to the maximum value according to the mapping relationship; when the preset primary color values do not include the maximum value, determine a first preset primary color value and a second preset primary color value in the first look-up table, determine, according to the mapping relationship, a first ratio and a second ratio respectively corresponding to the first preset primary color value and the second preset primary color value, and perform an interpolation operation on the first ratio and the second ratio to obtain the ratio corresponding to the maximum value.
  • the interpolation operation includes any of the following types of operations: linear interpolation, nearest-neighbor interpolation, bilinear interpolation, cubic interpolation, or Lanczos interpolation.
  • the value of the first entry of the first look-up table is: the ratio of the conversion value of the first look-up table value, obtained through the first conversion function, to the first look-up table value.
  • the first look-up table value is a fixed-point value; the determining module is further configured to determine the first entry value in the following manner: determine the dynamic parameters of the first conversion function; based on the value range determined by the bit width of the primary color value, dequantize the fixed-point value to obtain a floating-point value; convert the floating-point value into a conversion value based on the first conversion function whose dynamic parameters have been determined; and quantize the ratio of the conversion value to the floating-point value according to a preset quantization coefficient to obtain the first entry value.
  • the first look-up table value is determined based on the index value of the first look-up table and the step size between the index values of the first look-up table.
  • the step size can be an integer equal to or greater than one.
  • the determining module is further configured to determine the first look-up table corresponding to the first value range in which the maximum value is located; the value range determined by the bit width of the primary color value includes the first value range and a second value range corresponding to the second look-up table.
  • the second entry value of the second look-up table is: the ratio of the conversion value of the second look-up table value, obtained through the first conversion function, to the second look-up table value.
  • the method for generating the second lookup table is similar to that of the first lookup table, and can be cross-referenced.
  • the minimum value of the first value range is greater than the maximum value of the second value range; correspondingly, the first look-up table value is determined based on the index value of the first look-up table, the step size between the index values of the first look-up table, and the maximum value of the second value range.
  • the step size between the index values of the first look-up table and the step size between the index values of the second look-up table are different.
  • when the primary color values of the multiple components of the pixel are respectively adjusted for dynamic range according to the first ratio, the processing module is specifically configured to: when the dynamic range of the image to be processed is greater than the dynamic range of the target image, adjust the primary color values of the multiple components of the pixel according to the first ratio to reduce the dynamic range; or, when the dynamic range of the image to be processed is smaller than the dynamic range of the target image, adjust the primary color values of the multiple components of the pixel according to the first ratio to expand the dynamic range.
  • when the primary color values of the multiple components of the pixel are respectively adjusted for dynamic range according to the first ratio, the processing module is specifically configured to: respectively calculate the product of the first ratio and the primary color value of each of the multiple components of the pixel to obtain the adjusted primary color values of the multiple components of the pixel.
  • the processing module may be further configured to perform other dynamic compression processing according to the first ratio, as long as the dynamic range of the multiple components of the pixels of the image to be processed can be reduced or expanded so that the target image is better compatible with the display device; this is not specifically limited here.
  • the image to be processed is located in the image sequence to be processed
  • the target image is located in the target image sequence
  • the determination of the dynamic parameters of the first conversion function includes determining the dynamic parameters according to at least one of the following: the statistical information of the image to be processed or of the image sequence to be processed; the first reference value of the range of the image to be processed or of the image sequence to be processed; the second reference value of the range of the image to be processed or of the image sequence to be processed; the first reference value of the range of the target image or of the target image sequence; the second reference value of the range of the target image or of the target image sequence.
  • the statistical information of the image to be processed or of the image sequence to be processed includes at least one of the following: the maximum, minimum, average, standard deviation, and histogram distribution information of the primary color values of at least one component of the pixels of the image to be processed or of the image sequence to be processed.
  • the first reference value of the range of the image to be processed or of the image sequence to be processed may include any one of the following: the maximum brightness of the display device used to display the image to be processed; or, a value obtained by searching a first preset list according to the statistical information of the image to be processed or of the image sequence to be processed; or, a first preset value.
  • the second reference value of the range of the image to be processed or of the image sequence to be processed may include any one of the following: the minimum brightness of the display device used to display the image to be processed; or, a value obtained by searching a second preset list according to the statistical information of the image to be processed or of the image sequence to be processed; or, a second preset value.
  • the first reference value of the range of the target image or of the target image sequence may include any one of the following: the maximum brightness of the display device used to display the target image; or, a third preset value.
  • the second reference value of the range of the target image or of the target image sequence may include any one of the following: the minimum brightness of the display device used to display the target image; or, a fourth preset value.
  • the first conversion function includes an S-shaped conversion curve or an inverse S-shaped conversion curve.
  • the S-shaped conversion curve is a curve whose slope first rises and then falls.
  • the S-shaped conversion curve includes one or more segments.
  • the S-shaped conversion curve conforms to the following formula:
  • the L is the maximum value
  • the L′ is the conversion value
  • the a, b, p, and m are dynamic parameters of the S-shaped conversion curve.
  • the p and the m are obtained by searching a first preset list according to the statistical information of the image to be processed or the image sequence in which the image to be processed is located;
  • the a and the b are calculated from the reference values, where L1 is the first reference value of the range of the image to be processed or of the image sequence in which it is located and L2 is the second reference value of that range; or, L1 is the first reference value of the range of the target image or of the target image sequence and L2 is the second reference value of that range.
  • the reverse S-shaped conversion curve is a curve whose slope first drops and then rises.
  • the reverse S-shaped conversion curve includes one or more segments.
  • the inverse S-shaped conversion curve conforms to the following formula:
  • the L is the maximum value among the primary color values of the multiple components of the pixels of the target image
  • the L′ is the conversion value of the maximum value among the primary color values of the multiple components of the pixels of the target image
  • the parameters a, b, p, and m are dynamic parameters of the inverse S-shaped conversion curve.
  • the p and m parameters are obtained by searching the second preset list; the a and b parameters are calculated by the following formula:
  • L1 is the first reference value of the range of the image to be processed or of the image sequence in which it is located, and L2 is the second reference value of that range; or, L1 is the first reference value of the range of the target image or of the target image sequence, and L2 is the second reference value of that range.
  • an embodiment of the present application provides an image processing device; the device includes a processor, and the processor is used to call a set of programs, instructions, or data to execute the method described in the first aspect or any possible design of the first aspect.
  • the device may also include a memory for storing programs, instructions or data called by the processor.
  • the memory is coupled with the processor, and when the processor executes the instructions or data stored in the memory, it can implement the method described in the first aspect or any possible design.
  • an embodiment of the present application provides a chip system, which includes a processor and may also include a memory, for implementing the method described in the first aspect or any one of the possible designs of the first aspect.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer-readable instructions; when the instructions run on a computer, the method described in the first aspect or any one of the possible designs of the first aspect is executed.
  • the embodiments of the present application also provide a computer program product containing instructions, which, when run on a computer, causes the computer to execute the method described in the first aspect or any possible design of the first aspect.
  • FIG. 1 is a schematic structural diagram of a terminal device in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of image processing by a terminal device in an embodiment of the application.
  • FIG. 3 is a schematic flowchart of an image processing method in an embodiment of the application.
  • FIG. 4 is a schematic diagram of an S-shaped conversion curve in an embodiment of the application.
  • FIG. 5 is a schematic diagram of an S-shaped conversion curve composed of two curve segments in an embodiment of the application.
  • FIG. 6 is a schematic diagram of an inverse S-shaped conversion curve in an embodiment of the application.
  • FIG. 7 is a schematic diagram of an inverse S-shaped conversion curve composed of two curve segments in an embodiment of the application.
  • FIG. 8 is a first schematic flowchart of the RGB-format image processing method in an embodiment of the application.
  • FIG. 9 is a second schematic flowchart of the RGB-format image processing method in an embodiment of the application.
  • FIG. 10 is a first schematic structural diagram of the image color processing device in an embodiment of the application.
  • FIG. 11 is a second schematic structural diagram of the image color processing device in an embodiment of the application.
  • the embodiments of the present application provide an image processing method and device, in order to realize the adjustment of the dynamic range of the image and improve the image quality.
  • the method and the device are based on the same or similar technical conception; since they solve the problem on similar principles, the implementations of the device and the method can refer to each other, and repeated descriptions are not given.
  • references in this specification to "one embodiment" or "some embodiments" and the like mean that one or more embodiments of the present application include a specific feature, structure, or characteristic described in connection with that embodiment. Therefore, phrases such as "in one embodiment", "in some embodiments", "in some other embodiments", and "in still other embodiments" appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise.
  • the terms "include", "comprise", "have" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
  • the multiplication in the formula of the embodiment of this application can be represented by "*" or " ⁇ ".
  • the image-based color processing method and device provided in the embodiments of the present application can be applied to electronic equipment.
  • the electronic device can be a mobile device, such as a mobile terminal, a mobile station (MS), or user equipment (UE), or a fixed device, such as a fixed telephone or a desktop computer, or a video monitor.
  • the electronic device has an image color processing function.
  • the electronic device can also optionally have a wireless connection function to provide users with a handheld device with voice and/or data connectivity, or other processing devices connected to a wireless modem.
  • the electronic device can be a mobile phone (also called a "cellular" phone) or a computer with a mobile terminal; it can also be a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device, or a wearable device (such as a smart watch or a smart bracelet), a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a point of sale (POS) terminal, and so on.
  • a terminal device may be used as an example for description.
  • FIG. 1 is a schematic diagram of an optional hardware structure of a terminal device 100 related to an embodiment of this application.
  • the terminal device 100 mainly includes a chip set, where the chip set can be used to process image colors.
  • the chip set includes an image signal processor (ISP), and the ISP processes image colors.
  • the chipset in the terminal device 100 further includes other modules, and the terminal device 100 may also include peripheral devices. The details are as follows.
  • the power management unit (PMU), voice data codec, short-distance module, radio frequency (RF) module, arithmetic processor, random-access memory (RAM), input/output (I/O), display interface, Sensor hub, baseband communication module, and other components make up a chip or chipset.
  • Components such as USB interface, memory, display screen, battery/mains power, earphone/speaker, antenna, sensor, etc. can be understood as peripheral devices.
  • the arithmetic processor, RAM, I/O, display interface, ISP, Sensor hub, baseband and other components in the chipset can form a system-on-a-chip (SOC), which is the main part of the chipset.
  • the components in the SOC can all be integrated into a complete chip, or part of the components in the SOC can be integrated, and the other parts are not integrated.
  • the baseband communication module in the SOC can not be integrated with other parts and become an independent part.
  • the components in the SOC can be connected to each other through a bus or other connecting lines.
  • the PMU, voice codec, RF, etc. outside the SOC usually include analog circuit parts, so they are often located outside the SOC and are not integrated into it.
  • the PMU is used for external mains or batteries to supply power to the SOC, and the mains can be used to charge the battery.
  • the voice codec is used as the sound codec unit to connect with earphones or speakers to realize the conversion between natural analog voice signals and digital voice signals that can be processed by the SOC.
  • the short-range module can include wireless fidelity (WiFi) and Bluetooth, and can also optionally include an infrared, near field communication (NFC), radio (FM), or global positioning system (GPS) module, etc.
  • the RF is connected with the baseband communication module in the SOC to realize the conversion between the air interface RF signal and the baseband signal, that is, mixing. For mobile phones, receiving is down-conversion, and sending is up-conversion.
  • Both the short-range module and the RF can have one or more antennas for signal transmission or reception.
  • the baseband is used for baseband communication, including one or more of a variety of communication modes, and is used for processing wireless communication protocols, including protocol layers such as the physical layer (layer 1), medium access control (MAC) layer (layer 2), and radio resource control (RRC) layer (layer 3); it can support various cellular communication standards, such as long term evolution (LTE) communication or 5G new radio (NR) communication.
  • the Sensor hub is an interface between the SOC and external sensors, and is used to collect and process data from at least one external sensor.
  • the external sensors can be, for example, accelerometers, gyroscopes, control sensors, image sensors, and so on.
  • the arithmetic processor can be a general-purpose processor, such as a central processing unit (CPU), or one or more integrated circuits, such as one or more application specific integrated circuits (ASICs), or , One or more digital signal processors (digital signal processors, DSP), or microprocessors, or, one or more field programmable gate arrays (FPGA), etc.
  • the arithmetic processor can include one or more cores, and can selectively schedule other units.
  • RAM can store some intermediate data during calculation or processing, such as intermediate calculation data of CPU and baseband.
  • ISP is used to process the data collected by the image sensor.
  • I/O is used for the SOC to interact with various external interfaces, such as the universal serial bus (USB) interface for data transmission.
  • the memory can be a chip or a group of chips.
  • the display screen can be a touch screen, which is connected to the bus through a display interface.
  • the display interface can be used for data processing before image display, such as overlaying of multiple layers to be displayed, buffering of display data, or control and adjustment of screen brightness.
  • the image signal processor involved in the embodiment of the present application may be one or a group of chips, that is, it may be integrated or independent.
  • the image signal processor included in the terminal device 100 may be an integrated ISP chip integrated in the arithmetic processor.
  • Figure 2 shows a schematic diagram of image processing by the terminal device.
  • the terminal device can perform image processing on the input image to be processed.
  • the image processing process can include dynamic range adjustment processing, and can also include image color processing and other processing processes.
  • the terminal device then outputs the processed target image.
  • the ISP in the terminal device can adjust the dynamic range of the image to obtain a processed target image.
  • the lookup table can be any form of lookup table that can be understood by those skilled in the art.
  • a one-dimensional (1D) lookup table is used in the embodiment of the present application.
  • the lookup table includes a series of input data and output data, and the input data and the output data have a one-to-one correspondence.
  • the output data in the lookup table can be embodied in the form of table item values, and the input data can be expressed as the lookup table value.
  • the look-up table value may not be displayed in the look-up table, and the look-up table value is expressed in the form of table item index or table item subscript. That is, the lookup table includes one or more table item values, and each table item value corresponds to a lookup table value. By inputting the lookup table value, the table item value corresponding to the lookup table value can be obtained.
  • lookup table can be embodied in a table form, or can be embodied in other forms that can represent the corresponding relationship between the input data and the output data.
  • the first lookup table, the second lookup table, or the third lookup table are used to represent multiple lookup tables, and the concept of each lookup table can refer to the description in point 2) of this article.
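  • A minimal sketch of such a one-dimensional look-up table, with the look-up value implied by the position of the entry (all values here are illustrative):

      # entry values indexed by the look-up value: lut[k] is the output for input k
      lut = [10, 12, 15, 19, 24, 30]

      def table_lookup(lut, lookup_value):
          """Return the entry value that corresponds one-to-one to lookup_value."""
          return lut[lookup_value]

      print(table_lookup(lut, 3))   # -> 19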
  • Pixels are the basic unit that constitutes an image.
  • the color of a pixel is usually described by several (exemplary, such as three) relatively independent attributes.
  • the combined effect of these independent attributes naturally constitutes a spatial coordinate, that is, a color space.
  • the independent attributes that make up the pixels are called the components of each pixel.
  • the component of the pixel may be the color component of the image, such as R component, G component, B component, or Y component.
  • Brightness is a physical measure of the radiance of a scene; its unit is candela per square meter (cd/m²), and it can also be expressed in nits.
  • a value corresponding to the color component of a particular image is called the primary color value of the component.
  • the primary color values have different forms.
  • the primary color values can be expressed as linear primary color values or nonlinear primary color values.
  • the linear primary color value is directly proportional to the light intensity, and its value is normalized to [0,1]; it is also known as the optical signal value, where 1 represents the highest display brightness. The meaning of 1 differs when different transfer functions are used: when the PQ transfer function is used, 1 means a highest display brightness of 10000 nits; when the SLF transfer function is used, 1 likewise means a highest display brightness of 10000 nits; 1 may also mean a highest display brightness of 2000 nits; and, for example, when the BT.1886 transfer function is used, 1 generally indicates a highest display brightness of 300 nits.
  • the non-linear primary color value is the normalized digital expression value of image information, and its value is normalized to [0,1], also known as the electrical signal value.
  • transfer functions include the optical-electro transfer function (OETF) and the electro-optical transfer function (EOTF); such a transfer function can be used to realize the conversion between a non-linear primary color value and a linear primary color value (the EOTF converts a non-linear primary color value into a linear primary color value, and the OETF performs the reverse conversion).
  • Commonly used SDR optical-electro transfer functions include the ITU-R (International Telecommunication Union - Radiocommunication Sector) BT.1886 optical-electro conversion function; correspondingly, SDR electro-optical conversion functions include the ITU-R BT.1886 electro-optical conversion function.
  • Commonly used HDR photoelectric (optical-electro) conversion functions may specifically include, but are not limited to, the following: the perceptual quantizer (PQ) photoelectric conversion function, the hybrid log-gamma (HLG) photoelectric conversion function, and the scene luminance fidelity (SLF) photoelectric conversion function.
  • HDR electro-optical conversion functions may specifically include, but are not limited to, the following functions: PQ electro-optical conversion function, HLG electro-optical conversion function, SLF electro-optical conversion function.
  • the PQ photoelectric/electro-optical conversion function (also called the PQ conversion curve) is defined by the SMPTE ST 2084 standard, and the HLG photoelectric/electro-optical conversion function (also called the HLG conversion curve) was jointly proposed by the BBC and NHK to define a high dynamic range image standard.
  • the image converted via the PQ conversion curve complies with the SMPTE 2084 standard
  • the image converted via the HLG conversion curve complies with the HLG standard.
  • the data converted using the PQ conversion curve can be referred to as optical/electrical signal values in the PQ domain; the data converted using the HLG conversion curve can be referred to as optical/electrical signal values in the HLG domain; and the data converted using the SLF conversion curve can be referred to as optical/electrical signal values in the SLF domain.
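  • For reference only, the sketch below implements the PQ (SMPTE ST 2084) transfer pair using the publicly documented constants; it is not taken from this application, and the HLG, SLF, and BT.1886 functions follow their own formulas.

      # PQ constants as published in SMPTE ST 2084
      M1, M2 = 2610 / 16384, 2523 / 4096 * 128
      C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

      def pq_oetf(luminance_nits):
          """Linear light (0..10000 nits) -> non-linear PQ signal value in [0, 1]."""
          y = max(luminance_nits, 0.0) / 10000.0
          return ((C1 + C2 * y ** M1) / (1.0 + C3 * y ** M1)) ** M2

      def pq_eotf(signal):
          """Non-linear PQ signal value in [0, 1] -> linear light in nits."""
          e = max(signal, 0.0) ** (1.0 / M2)
          return 10000.0 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)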
  • the format of the image may be a red, green, blue (RGB) format, a luma-chroma separated (YUV) format, or a Bayer format.
  • the dynamic range of the image sensor is very small. Generally, the dynamic range of the CCD sensor does not exceed 1000:1, but the dynamic range of the brightness in the real scene is very wide.
  • the brightness of a scene under starlight at night is about 0.0001 cd/m², while the brightness of a scene under sunlight during the day reaches 100000 cd/m².
  • the dynamic range of an image may generally include high dynamic range (HDR) and standard dynamic range (SDR).
  • HDR images are used to describe the complete visual range of real-world scenes. HDR images can show detailed information of extremely dark and extremely bright areas that may be lost by traditional shooting equipment but can be perceived by the human visual system.
  • an image optical signal whose dynamic range exceeds 0.01 to 1000 nits is called a high dynamic range optical signal value; an image optical signal whose dynamic range is less than 0.1 to 400 nits is called an SDR optical signal value.
  • the display capability of an HDR display device meets the dynamic range of HDR image light signal values and supports the HDR electro-optical conversion function.
  • the display capability of an SDR display device meets the dynamic range of SDR image light signal values and supports the SDR photoelectric conversion function.
  • the obtained HDR image electrical signal values are adjusted through dynamic range adjustment to obtain the final SDR image electrical signal values.
  • the conversion parameters used in the dynamic range adjustment are only related to fixed data such as the maximum or minimum brightness of the SDR display device.
  • such a processing method may not guarantee that the SDR image display effect after the dynamic range adjustment is consistent with the HDR image display effect; problems such as contrast changes and loss of detail will occur, which in turn affects the display effect of the image.
  • the image processing method provided by the embodiment of the present application is as follows. This method can be executed by the terminal device shown in FIG. 1 or by other devices with image processing functions.
  • S301 Determine the maximum value of the primary color values of the multiple components of the pixels of the image to be processed.
  • the multiple components of the pixel of the image to be processed refer to brightness-related components in the pixel.
  • the primary color value may be the brightness value of the Y component of the YUV space.
  • the R component, G component, and B component can be used to characterize the brightness of each color component of the image.
  • the brightness value of the Y component can be calculated from the color values of the R component, G component, and B component, for example as Y = a11 × R + a12 × G + a13 × B, where a11, a12, and a13 are fixed coefficients.
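  • As a small illustration, the coefficients below are the BT.709 luma weights, used here only as an example of fixed coefficients a11, a12, a13; the application does not fix their values.

      def luma_from_rgb(r, g, b, a11=0.2126, a12=0.7152, a13=0.0722):
          """Y = a11*R + a12*G + a13*B with fixed (example) coefficients."""
          return a11 * r + a12 * g + a13 * b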
  • the R, G, and B components in the RGB space, and the Y component in the YUV space are all related to the brightness of the image.
  • the primary color values of multiple components of the pixels of the image to be processed can refer to the primary color values of the R component, G component, B component, and Y component of the pixels of the image to be processed.
  • the maximum value of the primary color values of the multiple components of a pixel of the image to be processed is the maximum among the primary color value of the R component, the primary color value of the G component, and the primary color value of the B component. Assuming that the primary color values are expressed in a normalized manner, the maximum value is 1 and the minimum value is 0.
  • the primary color value of the R component of the pixel to be processed is 0.5, the primary color value of the G component is 0.6, and the primary color value of the B component is 0.7, that is, 0.7 is determined as the maximum value of the primary color values of the three components of the pixel.
  • the primary color value is a fixed-point value
  • the primary color value of the R component of the image to be processed is n1
  • the primary color value of the G component is n2
  • the primary color value of the B component is n3, n1, n2, and n3 are all fixed-point values, n3 >n2>n1, then n3 is the maximum value among the primary color values of the three components of the pixel.
  • alternatively, the maximum value of the primary color values of the multiple components of a pixel of the image to be processed is the primary color value of the single applicable component.
  • the maximum value of the primary color values of the multiple components of the pixels of the image to be processed is the primary color value of the Y component.
  • the first look-up table includes or indicates the mapping relationship between the preset ratio and the preset primary color value.
  • the determination of the mapping relationship can be achieved through the following process: obtaining the conversion value of the preset primary color value according to the first conversion function, and using the ratio of the conversion value to the preset primary color value as the preset ratio.
  • S303 Perform dynamic range adjustments on the primary color values of the multiple components of the pixel according to the ratio determined in S302 to have a mapping relationship with the maximum value, to obtain a target image.
  • when the image dynamic range of the image to be processed is greater than the image dynamic range of the target image, the primary color values of the multiple components of the pixel are adjusted according to the above ratio to reduce the dynamic range; or, when the image dynamic range of the image to be processed is smaller than the image dynamic range of the target image, the primary color values of the multiple components of the pixel are adjusted according to the above ratio to expand the dynamic range.
  • reducing can also be referred to as lowering or decreasing, and expanding can also be referred to as raising or increasing.
  • the ratio that has a mapping relationship with the maximum value may be respectively multiplied by the primary color values of multiple components of the pixel to adjust the dynamic range.
  • other dynamic compression processing methods can also be performed according to the ratio, as long as the dynamic range of the multiple components of the pixels of the image to be processed can be reduced or expanded so that the target image is better compatible with the display device; the specifics are not limited here.
  • the image to be processed may include multiple pixels, and each pixel can be processed according to the process shown in FIG. 3 to obtain the target image.
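  • Putting S301 to S303 together, a minimal per-pixel sketch could look as follows; it assumes fixed-point RGB primary color values, a first look-up table with a uniform step, and entry values quantized by a preset coefficient, and it is illustrative rather than the claimed hardware flow.

      def adjust_pixel(r, g, b, lut_entries, step, quant_coeff):
          """S301-S303 for one pixel: take the maximum component, look up its
          ratio, and scale every component by that ratio."""
          max_val = max(r, g, b)                                   # S301
          idx = min(max_val // step, len(lut_entries) - 1)
          ratio = lut_entries[idx] / quant_coeff                   # S302: dequantized ratio
          return r * ratio, g * ratio, b * ratio                   # S303

      # illustrative call for 10-bit primary color values, step 4, 12-bit quantization
      # out = adjust_pixel(512, 640, 720, lut_entries, step=4, quant_coeff=1 << 12)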
  • each step of the embodiment in FIG. 3 can be implemented by the hardware circuit of the terminal device; for example, determining the ratio that has a mapping relationship with the maximum value through the first look-up table allows integer data to be processed, so that the execution of the image processing flow can be implemented in the hardware circuit, which improves the practical applicability of the image color processing method.
  • the input data of the first look-up table can be floating-point values, integer values or fixed-point values. Take the input data of the first look-up table as a fixed-point value as an example.
  • the value range of the input data of the lookup table may be determined according to the bit width of the primary color value, and the value range of the input data of the lookup table may be less than or equal to the value range determined by the bit width of the primary color value.
  • when the primary color value is a fixed-point value, in general, the primary color value is N bits, and N is a positive integer.
  • for example, the bit width of the primary color value is 8 bits, 10 bits, 12 bits, 14 bits, or 16 bits.
  • the value range of the primary color value is [0, 2^N - 1] or [1, 2^N].
  • for example, the bit width of the primary color value of an RGB image is 10 bits, and the value range of the primary color value is [0, 2^10 - 1].
  • the look-up table value of the look-up table is used as the input data, and the output data is the table item value of the look-up table.
  • the look-up table includes or indicates the mapping relationship between the table item value and the look-up table value.
  • the look-up table includes the mapping relationship between the preset ratio and the preset primary color value, that is, the table item value can be considered to represent the preset ratio, and the look-up table value represents the preset primary color value.
  • the conversion value of the look-up table value is obtained according to the first conversion function, and the ratio of the conversion value to the look-up table value is taken as the table item value corresponding to the look-up table value.
  • the look-up table value is a fixed-point value
  • the look-up table can be generated in the following way.
  • the look-up table value (that is, the fixed-point value) is inversely quantized to obtain the floating-point value.
  • the maximum value of the value range determined by the bit width of the primary color value is 2^N - 1.
  • the look-up table value is M
  • the floating-point value M1 is obtained by M/(2^N - 1).
  • the floating-point value is converted into a conversion value based on the first conversion function, for example, the floating-point value M1 is converted into M2 based on the first conversion function.
  • M2 is a floating-point value. Obtain the ratio M2/M1 of the converted value M2 and the floating-point value M1 in the look-up table.
  • M2/M1 is a floating-point value; according to the preset quantization coefficient, the ratio M2/M1 is quantized to obtain the entry value of the look-up table.
  • the data obtained after M2/M1 quantization is a fixed-point value. That is, both the look-up table value and the table item value of the look-up table can be fixed-point values.
  • the table item value corresponding to each look-up table value can be obtained, thereby generating the look-up table.
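  • A sketch of this generation step is given below; the conversion function, bit width, step, and quantization coefficient are placeholders to be chosen by the implementation, and the handling of the look-up value 0 is an assumption since the text does not address it.

      def build_lut(convert, bit_width, step, quant_coeff):
          """For each look-up value M: dequantize to M1, convert to M2 = F(M1),
          then quantize the ratio M2/M1 into a fixed-point entry value."""
          max_code = (1 << bit_width) - 1                      # 2**N - 1
          entries = []
          for m in range(0, max_code + 1, step):               # look-up values M
              m1 = m / max_code if m > 0 else 1.0 / max_code   # avoid division by zero at M = 0
              m2 = convert(m1)                                 # floating-point conversion value
              entries.append(int(round((m2 / m1) * quant_coeff)))
          return entries

  • For example, calling build_lut with a 10-bit bit width, a step of 4, and a quantization coefficient of 1 << 12 would yield 256 fixed-point entry values; these parameter choices are hypothetical.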
  • the look-up table value of the look-up table is the input data of the look-up table, which is used to obtain the corresponding table item value according to the look-up table value.
  • the index value of the lookup table is the sequence number of the table item value, which is generally generated according to the order of natural numbers from small to large or from large to small.
  • the look-up table value can be determined according to the index value and the step length between the index value.
  • the value range of the first look-up table value of the first look-up table described in the embodiment of FIG. 3 is determined by the bit width of the primary color value.
  • for example, the bit width of the primary color value is N bits, the value range of the primary color value is [0, 2^N - 1], and the maximum value is 2^N - 1.
  • the first look-up table value can be set to a value from 0 to 2^N - 1.
  • the sequence number of a first entry value in the first look-up table can be referred to as the index value of the first look-up table; for example, if the first look-up table includes L entry values, the sequence numbers of the first entry values are 0 to L-1 (or 1 to L), where L is a positive integer.
  • that is, the index value of the first look-up table is 0 to L-1 (or 1 to L).
  • the step length between every two index values in the lookup table can be 1 or an integer greater than 1.
  • the first look-up table value is determined based on the index value of the first look-up table and the step size between index values; when the number of first look-up table values is 2^N, the first look-up table values correspond one-to-one to the index values and the step size is 1.
  • the primary color value is 12 bits
  • the primary color value ranges from 0 to 4095
  • the maximum value is 4095.
  • the index value can be from 0 to 4095, or from 1 to 4096; assuming that the index value is 0 to 1023, the step size is 4, and the look-up table values M are 0, 4, 8, 12, 16, 20, ..., 4092 in turn.
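  • In other words, with a step size s the look-up table value for index i is i * s, and an input primary color value v selects index v // s; a short illustration:

      step = 4
      lookup_values = [i * step for i in range(1024)]      # 0, 4, 8, ..., 4092

      def index_for_value(v, step=4):
          return v // step                                 # e.g. 4095 -> index 1023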
  • in a possible implementation, the value range determined by the bit width of the primary color value includes a first value range and at least one second value range; that is, the value range determined by the bit width of the primary color value may include multiple subsets, each subset being a value range, and the union of the multiple subsets is the value range determined by the bit width of the primary color value, or the union of the multiple subsets may also be smaller than that value range. Generally, two subsets are taken as an example, that is, the value range determined by the bit width of the primary color value includes a first value range and a second value range; the value range of the look-up table value of the first look-up table described in the embodiment of FIG. 3 is the first value range.
  • for example, the bit width of the primary color value is N bits, the value range of the primary color value is [0, 2^N - 1], and the maximum value is 2^N - 1.
  • for example, the second value range is [0, N1] and the first value range is [N1+1, 2^N - 1]; that is, the minimum value of the first value range is greater than the maximum value of the second value range.
  • in this case, the value range of the look-up table value of the first look-up table is [N1+1, 2^N - 1], and the first look-up table value can be set to a value in [N1+1, 2^N - 1].
  • the sequence number of the first entry value in the first lookup table can be called the index value of the first lookup table.
  • if the first lookup table includes L entry values, the sequence numbers of the first lookup table entry values are 0 to L-1 or 1 to L,
  • where L is a positive integer,
  • and the index values of the first look-up table are 0 to L-1 or 1 to L.
  • the step size between every two index values in the first look-up table may be 1 or an integer greater than 1.
  • the first look-up table value is determined based on the index value of the first look-up table and the step size between the index value.
  • the first look-up table value can have a one-to-one correspondence with the index value, that is, the step size is 1.
  • the step size can also be greater than 1.
  • a second lookup table can be generated according to the second value range, and the value range of the table lookup value of the second lookup table corresponds to the second value range.
  • the value range of the look-up table values of the second lookup table is 0 to N1, and the second look-up table values can be set to values in 0 to N1.
  • the sequence number of the second entry value in the second lookup table can be called the index value of the second lookup table. For example, if the second lookup table includes L1 entry values, the sequence numbers of the second lookup table entry values are 0 to L1-1 or 1 to L1, where L1 is a positive integer.
  • the index values of the second lookup table are 0 to L1-1 or 1 to L1.
  • the step size between every two index values in the second lookup table may be 1 or an integer greater than 1.
  • the second look-up table value is determined based on the index value of the second look-up table and the step size between the index value.
  • the second look-up table value can have a one-to-one correspondence with the index value, that is, the step size is 1.
  • the step size can also be greater than 1.
  • the step size between the index values of the first look-up table and the step size between the index values of the second look-up table may be the same or different.
  • the primary color value is 12 bits
  • the primary color value ranges from 0 to 4095
  • the maximum value is 4095.
  • the second value range is 0 to 255,
  • and the first value range is 256 to 4095.
  • the number of entries in the second look-up table is 64
  • the look-up table values of the second look-up table are (0, 4, 8, 12, 16, 20, ..., 252).
  • the index value of the second lookup table may be 0-63, or 1-64.
  • the index value of the first look-up table may be 0-127, or 1-128.
  • look-up table generation process can be applied to the first look-up table, and can also be applied to the second look-up table.
  • the first look-up table corresponds to the first value range.
  • the first look-up table corresponding to the first value range in which the maximum value of the primary color values of the multiple components is located can also be determined.
  • in the following description, the maximum value of the primary color values of the plurality of components is briefly referred to as the maximum value. For example, a threshold can be set, the value range in which the maximum value falls is determined according to the comparison result between the maximum value and the threshold, and the look-up table corresponding to that value range is then selected.
  • if the maximum value is less than the threshold, the second value range in which the maximum value falls is determined, the second lookup table corresponding to the second value range is determined, and the ratio having a mapping relationship with the maximum value is determined according to the second lookup table.
  • if the maximum value is greater than or equal to the threshold, the first value range in which the maximum value falls is determined, the first lookup table corresponding to the first value range is determined, and the ratio having a mapping relationship with the maximum value is determined according to the first lookup table.
  • the value of the primary color value is 12 bits
  • the value range of the primary color value is 0-4095
  • the maximum value of the value range is 4095.
  • the second value range is 0 to 255,
  • and the first value range is 256 to 4095.
  • the threshold can be set to 256.
  • the ratio that has a mapping relationship with the maximum value can also be determined according to the following comparison method.
  • if the maximum value is less than or equal to the threshold,
  • the second value range in which the maximum value falls is determined, the second lookup table corresponding to the second value range is determined, and the ratio having a mapping relationship with the maximum value is determined according to the second lookup table.
  • if the maximum value is greater than the threshold, the first value range in which the maximum value falls is determined, the first lookup table corresponding to the first value range is determined, and the ratio having a mapping relationship with the maximum value is determined according to the first lookup table.
  • the color value is 12 bits
  • the color value ranges from 0 to 4095
  • the maximum value is 4095.
  • the second value range is 0 to 255,
  • and the first value range is 256 to 4095.
  • the threshold can be set to 255.
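  • A minimal sketch of this threshold-based selection follows (threshold 255, two look-up tables); the entry values and the step of the first look-up table are placeholders and assumptions, not values taken from this embodiment.

```python
# Sketch of the second comparison method: the maximum value MAX selects the
# look-up table whose value range contains it (threshold 255). Entry values
# are placeholders; the step of the first look-up table is assumed.

THRESHOLD = 255
LUT2_VALUES = list(range(0, 256, 4))      # look-up values 0, 4, ..., 252 (64 values)
LUT1_VALUES = list(range(256, 4096, 30))  # look-up values covering 256..4095 (128 values, step assumed)
LUT2_ENTRIES = [0] * len(LUT2_VALUES)     # placeholder quantized ratios
LUT1_ENTRIES = [0] * len(LUT1_VALUES)     # placeholder quantized ratios

def select_lut(max_value: int):
    """Return (look-up values, entry values) of the table containing max_value."""
    if max_value <= THRESHOLD:
        return LUT2_VALUES, LUT2_ENTRIES
    return LUT1_VALUES, LUT1_ENTRIES
```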
  • the first look-up table is used to determine the ratio that has a mapping relationship with the maximum value.
  • the first look-up table includes the mapping relationship between the preset ratio and the preset primary color value.
  • the preset primary color value of the first look-up table may include the maximum value, and the ratio corresponding to the maximum value may be determined according to the mapping relationship.
  • the maximum value may not be included in the preset primary color values of the first look-up table.
  • the ratio having a mapping relationship with the maximum value may be determined by interpolation.
  • interpolation is a method in numerical analysis for estimating unknown data from known discrete data.
  • the interpolation method used to determine the ratio having a mapping relationship with the maximum value may be an interpolation or extrapolation method, a linear or nonlinear interpolation method, or any of nearest-neighbor interpolation, bilinear interpolation, cubic interpolation, or Lanczos interpolation; the specific interpolation method can be selected according to the actual situation.
  • a first preset primary color value and a second preset primary color value may be determined in the first look-up table, and the first ratio and the second ratio corresponding to them may be determined respectively according to the mapping relationship between the preset ratios and the preset primary color values.
  • interpolation is then performed between the first preset primary color value and the second preset primary color value to obtain the ratio having a mapping relationship with the maximum value.
  • the first preset primary color value and the second preset primary color value may be the two preset primary color values adjacent to the maximum value.
  • for example, the maximum value lies between the first preset primary color value and the second preset primary color value, and among the preset primary color values the maximum value is adjacent to both of them;
  • or, the first preset primary color value and the second preset primary color value are both less than the maximum value, and among the preset primary color values the maximum value, the first preset primary color value and the second preset primary color value are adjacent to one another; or, the first preset primary color value and the second preset primary color value are both greater than the maximum value, and among the preset primary color values the maximum value, the first preset primary color value and the second preset primary color value are adjacent to one another.
  • assume the step size of the first lookup table (LUT1) is 2^step, the number of entries in the first lookup table is NUM, and max is the input value of the lookup-table interpolation.
  • the following example introduces how to perform linear interpolation on the first look-up table (LUT).
  • the ratio corresponding to the maximum value max is obtained by interpolating in the first look-up table (LUT) according to its step size, where data1 and data2 denote the two entry values adjacent to max and dec denotes the offset of max within the step.
  • the final interpolated value data3 is: data3 = (data1*(step - dec) + data2*dec)/step.
  • data3 is the ratio having a mapping relationship with the maximum value.
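  • A minimal sketch of this linear LUT interpolation follows; the entry values and the step size are placeholders, and clamping at the last entry is an assumption.

```python
# Sketch of the linear LUT interpolation: data3 = (data1*(step-dec)+data2*dec)/step,
# where data1/data2 are the entries bracketing max and dec is max's offset in the step.
# The entry values and the step are placeholders; clamping at the last entry is assumed.

def lut_interpolate(entries, step, max_value):
    idx = max_value // step                      # entry at or below max_value
    dec = max_value - idx * step                 # offset of max_value within the step
    data1 = entries[min(idx, len(entries) - 1)]
    data2 = entries[min(idx + 1, len(entries) - 1)]
    return (data1 * (step - dec) + data2 * dec) // step

# Usage with a hypothetical 128-entry table and step 32:
ratio = lut_interpolate(list(range(128)), 32, 1000)
```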
  • the process of generating the first lookup table is a software process, which can be implemented by software.
  • the software process can be packaged as firmware/software (firmware).
  • Each step in the embodiment of FIG. 3 is a hardware flow, which can be implemented by a hardware circuit.
  • the hardware circuit of the terminal device can process each frame of the image to be processed in accordance with the steps in the embodiment of FIG. 3 to obtain the target image corresponding to each frame of the image to be processed. In the interval between every two consecutive frames, the firmware/software of the terminal device may generate the first look-up table.
  • the software and hardware separation of the image dynamic range adjustment process by the terminal device may not be limited to the implementation manners exemplified in this paragraph.
  • the first conversion function will be described below.
  • the dynamic parameters of the first conversion function can also be determined, and the first conversion function after the determined dynamic parameters is subsequently used.
  • the dynamic parameters of the first conversion function can be obtained according to at least one of the following information: statistical information of the image to be processed; the first reference value of the range of the image to be processed; the second reference value of the range of the image to be processed; the first reference value of the target image range; the second reference value of the target image range.
  • alternatively, the dynamic parameters of the first conversion function can be obtained according to at least one of the following information: statistical information of the sequence in which the image to be processed is located; the first reference value of the range of the sequence in which the image to be processed is located; the second reference value of the range of the sequence in which the image to be processed is located; the first reference value of the range of the sequence in which the target image is located; the second reference value of the range of the sequence in which the target image is located.
  • the statistical information of the image to be processed or the sequence of the image to be processed may refer to information related to the attribute of the image to be processed or the sequence of the image to be processed.
  • the primary color value can be a linear primary color value or a non-linear primary color value.
  • the primary color value can be a luminance component (Y component), and the corresponding primary color value is a non-linear primary color value.
  • the information related to the attributes of the image to be processed or of the sequence of images to be processed may also include other information; for example, it may also be the variance of the primary color values of the multiple components of the image to be processed or of the sequence of images to be processed.
  • a certain functional relationship between the above-listed information can also be used as statistical information, for example, it can refer to the sum of the average value and the standard deviation of the image to be processed or the sequence of images to be processed, which is not specifically limited here.
  • the average value of the image to be processed or the image sequence to be processed may specifically refer to the average value of the non-linear primary color values of the R component of the image to be processed or the pixel set of the image sequence to be processed, or the average of the non-linear primary color values of the G component Value, or the average value of the non-linear primary color values of the B component, or the average value of the non-linear primary color values of the Y component.
  • the average value of the image to be processed or the image sequence to be processed may specifically refer to: the average value of the linear primary color values of the R component of the image to be processed or the pixel set of the image sequence to be processed, or the average value of the linear primary color values of the G component, or The average value of the linear primary color values of the B component, or the average value of the linear primary color values of the Y component.
  • in different color spaces, the corresponding average of non-linear primary color values or of linear primary color values may take various specific forms.
  • here the RGB color space and the YUV color space are described as examples; other color spaces are not repeated.
  • the first reference value of the range of the image to be processed or the sequence of images to be processed may include any one of the following:
  • the maximum brightness of the display device used to display the image to be processed, where the display device is pre-configured or selected, when determining the dynamic parameters of the conversion function, as the device used to display the image to be processed;
  • a first preset value; for example, the first preset value is set to 0.85 or 0.53;
  • or a first reference value obtained by searching a first preset list, which is specifically described as follows.
  • if the embodiments of the application are used to realize conversion of an HDR image to an SDR image,
  • the image to be processed is an HDR image
  • the statistical information of the image to be processed is the average value and standard deviation of the image to be processed.
  • the first reference value, that is, the above reference value of the range of the image to be processed, is obtained by searching the first preset list according to the statistical information of the image to be processed, where the list information of the first preset list is shown in Table 1.
  • when the sum of the average value and the standard deviation of the image to be processed is greater than 0.7, the reference value of the range of the image to be processed is 0.92; when the sum is less than 0.2, the reference value is 0.85; when the sum is between 0.2 and 0.5, the reference value can be obtained by interpolation based on 0.2 and 0.5, and when it is between 0.5 and 0.7 it can likewise be obtained by interpolation, for example by linear interpolation or weighted-average interpolation, which is not limited here and is not repeated.
  • the image to be processed is an SDR image
  • the statistical information of the image to be processed is the average value and standard deviation of the image to be processed.
  • the first reference value, that is, the above reference value of the range of the image to be processed, is obtained by searching the first preset list according to the statistical information of the image to be processed, where the list information of the first preset list is shown in Table 2.
  • when the sum of the average value and the standard deviation of the image to be processed is greater than 0.7, the reference value of the range of the image to be processed is 0.58; when the sum is less than 0.2, the reference value is 0.53; when the sum is between 0.2 and 0.5, the reference value can be obtained by interpolation based on 0.2 and 0.5, and when it is between 0.5 and 0.7 it can likewise be obtained by interpolation, for example by linear interpolation or weighted-average interpolation, which is not limited here and is not repeated.
  • if the embodiments of this application are used to implement conversion between HDR images with different dynamic ranges,
  • the image to be processed is an HDR image
  • the statistical information of the image to be processed is the average value and standard deviation of the image to be processed
  • the first reference value, that is, the reference value of the range of the image to be processed, is obtained by searching the first preset list according to the statistical information of the image to be processed,
  • where the list information of the first preset list is shown in Table 3.
  • when the sum of the average value and the standard deviation of the image to be processed is greater than 0.7, the reference value of the range of the image to be processed is 0.90; when the sum is less than 0.2, the reference value is 0.82; when the sum is between 0.2 and 0.5, the reference value can be obtained by interpolation based on 0.2 and 0.5, and when it is between 0.5 and 0.7 it can likewise be obtained by interpolation, for example by linear interpolation or weighted-average interpolation, which is not limited here and is not repeated.
  • Tables 1 to 3 are pre-configured lists, and the data in Tables 1 to 3 are optimal parameters obtained based on empirical data.
  • Tables 1 to 3 only take the sum of the average value and the standard deviation of the image to be processed as an example of the statistical information.
  • other statistical information of the image to be processed, or statistical information of the image sequence, can likewise be used to obtain the reference value of the range of the image to be processed by table lookup, which is not limited here and is not repeated (a sketch of such a lookup with interpolation is given below).
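  • A minimal sketch of a Table 1-style lookup with linear interpolation follows; only the endpoints quoted above are used, and the value at the 0.5 breakpoint is not given in this extract, so it is a placeholder.

```python
# Sketch of the Table 1-style lookup: the statistic x (average value plus
# standard deviation of the image to be processed) selects the first reference
# value, with linear interpolation between breakpoints. The value at x = 0.5
# is not given in this extract and is a placeholder.
from bisect import bisect_left

BREAKPOINTS = [0.2, 0.5, 0.7]
VALUES = [0.85, 0.88, 0.92]        # 0.88 at x = 0.5 is a placeholder

def first_reference_value(x: float) -> float:
    if x <= BREAKPOINTS[0]:
        return VALUES[0]
    if x >= BREAKPOINTS[-1]:
        return VALUES[-1]
    i = bisect_left(BREAKPOINTS, x)
    x0, x1 = BREAKPOINTS[i - 1], BREAKPOINTS[i]
    v0, v1 = VALUES[i - 1], VALUES[i]
    return v0 + (v1 - v0) * (x - x0) / (x1 - x0)
```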
  • the second reference value of the range of the image to be processed or the image sequence range to be processed may include any one of the following:
  • the minimum brightness of the display device used to display the image to be processed, where the display device is pre-configured or selected, when determining the dynamic parameters of the conversion function, as the device used to display the image to be processed;
  • a second reference value obtained by searching a second preset list;
  • or a second preset value; for example, the second preset value is set to 0.05 or 0.12.
  • the second reference value of the range of the image to be processed is obtained by searching the second preset list based on the statistical information of the image to be processed or the image sequence to be processed.
  • if the embodiments of the application are used to realize conversion of an HDR image to an SDR image,
  • the image to be processed is an HDR image
  • the statistical information of the image to be processed is the average value and standard deviation of the image to be processed.
  • the second reference value, that is, the second reference value of the range of the image to be processed, is obtained by searching the second preset list according to the statistical information of the image to be processed,
  • where the list information of the second preset list is shown in Table 4.
  • when the difference between the average value and the standard deviation of the image to be processed is greater than 0.35, the second reference value of the image range to be processed is 0.01; when the difference is less than 0.1, it is 0; when the difference is between 0.1 and 0.2, it can be obtained by interpolation based on 0.1 and 0.2, for example by linear interpolation or weighted-average interpolation, which is not limited here and is not repeated.
  • the image to be processed is an SDR image
  • the statistical information of the image to be processed is the average value and standard deviation of the image to be processed.
  • the second reference value, that is, the second reference value of the range of the image to be processed, is obtained by searching the second preset list according to the statistical information of the image to be processed,
  • where the list information of the second preset list is shown in Table 5; the second reference values of the image to be processed (SDR) range listed there are 0.1, 0.12 and 0.15.
  • when the difference between the average value and the standard deviation of the image to be processed is less than 0.1, the second reference value of the image range to be processed is 0.1; when the difference is between 0.1 and 0.2, it can be obtained by interpolation based on 0.1 and 0.2, for example by linear interpolation or weighted-average interpolation; for larger differences it takes 0.15. This is not limited here and is not repeated.
  • if the embodiments of this application are used to implement conversion between HDR images with different dynamic ranges,
  • the image to be processed is an HDR image
  • the statistical information of the image to be processed is the average value and standard deviation of the image to be processed
  • the second reference value, that is, the second reference value of the range of the image to be processed, is obtained by searching the second preset list according to the statistical information of the image to be processed, where the list information of the second preset list is shown in Table 6.
  • when the difference between the average value and the standard deviation of the image to be processed is less than 0.1, the second reference value of the image range to be processed is 0.005; when the difference is between 0.1 and 0.2, it can be obtained by interpolation based on 0.1 and 0.2; for larger differences it takes 0.012.
  • linear interpolation, weighted-average interpolation or other interpolation methods can be used, which is not limited here and is not repeated.
  • Tables 4 to 6 are pre-configured lists, and the data in Tables 4 to 6 are optimal parameters obtained based on empirical data.
  • Tables 4 to 6 only take the difference between the average value and the standard deviation of the image to be processed as an example of the statistical information; other statistical information of the image to be processed can also be used
  • to obtain the second reference value of the range of the image to be processed by table lookup, which is not specifically limited here and is not repeated.
  • the first reference value of the target image or the range of the target image sequence may include any one of the following:
  • the maximum brightness of the display device used to display the target image, where the display device is pre-configured or selected, when determining the dynamic parameters of the conversion function, as the device used to display the target image;
  • or a third preset value; for example, the third preset value is set to 0.53 or 0.85.
  • the second reference value of the target image or the target image sequence range may include any one of the following:
  • the minimum brightness of the display device used to display the target image, where the display device is pre-configured or selected, when determining the dynamic parameters of the conversion function, as the device used to display the target image;
  • or a fourth preset value; for example, the fourth preset value is set to 0.12 or 0.05.
  • the first conversion function is introduced below.
  • the first conversion function may be an S-shaped conversion curve or an inverse S-shaped conversion curve.
  • the S-shaped conversion curve can be a curve whose slope first rises and then falls.
  • the S-shaped conversion curve can also be a curve that includes one or more segments and the slope first rises and then falls.
  • in Figure 5, an S-shaped conversion curve composed of two segments is shown; the black dots indicate the connection point of the two segments.
  • if an HDR image is converted to an SDR image, the first conversion function may be an S-shaped conversion curve; if an SDR image is converted to an HDR image, the first conversion function may be an inverse S-shaped conversion curve; if conversion between HDR images with different dynamic ranges is realized, the first conversion function may be an S-shaped conversion curve or an inverse S-shaped conversion curve.
  • the S-shaped conversion curve can conform to the following formula (1):
  • L is the maximum value of the primary color values of the multiple components of the pixel of the image to be processed
  • L' is the conversion value corresponding to the maximum value of the pixel
  • a, b, p and m are the dynamic parameters of the S-shaped conversion curve, where p and m are used to control the shape of the curve and the degree of curvature, and a and b are used to determine the range of the curve, that is, the positions of the start and the end of the curve.
  • the p and m parameters in formula (1) can be obtained in a variety of ways, and examples are described below.
  • Example 1 According to the statistical information of the image to be processed or the sequence of images to be processed, p and m are obtained by searching the preset list.
  • the statistical information of the image to be processed or the image sequence to be processed is explained by taking the average value of the primary color value of the Y channel of the image sequence to be processed as an example.
  • the average value of the primary color value of the Y channel of the image sequence to be processed is y
  • the information of the preset list described in Example 1 is shown in Table 7a or Table 7b as follows:
  • when the average value y of the primary color values of the Y channel of the image sequence to be processed is greater than 0.6, the p parameter is set to 3.2 and the m parameter is set to 2.4; when y is less than 0.1, the p parameter is set to 6.0 and the m parameter is set to 2.2; when y is between 0.55 and 0.6, the values of p and m can be obtained by interpolation.
  • the interpolation method can use any method, such as linear interpolation, weighted average interpolation, etc., which are not specifically limited here.
  • for example, the p parameter can be obtained by linear interpolation between the two adjacent breakpoints.
  • alternatively, when the average value y of the primary color values of the Y channel of the image sequence to be processed is greater than 0.5, the p parameter is 31 and the m parameter is 18; when y is less than 0.1, the p parameter is 34 and the m parameter is 18; when y is between 0.3 and 0.5, the values of p and m can be obtained by interpolation.
  • the interpolation method can use any method, such as linear interpolation, weighted average interpolation, etc., which are not specifically limited here.
  • for example, the p parameter can be obtained by linear interpolation between the two adjacent breakpoints (see the sketch below).
  • Tables 7a and 7b are pre-configured lists, and the data in them are parameters obtained based on empirical data. Similarly, the p and m parameters can also be obtained by table lookup from other statistical information of the image to be processed or of the image sequence to be processed; this is not limited here and is not repeated.
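  • A minimal sketch of obtaining p and m by linear interpolation on y follows; only the Table 7a endpoints quoted above are used, and interpolating directly between them is an assumption, since the intermediate breakpoints of Table 7a are not reproduced in this extract.

```python
# Sketch of obtaining p and m by linear interpolation on the Y-channel mean y,
# using only the Table 7a endpoints quoted in the text (y > 0.6 -> p=3.2, m=2.4;
# y < 0.1 -> p=6.0, m=2.2). Interpolating directly between these endpoints is an
# assumption; Table 7a's intermediate breakpoints are not reproduced here.

def lerp(y, y0, y1, v0, v1):
    """Linear interpolation of a parameter between breakpoints y0 and y1."""
    return v0 + (v1 - v0) * (y - y0) / (y1 - y0)

def p_m_from_y(y: float):
    if y >= 0.6:
        return 3.2, 2.4
    if y <= 0.1:
        return 6.0, 2.2
    p = lerp(y, 0.1, 0.6, 6.0, 3.2)
    m = lerp(y, 0.1, 0.6, 2.2, 2.4)
    return p, m
```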
  • Example 2 The p and m parameters are jointly determined according to the performance parameters of the target image display device, such as the gamma value, and the statistical information of the image to be processed or the image sequence to be processed.
  • the Gamma value of the target image display device can be determined first, and the gamma Gamma value of the reference target image display device can be used as the m parameter.
  • the Gamma of a general SDR display device is 2.4, that is, the m parameter can be taken as 2.4 ;
  • the p parameter is obtained by looking up Table 3 above.
  • the p and m parameters obtained in this way are basically consistent with the corresponding p and m parameters obtained when color information is embedded in pre-production and manually adjusted by a colorist.
  • the a and b parameters in the formula (1) can be obtained in a variety of ways, and an example is described below.
  • the a and b parameters can be determined by the following formula (2) and formula (3).
  • where L1 is the first reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L2 is the second reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L1' is the first reference value of the target image or of the target image sequence range,
  • and L2' is the second reference value of the target image or of the target image sequence range.
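  • Formula (1) and formulas (2) and (3) are not reproduced in this extract; the following sketch therefore assumes a commonly used rational S-shaped base curve and chooses a and b so that L1 maps to L1' and L2 maps to L2', which is the role the text assigns to a and b. Treat the base-curve form as an assumption rather than the exact formula of this application.

```python
# Sketch of a Method 1 S-shaped curve. The base curve H below is an assumed
# rational form of the kind used in comparable HDR tone mapping; a and b are
# chosen so that L1 maps to L1p and L2 maps to L2p, matching the stated role
# of a and b (fixing the start and end of the curve). Not the exact formula (1).

def H(L: float, p: float, m: float) -> float:
    """Assumed base curve; p controls the bend, m acts as a gamma-like exponent."""
    return (p * L / ((p - 1.0) * L + 1.0)) ** m

def make_s_curve(p, m, L1, L2, L1p, L2p):
    a = (L1p - L2p) / (H(L1, p, m) - H(L2, p, m))
    b = L1p - a * H(L1, p, m)
    return lambda L: a * H(L, p, m) + b

# Example: map the reference range [L2=0.0, L1=0.92] to [L2p=0.1, L1p=0.64].
curve = make_s_curve(p=3.2, m=2.4, L1=0.92, L2=0.0, L1p=0.64, L2p=0.1)
```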
  • Method 2: take an S-shaped conversion curve of the following form, which is composed of two functions:
  • for L0 <= L <= L1, with t = (L - L0)/(L1 - L0): L' = (2t^3 - 3t^2 + 1)L'0 + (t^3 - 2t^2 + t)(L1 - L0)k0 + (-2t^3 + 3t^2)L'1 + (t^3 - t^2)(L1 - L0)k1;
  • for L1 <= L <= L2, with t = (L - L1)/(L2 - L1): L' = (2t^3 - 3t^2 + 1)L'1 + (t^3 - 2t^2 + t)(L2 - L1)k1 + (-2t^3 + 3t^2)L'2 + (t^3 - t^2)(L2 - L1)k2;
  • where L is the maximum value among the primary color values of the multiple components of a pixel of the image to be processed, and L' is the conversion value corresponding to that maximum value;
  • L0, L1, L2, L'0, L'1, L'2, k0, k1 and k2 are the dynamic parameters of the S-shaped conversion curve;
  • L0, L'0 and k0 represent the input value, output value and slope at the starting point of the first segment of the curve;
  • L1, L'1 and k1 represent the input value, output value and slope at the connection point between the first segment and the second segment of the curve;
  • L2, L'2 and k2 represent the input value, output value and slope at the end point of the second segment of the curve;
  • k0, k1 and k2 satisfy k0 < k1 and k1 > k2, which ensures that the S-shaped conversion curve in Method 2 is a curve whose slope first rises and then falls.
  • L0 is the first reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L2 is the second reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L'0 is the first reference value of the target image or of the target image sequence range,
  • and L'2 is the second reference value of the target image or of the target image sequence range.
  • the L 1 , L' 1 , k 0 , K 1 , K 2 parameters are obtained by searching the fourth and fifth preset lists according to the statistical information of the image to be processed or the image sequence to be processed.
  • for example, the fourth preset list includes Table 8 below and the fifth preset list includes Table 9 below.
  • L1, k0, k1 and k2 can be obtained by looking up Table 8 below. Here the statistical information of the image to be processed or of the image sequence to be processed is taken to be the average value of the non-linear primary color values of the Y channel of the image sequence to be processed; assuming that this average value is y, the corresponding list information is shown in Table 8 below:
  • for the fifth preset list, the statistical information of the image to be processed or of the image sequence to be processed is taken to be the sum of the mean value and the standard deviation of the Y-channel non-linear primary color values of the image to be processed; assuming that this sum is x, the corresponding list information is shown in Table 9 below:
  • L′ 1 can be obtained not only by looking up a table, but also by a preset calculation formula.
  • L′ 1 can be obtained by the following formula (6):
  • the S-shaped conversion curve can be used to process the maximum value of the primary color values of the multiple components of each pixel of the image to be processed.
  • taking the S-shaped conversion curves described in Method 1 and Method 2 above as examples,
  • the maximum value of the primary color values of the multiple components of a pixel of the image to be processed can be substituted into the formulas shown in Method 1 and Method 2 to obtain the conversion value. A sketch of the Method 2 evaluation follows.
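```python
# Sketch of the two-segment (Method 2) S-shaped curve: one cubic Hermite
# segment on [L0, L1] and one on [L1, L2], defined by (L0, L0p, k0),
# (L1, L1p, k1), (L2, L2p, k2) with k0 < k1 and k1 > k2.
# The numeric parameters in the usage line are illustrative only.

def hermite(L, x0, x1, y0, y1, s0, s1):
    """Cubic Hermite segment through (x0, y0), (x1, y1) with slopes s0, s1."""
    t = (L - x0) / (x1 - x0)
    h = x1 - x0
    return ((2*t**3 - 3*t**2 + 1) * y0 + (t**3 - 2*t**2 + t) * h * s0
            + (-2*t**3 + 3*t**2) * y1 + (t**3 - t**2) * h * s1)

def s_curve(L, L0, L1, L2, L0p, L1p, L2p, k0, k1, k2):
    if L <= L1:
        return hermite(L, L0, L1, L0p, L1p, k0, k1)
    return hermite(L, L1, L2, L1p, L2p, k1, k2)

conversion_value = s_curve(0.5, L0=0.0, L1=0.4, L2=1.0,
                           L0p=0.0, L1p=0.5, L2p=0.9, k0=0.8, k1=1.5, k2=0.4)
```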
  • in another case, the first conversion function is an inverse S-shaped conversion curve.
  • the inverse S-shaped conversion curve in the embodiment of the present application is a curve whose slope first decreases and then rises, as shown in FIG. 6, which is a schematic diagram of an inverse S-shaped conversion curve whose slope first falls and then rises.
  • the inverse S-shaped conversion curve can be a curve that includes one or more segments, the slope of which first drops and then rises.
  • Figure 7 shows a schematic diagram of a reverse S-shaped conversion curve composed of two-segment curves, and the black dots indicate the connection points of the two-segment curves.
  • the reverse S-shaped conversion curve can conform to the following formula (7).
  • L is the maximum value of the primary color values of the multiple components of the pixel of the target image
  • L' is the conversion value of the maximum value of the primary color values of the multiple components of the pixel of the target image
  • the parameters a, b, p and m are the dynamic parameters of the inverse S-shaped conversion curve,
  • where the p and m parameters are used to control the shape of the curve and the degree of curvature,
  • and the a and b parameters are used to determine the range of the curve, that is, the positions of the start and the end of the curve.
  • the p and m parameters are obtained by searching the sixth preset list.
  • for example, when y is greater than 0.6 the p parameter is set to 3.2 and the m parameter to 2.4; when y is less than 0.1 the p parameter is set to 6.0 and the m parameter to 2.2; when y is between 0.55 and 0.6, the p and m parameters can be obtained by interpolation, which is not limited here and is not repeated.
  • in another example, the p and m parameters are determined according to the performance parameters of the target image display device, such as the Gamma value;
  • the Gamma value of the target image display device can be selected as the m parameter, and the p parameter can be obtained by looking up Table 3 above.
  • the a and b parameters can be determined by the following formula (8) and formula (9).
  • where L1 is the first reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L2 is the second reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L1' is the first reference value of the target image or of the target image sequence range,
  • and L2' is the second reference value of the target image or of the target image sequence range.
  • Method 2: take an inverse S-shaped conversion curve of the following form, which is composed of two functions:
  • for L0 <= L <= L1, with t = (L - L0)/(L1 - L0): L' = (2t^3 - 3t^2 + 1)L'0 + (t^3 - 2t^2 + t)(L1 - L0)k0 + (-2t^3 + 3t^2)L'1 + (t^3 - t^2)(L1 - L0)k1;
  • for L1 <= L <= L2, with t = (L - L1)/(L2 - L1): L' = (2t^3 - 3t^2 + 1)L'1 + (t^3 - 2t^2 + t)(L2 - L1)k1 + (-2t^3 + 3t^2)L'2 + (t^3 - t^2)(L2 - L1)k2;
  • where L is the maximum value among the primary color values of the multiple components of a pixel of the target image information,
  • L' is the conversion value corresponding to that maximum value,
  • and L0, L1, L2, L'0, L'1, L'2, k0, k1 and k2 are the dynamic parameters of the inverse S-shaped conversion curve;
  • L0, L'0 and k0 represent the input value, output value and slope at the starting point of the first segment of the curve;
  • L1, L'1 and k1 represent the input value, output value and slope at the connection point between the first segment and the second segment of the curve;
  • L2, L'2 and k2 represent the input value, output value and slope at the end point of the second segment of the curve;
  • k0, k1 and k2 satisfy k0 > k1 and k1 < k2, which ensures that the inverse S-shaped conversion curve in Method 2 is a curve whose slope first falls and then rises.
  • L0 is the first reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L2 is the second reference value of the range of the image to be processed or of the sequence of images to be processed,
  • L'0 is the first reference value of the target image or of the target image sequence range,
  • and L'2 is the second reference value of the target image or of the target image sequence range.
  • the L 1 , L' 1 , k 0 , K 1 , K 2 parameters are obtained by searching the seventh and eighth preset lists according to the statistical information of the image to be processed or the image sequence to be processed.
  • the seventh preset list includes Table 11 and the eighth preset list includes Table 12.
  • L1, k0, k1 and k2 can be obtained by looking up Table 11 below, where the statistical information of the image to be processed or of the image sequence to be processed is taken to be
  • the average value of the non-linear primary color values of the Y channel of the image to be processed or of the image sequence to be processed;
  • assuming that this average value is y, the corresponding list information is shown in Table 11 below:
  • for the eighth preset list, the statistical information of the image to be processed or of the image sequence to be processed is taken to be the sum of the average value and the standard deviation of the image to be processed or of the image sequence to be processed; assuming that this sum is x, the corresponding list information is shown in Table 12 below:
  • L'1 can be obtained not only by looking up a table, but also by a preset calculation formula; for example, L'1 can be obtained by the following formula (6):
  • the inverse S-shaped conversion curve can be used to process the maximum value of the primary color values of the multiple components of each pixel of the image to be processed.
  • taking the inverse S-shaped conversion curves described in Method 1 and Method 2 as examples, the maximum value of the primary color values of the multiple components of a pixel of the image to be processed can be substituted into the formulas shown in Method 1 and Method 2 to obtain the conversion value.
  • with reference to FIG. 8, taking an image in RGB format as an example, an optional embodiment for a specific scene is introduced.
  • the processing of any one pixel of the image to be processed is described;
  • each of the multiple pixels included in the image to be processed can be operated on with reference to the method shown in FIG. 8,
  • so as to finally obtain the target image corresponding to the image to be processed.
  • S801 Acquire the primary color values R, G, and B of the three color components of the pixels of the image to be processed.
  • S802 Determine the maximum value MAX among the primary color values of the three color components of the pixels of the image to be processed.
  • the concept and generation method of the look-up value can refer to the description of the first look-up value above.
  • if the value range determined by the bit width of the primary color value includes multiple value ranges and a lookup table is generated for each value range, the value range in which the maximum value MAX falls needs to be determined before MAX is substituted into a lookup table.
  • the shift operation refers to shifting according to the exponent;
  • for example, if A is 2^R1
  • and Z is 2^R2,
  • using A to shift the value Z means shifting Z to the right by R1 bits,
  • and the result of the operation is 2^(R2-R1);
  • that is, the result of the shift operation is the same as that of the division operation.
  • next, the case in which the value range determined by the bit width of the primary color value includes two value ranges is taken as an example.
  • with reference to FIG. 9, an optional embodiment is introduced,
  • in which the processing of any one pixel of the image to be processed is described.
  • each of the multiple pixels included in the image to be processed can be operated on with reference to the method shown in FIG. 9,
  • so as to finally obtain the target image corresponding to the image to be processed.
  • the two value ranges are the first value range and the second value range
  • the maximum value of the first value range is less than the minimum value of the second value range
  • the first value range corresponds to look-up table 1
  • the second value range corresponds to lookup table 2.
  • the concept and generation method of the look-up table value 1 and the look-up table 2 can refer to the description of the first look-up table value above.
  • S901 Acquire the primary color values R, G, and B of the three color components of the pixels of the image to be processed.
  • S902 Determine the maximum value MAX among the primary color values of the three color components of the pixels of the image to be processed.
  • the lookup table is selected according to the relationship between the maximum value MAX and the threshold. For example, it can be determined in the manner of S903.
  • S903 Determine whether the maximum value MAX is less than the threshold, if yes, execute S904-S906; otherwise, execute S904'-S906'.
  • the shift operation refers to shifting according to the exponent;
  • for example, if A is 2^R1
  • and Z is 2^R2,
  • using A to shift the value Z means shifting Z to the right by R1 bits,
  • and the result of the operation is 2^(R2-R1);
  • that is, the result of the shift operation is the same as that of the division operation.
  • S906 Use the preset quantization coefficient A1 to divide or shift R1, G1, and B1 to obtain the primary color values of the R component, G component, and B component of the pixel to be processed after the dynamic range adjustment: R', G', B'.
  • the shift operation refers to shifting according to the exponent;
  • for example, if A is 2^R1
  • and Z is 2^R2,
  • using A to shift the value Z means shifting Z to the right by R1 bits,
  • and the result of the operation is 2^(R2-R1);
  • that is, the result of the shift operation is the same as that of the division operation.
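  • A minimal sketch of the per-pixel flow of FIG. 9 follows. Steps S904-S905 and S904'-S906' are not reproduced in this extract, so looking up the ratio for MAX and multiplying the components by it is an assumption based on the surrounding description; only S902, S903 and the divide-or-shift of S906 are taken from the text, and all table contents are placeholders.

```python
# Sketch of the per-pixel flow of FIG. 9. Only S902, S903 and the divide/shift
# of S906 are taken from the text; looking up the ratio for MAX and multiplying
# the components by it (S904/S905) is assumed. All table contents are placeholders.

A1_BITS = 12                             # assume A1 = 2**A1_BITS, so dividing by A1 is a right shift
THRESHOLD = 255
LUT1, STEP1 = [1 << A1_BITS] * 128, 32   # placeholder ratios (1.0 in fixed point), assumed step
LUT2, STEP2 = [1 << A1_BITS] * 64, 4     # placeholder ratios (1.0 in fixed point)

def interp(entries, step, x):
    """Fixed-point linear interpolation of the ratio for x (cf. data3 above)."""
    i, dec = divmod(x, step)
    d1 = entries[min(i, len(entries) - 1)]
    d2 = entries[min(i + 1, len(entries) - 1)]
    return (d1 * (step - dec) + d2 * dec) // step

def adjust_pixel(r, g, b):
    mx = max(r, g, b)                                                    # S902
    entries, step = (LUT2, STEP2) if mx <= THRESHOLD else (LUT1, STEP1)  # S903
    c = interp(entries, step, mx)                                        # assumed S904/S905
    r1, g1, b1 = r * c, g * c, b * c                                     # assumed: apply the ratio
    return r1 >> A1_BITS, g1 >> A1_BITS, b1 >> A1_BITS                   # S906: shift by A1

print(adjust_pixel(1000, 800, 200))      # unit ratios leave the pixel unchanged
```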
  • the primary color value of the dynamic range adjustment performed in S303 is a non-linear primary color value
  • the obtained target image is recorded as the first target image.
  • after the dynamic range adjustment is performed on the primary color values of the multiple components of the pixels, the following step may further be included: according to the second conversion function, the non-linear primary color values of the multiple components of the pixels of the first target image are converted into the linear primary color values of the multiple components of the corresponding pixels of the second target image.
  • for example, the target image information can be electro-optically converted according to the HDR electro-optical conversion function to obtain the linear primary color values of the multiple components of each pixel of the SDR image, where the target image information contains the non-linear primary color values of the multiple components of the pixels of the image to be processed after the dynamic range reduction adjustment.
  • the first target image may be set as the HDR image obtained by using the first conversion curve defined by the first standard.
  • the second conversion function is the first conversion curve defined by the first standard. That is, according to the first conversion curve, the non-linear primary color values of the multiple components of each pixel of the first target image are converted into the linear primary color values of the multiple components of the corresponding pixel points of the second target image.
  • the first target image is PQ domain data
  • the PQ conversion curve converts the non-linear primary color value of the pixel of the first target image in the PQ domain into the linear primary color value of the pixel of the second target image.
  • the conversion curve defined by the high dynamic range image standard includes, but is not limited to, the PQ conversion curve, the SLF conversion curve, and the HLG conversion curve, and is not limited.
  • the non-linear primary color values of the multiple components of the pixels of the first target image are converted into the linear primary color values of the multiple components of the corresponding pixels of the second target image according to the second conversion function. After that, it also includes the following steps:
  • the third conversion function converting the linear primary color values of the multiple components of the corresponding pixels of the second target image into the nonlinear primary color values of the multiple components of the corresponding pixels of the second target image;
  • the linear primary color values of the multiple components of each pixel of the SDR image are photoelectrically converted according to the SDR photoelectric conversion function to obtain the non-linear primary color values of the multiple components of the output SDR image pixels, which can finally be output for display on an SDR display device.
  • the second target image may be set as an HDR image obtained by conversion using a second conversion curve defined by the second standard.
  • an HDR image obtained by conversion using a second conversion curve defined by the second standard is also referred to as an HDR image compliant with the second standard.
  • the third conversion function is the second conversion curve defined by the second standard. That is, according to the second conversion curve, the linear primary color values of the multiple components of the pixels of the second target image are converted into the nonlinear primary color values of the multiple components of the corresponding pixel points of the second target image.
  • the second target image may be set as HLG domain data
  • the HLG conversion curve converts the linear primary color value of the pixel of the second target image into the nonlinear primary color value of the pixel of the second target image in the HLG domain.
  • the conversion curve defined by the high dynamic range image standard includes, but is not limited to, the PQ conversion curve, the SLF conversion curve, and the HLG conversion curve, and is not limited.
  • the color space corresponding to the non-linear primary color value of the second target image is converted into the color space of the output second target image display device.
  • the color space corresponding to the non-linear primary color value of the second target image is the BT.2020 color space
  • the output second target image display device is the BT.709 color space
  • the color space is converted from the BT.2020 color space to the BT.709 color space, and then step 308 of the above-mentioned embodiment is performed.
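  • A minimal sketch of a BT.2020-to-BT.709 conversion follows; the 3x3 matrix is the conversion matrix given in ITU-R BT.2087 and is shown for illustration only (verify it against the standard), and the sketch applies the conversion to linear RGB values rather than being the specific method of this application.

```python
# Sketch of a BT.2020 -> BT.709 color-space conversion on linear RGB. The 3x3
# matrix is the conversion matrix given in ITU-R BT.2087, shown for illustration;
# verify it against the standard. Note that it is applied to linear values,
# not directly to the non-linear primary color values.

M_2020_TO_709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(rgb):
    """Convert one linear-light RGB triple from BT.2020 primaries to BT.709."""
    return tuple(sum(M_2020_TO_709[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Out-of-gamut results may need clipping to [0, 1] afterwards.
r709 = bt2020_to_bt709((0.25, 0.5, 0.75))
```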
  • in this way, it can be effectively ensured that the display effect of the target image after the dynamic range adjustment is consistent with the display effect of the image to be processed, and the probability of problems such as contrast change and loss of detail can be reduced, thereby reducing the influence on the display effect of the image.
  • the image to be processed in S301 is marked as the first image to be processed; before S301, the following step is further included: according to the fourth conversion function, the linear primary color values of the multiple components of the pixels of the second image to be processed are converted into the non-linear primary color values of the multiple components of the corresponding pixels of the first image to be processed.
  • the target image is the corresponding non-linear primary color value after the value of the SDR image is converted by the HDR photoelectric conversion function.
  • the first image to be processed may be an HDR image obtained by conversion using the first conversion curve defined by the first standard.
  • the fourth conversion function is the first conversion curve defined by the first standard. That is, according to the first conversion curve, the linear primary color values of the multiple components of the pixels of the second image to be processed are converted into the nonlinear primary color values of the multiple components of the corresponding pixels of the first image to be processed.
  • the PQ conversion curve converts the linear primary color value of the pixel of the second image to be processed into the non-linear primary color value of the pixel of the first image to be processed in the PQ domain.
  • the conversion curve defined by the high dynamic range image standard includes, but is not limited to, the PQ conversion curve, the SLF conversion curve, and the HLG conversion curve, and is not limited.
  • the method further includes the following step: according to the fifth conversion function, the non-linear primary color values of the multiple components of the pixels of the second to-be-processed image are converted into linear primary color values of the multiple components of the corresponding pixels of the second to-be-processed image.
  • electro-optical conversion is performed according to the SDR electro-optical conversion function to obtain the values of the multiple components of the SDR image pixels.
  • the second image to be processed may be an HDR image obtained by conversion using the second conversion curve defined by the second standard.
  • the fifth conversion function is the second conversion curve defined by the second standard; that is, according to the second conversion curve, the non-linear primary color values of the multiple components of each pixel of the second image to be processed are converted into the linear primary color values of the multiple components of the corresponding pixels of the second image to be processed.
  • for example, the second image to be processed is HLG domain data,
  • and the HLG conversion curve converts the non-linear primary color values of the pixels of the second image to be processed in the HLG domain into the linear primary color values of the pixels of the second image to be processed.
  • the conversion curve defined by the high dynamic range image standard includes, but is not limited to, the PQ conversion curve, the SLF conversion curve, and the HLG conversion curve, and is not limited.
  • the color space of the first to-be-processed image is converted into the color space of the second to-be-processed image display device.
  • the color space of the first image to be processed is BT.709 color space
  • the color space of the display device for the second image to be processed is the BT.2020 color space,
  • the color space of BT.709 is converted to the color space of BT.2020
  • the linear primary color values of the multiple components of the pixels of the second image to be processed are converted into the nonlinear primary color values of the multiple components of the corresponding pixels of the first image to be processed.
  • in this way, it can be effectively ensured that the display effect of the target image after the dynamic range adjustment is consistent with the display effect of the first image to be processed, and the probability of problems such as contrast change and loss of detail can be reduced, thereby reducing the influence on the display effect of the image.
  • the HDR input signal source includes floating point or half floating point linear EXR format HDR image data, PQ or Slog-3 (acquisition mode) collected HDR image data, and SLF HDR image data input.
  • G' = PQ_TF(max(0, min(G/10000, 1)))
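  • For reference, a sketch of the PQ_TF function used above follows; it is the SMPTE ST 2084 (PQ) non-linear encoding of a linear value normalized by 10000 nits, with the standard PQ constants.

```python
# Sketch of PQ_TF, the SMPTE ST 2084 (PQ) non-linear encoding used above:
# it maps a linear value normalized by 10000 nits to a non-linear value in [0, 1].
# The constants are the standard PQ constants.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_tf(e: float) -> float:
    e = max(0.0, min(e, 1.0))
    e_m1 = e ** M1
    return ((C1 + C2 * e_m1) / (1.0 + C3 * e_m1)) ** M2

# Example: G' = PQ_TF(max(0, min(G / 10000, 1))) for G = 300 nits gives about 0.62.
g_prime = pq_tf(300 / 10000)
```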
  • the conversion from the non-linear primary color value of Slog-3 to the non-linear primary color value of the SLF domain includes:
  • in is the input value
  • out is the output value
  • the conversion from the non-linear primary color value of the PQ domain to the non-linear primary color value of the SLF domain includes S31 and S32.
  • the adjustment of the HDR non-linear primary color values for SDR-compatible display is realized as follows:
  • the HDR non-linear primary color value can be processed by the SDR display compatible module to obtain the SDR non-linear primary color value to ensure that the SDR non-linear primary color value can be displayed correctly on the SDR device.
  • the display compatible module includes dynamic range adjustment, color adjustment, non-linear conversion, and ITU-R BT.1886EOTF reverse conversion.
  • SDR display compatible dynamic range adjustment includes:
  • the dynamic range adjustment process performs dynamic range adjustment on the input HDR nonlinear signals R', G', and B'according to the dynamic metadata to obtain signals R1, G1, and B1 suitable for the SDR dynamic range.
  • the embodiment of the present invention generates a dynamic range adjustment curve based on the dynamic metadata, uses the maximum value in the HDR nonlinear signal as a reference value and adjusts the dynamic range, calculates the ratio before and after the reference value adjustment as the adjustment coefficient c, and applies the adjustment coefficient to HDR non-linear signal.
  • the function of the curve dynamic range adjustment parameter is to adjust the dynamic range of the HDR nonlinear signal.
  • the HDR nonlinear signal includes but is not limited to the HDR nonlinear signal in the SLF domain and the HDR nonlinear signal in the PQ domain.
  • the specific expressions of the dynamic range adjustment parameters in the SLF domain and the PQ domain differ slightly; since there is a good correspondence between the HDR nonlinear signal in the SLF domain and that in the PQ domain,
  • the SLF-domain dynamic range adjustment parameters can easily be derived from the corresponding PQ-domain dynamic range adjustment parameters.
  • the formula corresponding to the dynamic range adjustment curve of the SLF domain is as follows:
  • the parameters p and m are used to control the curve shape and degree of bending, which are generated according to dynamic metadata; the parameters a and b are used to control the range of the curve, that is, the position of the starting point and the end point.
  • when the average value y is greater than 0.6, the parameter p is set to 3.2; when the average value is less than 0.1, the parameter p is set to 6.0; when the average value is between two adjacent entries in the table, the parameter p can be obtained by linear interpolation.
  • for example, when the average value y is between 0.55 and 0.6, the parameter p can be obtained by linear interpolation as follows: p = 4.0 + (y - 0.55)/(0.6 - 0.55) * (3.2 - 4.0).
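  • the piecewise-linear lookup of the parameter p could be sketched as follows, using the key points of the table in the description (y: 0.1, 0.25, 0.3, 0.55, 0.6 mapped to p: 6.0, 5.0, 4.5, 4.0, 3.2) and clamping outside that range:
```python
# Key points of the y -> p mapping taken from the description; values outside the
# key range are clamped, values in between are linearly interpolated.
Y_KEYS = [0.1, 0.25, 0.3, 0.55, 0.6]
P_VALS = [6.0, 5.0, 4.5, 4.0, 3.2]

def lookup_p(y: float) -> float:
    if y <= Y_KEYS[0]:
        return P_VALS[0]        # average below 0.1 -> p = 6.0
    if y >= Y_KEYS[-1]:
        return P_VALS[-1]       # average above 0.6 -> p = 3.2
    for i in range(1, len(Y_KEYS)):
        if y <= Y_KEYS[i]:
            t = (y - Y_KEYS[i - 1]) / (Y_KEYS[i] - Y_KEYS[i - 1])
            return P_VALS[i - 1] + t * (P_VALS[i] - P_VALS[i - 1])
    return P_VALS[-1]
```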
  • the parameter m is the gamma value of the output SDR display device, usually 2.4.
  • the parameters a and b can be calculated by solving the following equations:
  • L1 is the maximum non-linear reference of the HDR image
  • L2 is the minimum non-linear reference of the HDR image
  • L1′ is the maximum non-linear reference of the SDR image
  • L2′ is the minimum non-linear reference of the SDR image.
  • L1 and L2 are calculated from the average Y and standard deviation V in the dynamic metadata.
  • when Y+V is greater than 0.7, L1 takes 0.92; when Y+V is less than 0.2, L1 takes 0.85; when Y+V is between two adjacent entries in the table, L1 can be obtained by linear interpolation.
  • when Y-V is greater than 0.35, L2 takes 0.01; when Y-V is less than 0.1, L2 takes 0; when Y-V is between two adjacent entries in the table, L2 can be obtained by linear interpolation.
  • L1′ and L2′ are obtained by passing the maximum and minimum display brightness of the output SDR device through the HDR linear-to-non-linear transformation.
  • the maximum display brightness of a common SDR display device is 300 nits and the minimum display brightness is 0.1 nits, and the corresponding non-linear values are L1′ = 0.64 and L2′ = 0.12.
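  • a sketch of how L1, L2, L1′ and L2′ might be derived, reusing the same piecewise-linear lookup idea with the key points listed in the description and the example 300-nit / 0.1-nit SDR display values; this is illustrative only:
```python
def piecewise_linear(x, keys, vals):
    """Clamp outside the key range, interpolate linearly inside it."""
    if x <= keys[0]:
        return vals[0]
    if x >= keys[-1]:
        return vals[-1]
    for i in range(1, len(keys)):
        if x <= keys[i]:
            t = (x - keys[i - 1]) / (keys[i] - keys[i - 1])
            return vals[i - 1] + t * (vals[i] - vals[i - 1])
    return vals[-1]

def reference_values(mean_y, stddev_v):
    """L1/L2 from the dynamic metadata (Y, V); L1'/L2' for a typical 300-nit SDR display."""
    l1 = piecewise_linear(mean_y + stddev_v, [0.2, 0.5, 0.7], [0.85, 0.90, 0.92])
    l2 = piecewise_linear(mean_y - stddev_v, [0.1, 0.2, 0.35], [0.0, 0.005, 0.01])
    l1_sdr, l2_sdr = 0.64, 0.12   # 300 nits / 0.1 nits after the linear-to-non-linear transform
    return l1, l2, l1_sdr, l2_sdr
```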
  • SDR display compatible color adjustments include:
  • the color adjustment processes the HDR nonlinear signals R1, G1, B1 after the dynamic range adjustment according to the dynamic metadata and the adjustment coefficient c, and obtains the processed HDR nonlinear signals R2, G2, B2.
  • the luminance value Y1 of the image is calculated from the HDR non-linear signal values R1, G1, and B1; the calculation method refers to the Rec.709 and Rec.2020 luminance calculation methods.
  • when the average value y is less than 0.1, the coefficient d is set to 0.15; when the average value y is greater than 0.6, the coefficient d is set to 0.25; when the average value y is between two table values, the coefficient d can be calculated by linear interpolation.
  • the component adjustment coefficients AlphyR, AlphyG, and AlphyB are obtained by processing the ratios of the luminance value Y1 to the R1, G1, and B1 values (i.e. Y1/R1, Y1/G1, Y1/B1) with the power function F2.
  • when the average value y is less than 0.1, the coefficient e can be 1.2; when the average value y is greater than 0.6, the coefficient e can be 0.2; when the average value y is between two adjacent entries in the table, the coefficient e can be obtained by linear interpolation.
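  • a sketch of the color-adjustment coefficient computation described above; the Rec.2020 luma weights and the zero-guards are assumptions, and the final combination of these coefficients into R2, G2, B2 is not reproduced because the description does not spell it out:
```python
def color_adjust_coefficients(r1, g1, b1, c, d, e):
    """Compute the color-adjustment coefficients from the adjusted signal and coefficient c.

    Y1 is computed with the Rec.2020 luma weights here (an assumption; Rec.709
    weights could equally be used, as the description references both).
    """
    y1 = 0.2627 * r1 + 0.6780 * g1 + 0.0593 * b1
    alphy1 = c ** d                                    # F1(c) = c^d
    alphy_r = (y1 / r1) ** e if r1 > 0 else 1.0        # F2(x) = x^e applied to Y1/R1
    alphy_g = (y1 / g1) ** e if g1 > 0 else 1.0        # ... to Y1/G1
    alphy_b = (y1 / b1) ** e if b1 > 0 else 1.0        # ... to Y1/B1
    return alphy1, alphy_r, alphy_g, alphy_b
```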
  • adjustment of the HDR non-linear primary color value for HDR-compatible display is realized, including:
  • HDR non-linear signals R', G', B' can be processed by display adaptation adjustment to obtain HDR non-linear signals R", G", B", to ensure that the HDR non-linear signals can be displayed correctly on different HDR devices.
  • the HDR display compatibility adjustment module includes dynamic range adjustment and color adjustment.
  • the dynamic range adjustment processing can be implemented on the basis of the method described in embodiment S3 with the following adjustments: L1′ and L2′ are obtained by passing the maximum and minimum display brightness of the output HDR device through the HDR linear-to-non-linear transformation.
  • the coefficients p and m both need to be obtained from the image dynamic metadata through the lookup table of embodiment S3, and the content of the table entries needs to be obtained through experimental calibration for different HDR display devices.
  • the color range adjustment processing can be realized on the basis of the method described in embodiment S3 with the following adjustments: the coefficients d and e both need to be obtained from the image dynamic metadata through a lookup table, and the content of the table entries needs to be obtained through experimental calibration for different HDR display devices.
  • the terminal device may include a hardware structure and/or software module, and realize the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function among the above-mentioned functions is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraint conditions of the technical solution.
  • an embodiment of the present application further provides an image processing apparatus 1000.
  • the image processing apparatus 1000 may be a mobile terminal or any device with an image processing function.
  • the image processing device 1000 may include modules that perform one-to-one correspondence of the methods/operations/steps/actions in the foregoing method embodiments.
  • the modules may be hardware circuits, software, or a combination of hardware circuits.
  • the image processing device 1000 may include a determining module 1001 and a processing module 1002.
  • the hardware circuit is referred to as hardware or as the c pipeline (cpipe).
  • the determining module 1001 is used to determine the maximum value of the primary color values of the multiple components of the pixels of the image to be processed, and to determine the ratio having a mapping relationship with the maximum value according to the first look-up table, wherein the first look-up table includes the mapping relationship between the preset ratio and the preset primary color value.
  • the processing module 1002 performs dynamic range adjustment on the primary color values of the multiple components of the pixel according to the ratio having a mapping relationship with the maximum value, to obtain the target image; the determining module 1001 is also used to determine the mapping relationship through the following steps: obtaining the conversion value of the preset primary color value according to the first conversion function, and using the ratio of the conversion value to the preset primary color value as the preset ratio.
  • the determination module 1001 and the processing module 1002 may be hardware circuits.
  • the determining module 1001 is specifically configured to: when the preset primary color values include the maximum value, determine the first ratio corresponding to the maximum value according to the mapping relationship; when the preset primary color values do not include the maximum value, determine a first preset primary color value and a second preset primary color value in the first look-up table, determine, according to the mapping relationship, a first ratio and a second ratio corresponding to the first preset primary color value and the second preset primary color value respectively, and perform interpolation on the first ratio and the second ratio to obtain the ratio corresponding to the maximum value.
  • the determination module 1001 may be a hardware circuit.
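  • a minimal sketch of the interpolated table lookup performed by the determining module, following the linear-interpolation steps given later in the description (index = max // step, weight = max % step); integer arithmetic is used to mirror the fixed-point hardware path:
```python
def lookup_ratio(lut, max_value, step):
    """Return the ratio mapped to max_value, interpolating linearly between table entries.

    lut[i] holds the preset ratio for the preset primary color value i * step.
    """
    idx = max_value // step                     # index of the lower table entry
    dec = max_value % step                      # interpolation weight
    data1 = lut[idx]                            # first ratio
    data2 = lut[min(idx + 1, len(lut) - 1)]     # second ratio (clamped at the table end)
    return (data1 * (step - dec) + data2 * dec) // step
```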
  • the first look-up table value is a fixed-point value
  • the determining module 1001 is further configured to determine the first table entry value in the following manner: determine the dynamic parameters of the first conversion function; inversely quantize the fixed-point value according to the value range determined by the bit width of the primary color value, to obtain a floating-point value; convert the floating-point value into a conversion value based on the first conversion function with the determined dynamic parameters; and quantize the ratio of the conversion value to the floating-point value according to a preset quantization coefficient, to obtain the first table entry value.
  • the determination module 1001 may be software.
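  • a software sketch of this table-entry generation, assuming a 12-bit primary color value, a step of 4 and a power-of-two quantization coefficient, and substituting the smallest non-zero value for the zero code (all of these choices are assumptions); `convert` stands in for the first conversion function with its dynamic parameters already determined:
```python
def build_lut(convert, bit_width=12, step=4, quant_shift=10):
    """Build fixed-point table entries: quantized ratios convert(x)/x for sampled look-up values."""
    max_code = (1 << bit_width) - 1                          # value range from the bit width
    lut = []
    for index in range(0, (max_code + 1) // step):
        code = index * step                                  # fixed-point look-up value
        x = code / max_code if code else 1.0 / max_code      # inverse quantization (avoid x = 0)
        ratio = convert(x) / x                               # conversion value / floating-point value
        lut.append(int(round(ratio * (1 << quant_shift))))   # quantize with 2^quant_shift
    return lut
```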
  • the determining module 1001 is further configured to determine the first look-up table corresponding to the first value range in which the maximum value is located; wherein the value range determined by the bit width of the primary color value includes the first value range and a second value range corresponding to a second look-up table.
  • the determination module 1001 may be a hardware circuit.
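  • a sketch of the two-range table selection combined with the per-component scaling, assuming a single threshold separating the two value ranges (the description uses 256 for 12-bit primary color values) and hypothetical per-range quantization coefficients a_low / a_high standing in for A1 / A2:
```python
def adjust_pixel(r, g, b, low_lut, high_lut, lookup, a_low, a_high, threshold=256):
    """Scale (R, G, B) with the ratio from the table covering max(R, G, B).

    lookup(lut, value) performs an interpolated table lookup such as the one sketched
    earlier; a_low / a_high are the per-range quantization coefficients (assumed names).
    """
    max_value = max(r, g, b)
    if max_value < threshold:
        c, a = lookup(low_lut, max_value), a_low
    else:
        # Entries of the table for the upper range are indexed relative to the threshold.
        c, a = lookup(high_lut, max_value - threshold), a_high
    return (r * c) // a, (g * c) // a, (b * c) // a
```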
  • the determining module 1001 and the processing module 1002 may also be used to perform other corresponding steps or operations in the foregoing method embodiments, which will not be repeated here.
  • the division of modules in the embodiments of this application is illustrative, and it is only a logical function division. In actual implementation, there may be other division methods.
  • the functional modules in the various embodiments of this application can be integrated into one processor, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
  • an embodiment of the present application further provides an image processing apparatus 1100.
  • the image processing apparatus 1100 includes a processor 1101.
  • the processor 1101 is used to call a group of programs to enable the above method embodiments to be executed.
  • the image processing apparatus 1100 further includes a memory 1102, and the memory 1102 is configured to store program instructions and/or data executed by the processor 1101.
  • the memory 1102 and the processor 1101 are coupled.
  • the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which can be electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • the processor 1101 may operate in cooperation with the memory 1102.
  • the processor 1101 may execute program instructions stored in the memory 1102.
  • the memory 1102 may be included in the processor 1101.
  • the image processing device 1100 may be a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the chip system is an application specific integrated circuit (ASIC) chip
  • the hardware part of the image processing apparatus 1100 is a C model (cmode) that simulates the ASIC chip, and the above-mentioned cmode can achieve bit-exact consistency with the effect of the ASIC chip.
  • the processor 1101 is configured to: determine the maximum value of the primary color values of the multiple components of the pixels of the image to be processed; and determine the ratio having a mapping relationship with the maximum value according to the first look-up table, wherein the first look-up table includes the mapping relationship between the preset ratio and the preset primary color value.
  • the processor 1101 is further configured to perform dynamic range adjustment on the primary color values of the multiple components of the pixel according to the ratio having a mapping relationship with the maximum value, to obtain a target image; the processor 1101 is also used to determine the mapping relationship through the following steps: obtaining the conversion value of the preset primary color value according to the first conversion function, and using the ratio of the conversion value to the preset primary color value as the preset ratio.
  • the processor 1101 may also be used to execute other corresponding steps or operations in the foregoing method embodiments, which will not be repeated here.
  • the processor 1101 may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in combination with the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the memory 1102 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), and may also be a volatile memory, such as a random-access memory (RAM).
  • the memory is any other medium that can be used to carry or store desired program codes in the form of instructions or data structures and that can be accessed by a computer, but is not limited to this.
  • the memory in the embodiments of the present application may also be a circuit or any other device capable of realizing a storage function for storing program instructions and/or data.
  • the embodiments of the present application also provide a chip including a processor, which is used to support the image processing device to implement the functions involved in the foregoing method embodiments.
  • the chip is connected to a memory or the chip includes a memory, and the memory is used to store the necessary program instructions and data of the communication device.
  • the embodiment of the present application provides a computer-readable storage medium that stores a computer program, and the computer program includes instructions for executing the foregoing method embodiments.
  • the embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the foregoing method embodiments.
  • this application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, where the instruction device implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, so that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

本申请提供一种图像处理方法及装置,该方法包括:确定待处理图像的像素的多个分量的基色值中的最大值;根据第一查找表,确定与所述最大值具有映射关系的比值,其中,所述第一查找表包括预设比值和预设基色值的映射关系;根据确定后的所述比值,对所述像素的所述多个分量的基色值分别进行动态范围调整,以得到目标图像;其中,所述映射关系的确定,包括:根据第一转换函数获得所述预设基色值的转换值;将所述转换值与所述预设基色值的比值作为所述预设比值。各个步骤可以通过终端设备的硬件电路来实现,可以实现整型数据的处理,从而能够将图像处理流程的执行过程落实到硬件电路,提高图像色彩处理方法的实际应用可能性。

Description

一种图像处理方法及装置 技术领域
本申请涉及图像处理技术领域,特别涉及一种图像处理方法及装置。
背景技术
光学数字成像过程是将真实场景的光辐射通过图像传感器转化为电信号,并以数字图像的方式保存下来。图像显示的目的是通过显示设备重现一幅数字图像所描述的真实场景。从而使用户获得与其直接观察真实场景相同的视觉感知。
动态范围是场景中最亮物体与最暗物体之间的亮度比率,也就是图像从“最亮”到“最暗”之间灰度划分的等级数。动态范围越大,所能表示的层次越丰富,所包含的色彩空间越广。当图像的动态范围与显示设备支持的动态范围不匹配时,需要对动态范围进行调整。如何对动态范围进行调整是需要解决的问题。
发明内容
本申请提供一种图像处理方法及装置,用以实现对图像进行动态范围调整,提高图像质量。
第一方面,提供一种图像处理方法,该方法的执行主体可以是终端设备,该方法具体包括以下步骤:确定待处理图像的像素的多个分量的基色值中的最大值;根据第一查找表,确定与所述最大值具有映射关系的比值,其中,所述第一查找表包括预设比值和预设基色值的映射关系;根据确定后的所述比值,对所述像素的所述多个分量的基色值分别进行动态范围调整,以得到目标图像;其中,所述映射关系的确定,可由以下步骤实现:根据第一转换函数获得所述预设基色值的转换值;将所述转换值与所述预设基色值的比值作为所述预设比值。在图像转换过程中,利用预设的转换曲线或第一转换函数实现图像转换,使得不同动态范围的图像更好兼容不同显示能力的显示设备。举例来说,可以实现图像在SDR显示设备以及具有不同显示能力的HDR显示设备上兼容显示,且有效地保证图像显示效果一致,有助于保证对比度不变,避免细节丢失,进而提高或保持图像的显示效果。其中,各个步骤可以通过终端设备的硬件电路来实现,例如,通过第一查找表确定与最大值具有映射关系的比值的方式,可以实现整型数据的处理,从而能够将图像处理流程的执行过程落实到硬件电路,提高图像色彩处理方法的实际应用可能性。
在一个可能的设计中,所述根据第一查找表,确定与所述最大值具有映射关系的比值,可以包括以下几种情况:当所述预设基色值包括所述最大值时:根据所述映射关系,确定所述最大值对应的所述第一比值;当所述预设基色值不包括所述最大值时:在所述第一查找表中确定第一预设基色值和第二预设基色值;根据所述映射关系,分别确定所述第一预设基色值和所述第二预设基色值对应的第一比值和第二比值;对所述第一比值和所述第二比值进行插值运算,以得到所述最大值对应的比值。这样,通过插值法确定第一色彩调整系统,能够降低第一查找表的表项数值的数量,使得第一查找表占用空间缩小,降低硬件电路的复杂度。
在一个可能的设计中,所述插值运算包括以下任一类型的运算:线性插值、近插值、 双线性二次插值、三次插值或Lanczos插值。
在一个可能的设计中,所述第一查找表的所述第一表项数值为:通过所述第一转换函数获得的所述第一查找表的第一查表值的转换值,与所述第一查表值的比值。
在一个可能的设计中,所述第一查表值为定点数值;所述第一表项数值的确定,可以通过以下方式实现:确定所述第一转换函数的动态参数;根据由所述基色值的比特位宽确定的取值范围,对所述定点数值进行反量化,以得到浮点数值;基于确定所述动态参数后的第一转换函数,将所述浮点数值转换为转换值;根据预设的量化系数,对所述转换值和所述浮点数值的比值进行量化,以得到所述第一表项数值。通过上述方法确定第一查找表的第一表项数值,能够使得第一查找表的查表值和第一表项数值均可以为定点数值,落实了硬件电路的实现可能性。上述方法可以通过软件实现,从而通过软件硬件分离的方式,提高了图像处理流程的实际可用性。并且本部分方法通过软件实现,可以随时根据图像处理的效果来更新该软件流程,使得可适配性高,效果可调性好。
在一个可能的设计中,所述第一查表值基于所述第一查找表的索引值和所述第一查找表的索引值间的步长确定。步长可以是等于1或大于1的整数。
在一个可能的设计中,所述方法还包括:确定所述最大值所在的第一取值范围对应的所述第一查找表;其中,由所述基色值的比特位宽确定的取值范围包括所述第一取值范围和第二查找表对应的第二取值范围。
在一个可能的设计中,所述第二查找表的第二表项数值为:通过所述第一转换函数获得的所述第二查找表的第二查表值的转换值,与所述第二查表值的比值。第二查找表与第一查找表的生成方法类似,可以相互参照。
在一个可能的设计中,所述第一取值范围的最小值大于所述第二取值范围的最大值;对应的,所述第一查表值基于所述第一查找表的索引值、所述第一查找表的索引值间的步长和所述第二取值范围的最大值确定。
在一个可能的设计中,所述第一查找表的索引值间的步长和所述第二查找表的索引值间的步长不同。
在一个可能的设计中,根据所述第一比值,对所述像素的所述多个分量的基色值分别进行动态范围调整,可以有以下几种情况:当所述待处理图像的动态范围大于所述目标图像的动态范围时,根据所述第一比值,对所述像素的所述多个分量的基色值进行缩小动态范围的调整;或者,当所述待处理图像的动态范围小于所述目标图像的动态范围时,根据所述第一比值,对所述像素的所述多个分量的基色值进行扩大动态范围的调整。
在一个可能的设计中,根据所述第一比值,对所述像素的所述多个分量的基色值分别进行动态范围调整,包括以下步骤:分别计算所述第一比值和所述像素的所述多个分量的基色值的乘积,以得到所述像素的所述多个分量的调整后的基色值。可选的,还可以根据该第一比值进行其他的动态压缩处理方法,只要使得能对待处理图像的像素的多个分量进行动态范围缩小或扩大调整处理即可,目的是能较好兼容目标图像的显示设备显示即可,具体此处不做限定。
在一个可能的设计中,所述待处理图像位于待处理图像序列中,所述目标图像位于目标图像序列中,所述第一转换函数的动态参数的确定包括:根据以下信息中的至少一种确定所述动态参数:所述待处理图像或所述待处理图像序列的统计信息;所述待处理图像或所述待处理图像序列范围第一参考值;所述待处理图像或所述待处理图像序列范围第二参 考值;所述目标图像或所述目标图像序列范围第一参考值;所述目标图像或所述目标图像序列范围第二参考值。本申请不再利用固定不变的静态参数,而是利用了动态参数,根据第一转换曲线进行图像的动态压缩处理,相比对图像进行动态范围缩小调整的过程中使用静态参数,可以有效地保证动态范围调整后显示效果的一致性,减少出现对比度变化、细节丢失等问题的概率,进而减少对图像的显示效果的影响。
在一个可能的设计中,所述待处理图像或所述待处理图像序列的统计信息至少包括以下信息中的一种:所述待处理图像或所述待处理图像序列的像素点的至少一个分量的基色值中的最大值、最小值、平均值、标准差以及直方图分布信息。
在一个可能的设计中,所述待处理图像或所述待处理图像序列范围第一参考值,可以包括以下任一种:用于显示所述待处理图像的显示设备的亮度最大值;或者,根据所述待处理图像或所述待处理图像序列的统计信息,查找第一预置列表所得到的值;或者,第一预设值。
在一个可能的设计中,所述待处理图像或所述待处理图像序列范围第二参考值,可以包括以下任一种:用于显示所述待处理图像的显示设备的亮度最小值;或者,根据所述待处理图像或所述待处理图像序列的统计信息,查找第二预置列表所得到的值;或者,第二预设值。
在一个可能的设计中,所述目标图像或所述目标图像序列范围第一参考值,可以包括以下任一种:用于显示所述目标图像的显示设备的亮度最大值;或者,第三预设值。
在一个可能的设计中,所述目标图像或所述目标图像序列范围第二参考值,可以包括以下任一种:用于显示所述目标图像的显示设备的亮度最小值;或者,第四预设值。
在一个可能的设计中,所述第一转换函数包括S型转换曲线或反S型转换曲线。
在一个可能的设计中,所述S型转换曲线为斜率先上升后下降的曲线。
在一个可能的设计中,所述S型转换曲线包含一段或多段曲线。
在一个可能的设计中,所述S型转换曲线符合以下公式:
Figure PCTCN2020089496-appb-000001
其中,所述L为所述最大值,所述L′为所述转换值,所述a、b、p和m为所述S型转换曲线的动态参数。
在一个可能的设计中,所述p和所述m由根据所述待处理图像或所述待处理图像所在图像序列的统计信息,查找第一预置列表获得;
所述a和所述b通过以下公式计算获得:
Figure PCTCN2020089496-appb-000002
Figure PCTCN2020089496-appb-000003
其中,所述L 1为所述待处理图像或所述待处理图像所在图像序列范围的第一参考值,所述L 2为所述待处理图像或所述待处理图像所在图像序列范围的第二参考值,所述L 1为所述目标图像或所述目标图像序列范围的第一参考值,所述L 2为所述目标图像或所述目标图像序列范围的第二参考值。
在一个可能的设计中,所述反S型转换曲线为斜率先下降后上升的曲线。
在一个可能的设计中,所述反S型转换曲线包含一段或多段曲线。
在一个可能的设计中,所述反S型转换曲线符合以下公式:
Figure PCTCN2020089496-appb-000004
其中,所述L为所述目标图像的像素的多个分量的基色值中的最大值,所述L'为所述目标图像的像素的多个分量的基色值中的最大值的转换值,所述a、b、p以及m参数为所述反S型转换曲线的动态参数。
在一个可能的设计中,所述p以及m参数由查找第二预置列表的方式获得;所述a以及b参数通过以下公式计算:
Figure PCTCN2020089496-appb-000005
Figure PCTCN2020089496-appb-000006
其中,所述L 1为所述待处理图像或所述待处理图像所在图像序列范围第一参考值,所述L 2为所述待处理图像或所述待处理图像所在图像序列范围第二参考值,所述L 1为所述目标图像或所述目标图像序列范围第一参考值,所述L 2为所述目标图像或所述目标图像序列范围第二参考值。
第二方面,提供一种图像处理装置,该装置可以是终端设备,也可以是终端设备中的装置(例如芯片、或者芯片系统、或者电路),或者是能够和终端设备匹配使用的装置。一种设计中,该装置可以包括执行第一方面中所描述的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该装置可以包括确定模块和处理模块。示例性地:
确定模块,用于确定待处理图像的像素的多个分量的基色值中的最大值;以及用于根据第一查找表,确定与所述最大值具有映射关系的比值,其中,所述第一查找表包括预设比值和预设基色值的映射关系;处理模块,用于根据所述与所述最大值具有映射关系的比值,对所述像素的所述多个分量的基色值分别进行动态范围调整,以得到目标图像;其中,所述确定模块,还用于通过以下步骤确定映射关系:根据第一转换函数获得所述预设基色值的转换值;将所述转换值与所述预设基色值的比值作为所述预设比值。
在一个可能的设计中,在所述根据第一查找表,确定与所述最大值具有映射关系的比值时,所述确定模块具体用于:当所述预设基色值包括所述最大值时:根据所述映射关系,确定所述最大值对应的所述第一比值;当所述预设基色值不包括所述最大值时:在所述第一查找表中确定第一预设基色值和第二预设基色值;根据所述映射关系,分别确定所述第一预设基色值和所述第二预设基色值对应的第一比值和第二比值;对所述第一比值和所述第二比值进行插值运算,以得到所述最大值对应的比值。
在一个可能的设计中,所述插值运算包括以下任一类型的运算:线性插值、近插值、双线性二次插值、三次插值或Lanczos插值。
在一个可能的设计中,所述第一查找表的所述第一表项数值为:通过所述第一转换函数获得的所述第一查找表的第一查表值的转换值,与所述第一查表值的比值。
在一个可能的设计中,所述第一查表值为定点数值;所述确定模块还用于通过以下方 式确定第一表项数值:确定所述第一转换函数的动态参数;根据由所述基色值的比特位宽确定的取值范围,对所述定点数值进行反量化,以得到浮点数值;基于确定所述动态参数后的第一转换函数,将所述浮点数值转换为转换值;根据预设的量化系数,对所述转换值和所述浮点数值的比值进行量化,以得到所述第一表项数值。
在一个可能的设计中,所述第一查表值基于所述第一查找表的索引值和所述第一查找表的索引值间的步长确定。步长可以是等于1或大于1的整数。
在一个可能的设计中,所述确定模块还用于:确定所述最大值所在的第一取值范围对应的所述第一查找表;其中,由所述基色值的比特位宽确定的取值范围包括所述第一取值范围和第二查找表对应的第二取值范围。
在一个可能的设计中,所述第二查找表的第二表项数值为:通过所述第一转换函数获得的所述第二查找表的第二查表值的转换值,与所述第二查表值的比值。第二查找表与第一查找表的生成方法类似,可以相互参照。
在一个可能的设计中,所述第一取值范围的最小值大于所述第二取值范围的最大值;对应的,所述第一查表值基于所述第一查找表的索引值、所述第一查找表的索引值间的步长和所述第二取值范围的最大值确定。
在一个可能的设计中,所述第一查找表的索引值间的步长和所述第二查找表的索引值间的步长不同。
在一个可能的设计中,在根据所述第一比值,对所述像素的所述多个分量的基色值分别进行动态范围调整时,所述处理模块具体用于:当所述待处理图像的动态范围大于所述目标图像的动态范围时,根据所述第一比值,对所述像素的所述多个分量的基色值进行缩小动态范围的调整;或者,当所述待处理图像的动态范围小于所述目标图像的动态范围时,根据所述第一比值,对所述像素的所述多个分量的基色值进行扩大动态范围的调整。
在一个可能的设计中,在根据所述第一比值,对所述像素的所述多个分量的基色值分别进行动态范围调整时,所述处理模块具体用于:分别计算所述第一比值和所述像素的所述多个分量的基色值的乘积,以得到所述像素的所述多个分量的调整后的基色值。可选的,所述处理模块可以还用于根据该第一比值进行其他的动态压缩处理方法。只要使得能对待处理图像的像素的多个分量进行动态范围缩小或扩大调整处理即可,目的是能较好兼容目标图像的显示设备显示即可,具体此处不做限定。
在一个可能的设计中,所述待处理图像位于待处理图像序列中,所述目标图像位于目标图像序列中,所述第一转换函数的动态参数的确定包括:根据以下信息中的至少一种确定所述动态参数:所述待处理图像或所述待处理图像序列的统计信息;所述待处理图像或所述待处理图像序列范围第一参考值;所述待处理图像或所述待处理图像序列范围第二参考值;所述目标图像或所述目标图像序列范围第一参考值;所述目标图像或所述目标图像序列范围第二参考值。本申请不再利用固定不变的静态参数,而是利用了动态参数,根据第一转换曲线进行图像的动态压缩处理,相比对图像进行动态范围缩小调整的过程中使用静态参数,可以有效地保证动态范围调整后显示效果的一致性,减少出现对比度变化、细节丢失等问题的概率,进而减少对图像的显示效果的影响。
在一个可能的设计中,所述待处理图像或所述待处理图像序列的统计信息至少包括以下信息中的一种:所述待处理图像或所述待处理图像序列的像素点的至少一个分量的基色值中的最大值、最小值、平均值、标准差以及直方图分布信息。
在一个可能的设计中,所述待处理图像或所述待处理图像序列范围第一参考值,可以包括以下任一种:用于显示所述待处理图像的显示设备的亮度最大值;或者,根据所述待处理图像或所述待处理图像序列的统计信息,查找第一预置列表所得到的值;或者,第一预设值。
在一个可能的设计中,所述待处理图像或所述待处理图像序列范围第二参考值,可以包括以下任一种:用于显示所述待处理图像的显示设备的亮度最小值;或者,根据所述待处理图像或所述待处理图像序列的统计信息,查找第二预置列表所得到的值;或者,第二预设值。
在一个可能的设计中,所述目标图像或所述目标图像序列范围第一参考值,可以包括以下任一种:用于显示所述目标图像的显示设备的亮度最大值;或者,第三预设值。
在一个可能的设计中,所述目标图像或所述目标图像序列范围第二参考值,可以包括以下任一种:用于显示所述目标图像的显示设备的亮度最小值;或者,第四预设值。
在一个可能的设计中,所述第一转换函数包括S型转换曲线或反S型转换曲线。
在一个可能的设计中,所述S型转换曲线为斜率先上升后下降的曲线。
在一个可能的设计中,所述S型转换曲线包含一段或多段曲线。
在一个可能的设计中,所述S型转换曲线符合以下公式:
Figure PCTCN2020089496-appb-000007
其中,所述L为所述最大值,所述L′为所述转换值,所述a、b、p和m为所述S型转换曲线的动态参数。
在一个可能的设计中,所述p和所述m由根据所述待处理图像或所述待处理图像所在图像序列的统计信息,查找第一预置列表获得;
所述a和所述b通过以下公式计算获得:
Figure PCTCN2020089496-appb-000008
Figure PCTCN2020089496-appb-000009
其中,所述L 1为所述待处理图像或所述待处理图像所在图像序列范围的第一参考值,所述L 2为所述待处理图像或所述待处理图像所在图像序列范围的第二参考值,所述L 1为所述目标图像或所述目标图像序列范围的第一参考值,所述L 2为所述目标图像或所述目标图像序列范围的第二参考值。
在一个可能的设计中,所述反S型转换曲线为斜率先下降后上升的曲线。
在一个可能的设计中,所述反S型转换曲线包含一段或多段曲线。
在一个可能的设计中,所述反S型转换曲线符合以下公式:
Figure PCTCN2020089496-appb-000010
其中,所述L为所述目标图像的像素的多个分量的基色值中的最大值,所述L'为所述目标图像的像素的多个分量的基色值中的最大值的转换值,所述a、b、p以及m参数为所述反S型转换曲线的动态参数。
在一个可能的设计中,所述p以及m参数由查找第二预置列表的方式获得;所述a以 及b参数通过以下公式计算:
Figure PCTCN2020089496-appb-000011
Figure PCTCN2020089496-appb-000012
其中,所述L 1为所述待处理图像或所述待处理图像所在图像序列范围第一参考值,所述L 2为所述待处理图像或所述待处理图像所在图像序列范围第二参考值,所述L 1为所述目标图像或所述目标图像序列范围第一参考值,所述L 2为所述目标图像或所述目标图像序列范围第二参考值。
第二方面及各个可能的设计的有益效果可以参考第一方面对应的效果,在此不再赘述。
第三方面,本申请实施例提供一种图像处理装置,所述装置包括处理器,处理器用于调用一组程序、指令或数据,执行上述第一方面或第一方面的任一可能的设计所描述的方法。所述装置还可以包括存储器,用于存储处理器调用的程序、指令或数据。所述存储器与所述处理器耦合,所述处理器执行所述存储器中存储的、指令或数据时,可以实现上述第一方面或任一可能的设计描述的方法。
第四方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,还可以包括存储器,用于实现上述第一方面或第一方面中任一种可能的设计中所述的方法。该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
第五方面,本申请实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机可读指令,当所述计算机可读指令在计算机上运行时,使得如第一方面或第一方面中任一种可能的设计中所述的方法被执行。
第六方面,本申请实施例中还提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面或第一方面的任一可能的设计中所述的方法。
附图说明
图1本申请实施例中终端设备的结构示意图;
图2为本申请实施例中终端设备对图像处理的示意图;
图3为本申请实施例中图像处理方法流程示意图;
图4为本申请实施例中一种S型转换曲线一个示意图;
图5为本申请实施例中一种由2段曲线组成的S型转换曲线一个示意图;
图6为本申请实施例中一种反S型转换曲线一个示意图;
图7为本申请实施例中一种由2段曲线组成的反S型转换曲线一个示意图;
图8为本申请实施例中RGB格式图像处理方法流程示意图之一;
图9为本申请实施例中RGB格式图像处理方法流程示意图之二;
图10为本申请实施例中图像色彩处理装置结构示意图之一;
图11为本申请实施例中图像色彩处理装置结构示意图之二。
具体实施方式
本申请实施例提供一种图像处理方法及装置,以期实现对图像进行动态范围的调整,提高图像质量。其中,方法和装置是基于相同或相似技术构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
需要说明的是,本申请实施例的描述中,“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。本申请中所涉及的至少一个是指一个或多个;多个,是指两个或两个以上。另外,需要理解的是,在本申请的描述中,“第一”、“第二”、“第三”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。本申请实施例公式中乘法可以用“*”表示,也可以用“×”表示。
本申请实施例提供的基于图像色彩处理方法及装置可应用于电子设备。该电子设备可以是移动终端(mobile terminal)、移动台(mobile station,MS)、用户设备(user equipment,UE)等移动设备,也可以是固定设备,如固定电话、台式电脑等,还可以是视频监控器。该电子设备具有图像色彩处理功能。该电子设备还可以选择性地具有无线连接功能,以向用户提供语音和/或数据连通性的手持式设备、或连接到无线调制解调器的其他处理设备,比如:该电子设备可以是移动电话(或称为“蜂窝”电话)、具有移动终端的计算机等,还可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置,当然也可以是可穿戴设备(如智能手表、智能手环等)、平板电脑、个人电脑(personal computer,PC)、个人数字助理(personal digital assistant,PDA)、销售终端(Point of Sales,POS)等。本申请实施例中不妨以一种终端设备为例进行说明。
下面将结合附图,对本申请实施例进行详细描述。
图1所示为本申请实施例涉及的终端设备100的一种可选的硬件结构示意图。
如图1所示,终端设备100主要包括芯片组,其中,芯片组可以用于对图像色彩进行处理,例如芯片组包括图像信号处理器(image signal processor,ISP),ISP对图像色彩进行处理。可选的,终端设备100中的芯片组还包括其他模块,终端设备100还可以包括外设装置。具体如下所述。图1中实线框中的电源管理单元(power management unit,PMU)、语音数据编解码器(codec)、短距离模块和射频(radio frequency,RF)、运算处理器、随机存储器(random-access memory,RAM)、输入/输出(input/output,I/O)、显示接口、传感器接口(Sensor hub)、基带通信模块等各部件组成芯片或芯片组。USB接口、存储器、显示屏、电池/市电、耳机/扬声器、天线、传感器(Sensor)等部件可以理解为是外设装置。芯片组内的运算处理器、RAM、I/O、显示接口、ISP、Sensor hub、基带等部件可组成片上系统(system-on-a-chip,SOC),为芯片组的主要部分。SOC内的各部件可以全部集成为一个完整芯片,或者SOC内也可以是部分部件集成,另一部分部件不集成,比如SOC内的基带通信模块,可以与其他部分不集成在一起,成为独立部分。SOC中的各部件可通过总线 或其他连接线互相连接。SOC外部的PMU、语音codec、RF等通常包括模拟电路部分,因此经常在SOC之外,彼此并不集成。
图1中,PMU用于外接市电或电池,为SOC供电,可以利用市电为电池充电。语音codec作为声音的编解码单元外接耳机或扬声器,实现自然的模拟语音信号与SOC可处理的数字语音信号之间的转换。短距离模块可包括无线保真(wireless fidelity,WiFi)和蓝牙,也可选择性包括红外、近距离无线通信(near field communication,NFC)、收音机(FM)或全球定位系统(global positioning system,GPS)模块等。RF与SOC中的基带通信模块连接,用来实现空口RF信号和基带信号的转换,即混频。对手机而言,接收是下变频,发送则是上变频。短距离模块和RF都可以有一个或多个用于信号发送或接收的天线。基带用来做基带通信,包括多种通信模式中的一种或多种,用于进行无线通信协议的处理,可包括物理层(层1)、媒体接入控制(medium access control,MAC)(层2)、无线资源控制(radio resource control,RRC)(层3)等各个协议层的处理,可支持各种蜂窝通信制式,例如长期演进(long term evolution,LTE)通信、或5G新空口(new radio,NR)通信等。Sensor hub是SOC与外界传感器的接口,用来收集和处理外界至少一个传感器的数据,外界的传感器例如可以是加速计、陀螺仪、控制传感器、图像传感器等。运算处理器可以是通用处理器,例如中央处理器(central processing unit,CPU),还可以是一个或多个集成电路,例如:一个或多个特定集成电路(application specific integrated circuit,ASIC),或,一个或多个数字信号处理器(digital singnal processor,DSP),或微处理器,或,一个或者多个现场可编程门阵列(field programmable gate array,FPGA)等。运算处理器可包括一个或多个核,并可选择性调度其他单元。RAM可存储一些计算或处理过程中的中间数据,如CPU和基带的中间计算数据。ISP用于图像传感器采集的数据进行处理。I/O用于SOC与外界各类接口进行交互,如可与用于数据传输的通用串行总线(universal serial bus,USB)接口进行交互等。存储器可以是一个或一组芯片。显示屏可以是触摸屏,通过显示接口与总线连接,显示接口可以是进行图像显示前的数据处理,比如需要显示的多个图层的混叠、显示数据的缓存或对屏幕亮度的控制调整等。
可以理解的是,本申请实施例中涉及的图像信号处理器可以是一个或一组芯片,即可以是集成的,也可以是独立的。例如,终端设备100中包括的图像信号处理器可以是集成在运算处理器中的集成ISP芯片。
图2示出终端设备对图像处理的示意图,终端设备可以对输入的待处理图像进行图像处理,图像处理过程可以包括动态范围调整的处理,还可以包括图像色彩处理等其他处理过程,终端设备输出处理后的目标图像。结合图1所示的终端设备,终端设备中的ISP可以对图像进行动态范围调整,得到处理后的目标图像。
为了更好的理解本申请实施例的方案,首先对本申请实施例涉及到的概念术语进行解释说明。
1)查找表(lookup table,LUT):
查找表可以为本领域技术人员可以理解的任意形式的查找表。可选的,本申请实施例中采用一维(1D)查找表。可选的,查找表中包括一系列的输入数据与输出数据,输入数据与输出数据呈一一对应的关系。其中,查找表中的输出数据可以是以表项数值的形式体现,输入数据可以表示为查表值。查找表中可以不显示查表值,查表值以表项索引或表项下标的形式表示。即查找表中包括一个或多个表项数值,每一个表项数值对应一个查表值, 通过输入查表值,可以得到与该查表值对应的表项数值。
可以理解的是,查找表可以以表格形式体现,也可以以其他能够表示输入数据与输出数据的对应关系的形式体现。
本申请实施例中,为作区分,用第一查找表、第二查找表或第三查找表来表示多个查找表,每个查找表的概念可以参照本第2)点的描述。
2)图像的一些属性或特点:
像素是构成图像的基本单元,像素的颜色通常用若干个(示例性的,比如三个)相对独立的属性来描述,这些独立属性综合作用,自然就构成一个空间坐标,即颜色空间。组成像素的独立属性称为每个像素的分量。示例性的,像素的分量可以是图像的颜色分量,例如R分量、G分量、B分量或Y分量。
亮度是场景辐射亮度的物理测量,单位是坎德拉每平方米(cd/m 2),也可以用nits表示。
一个对应于特定图像颜色分量的数值,称为该分量的基色值。而基色值具有不同的存在形式,例如基色值可以表现为线性基色值或非线性基色值。
线性基色值,与光强度成正比,其值归一化到[0,1],也称为光信号值,其中1表示最高显示亮度,使用不同转移函数时1的含义不同,如当使用PQ转移函数时,1表示最高显示亮度10000nits,如当使用SLF转移函数时,1表示最高显示亮度10000nits,如当使用HLG转移函数时,1表示最高显示亮度2000nits,如当使用BT.1886转移函数时,示例性的,1一般表示最高显示亮度300nits。
非线性基色值,是图像信息的归一化数字表达值,其值归一化到[0,1],也称为电信号值。线性基色值和非线性基色值间存在转换关系,例如:光电转移函数(optical-electro transfer function,OETF)可以用来实现线性基色值到非线性基色值的转换,而电光转移函数(electro-optical transfer function,EOTF)可以用来实现非线性基色值到线性基色值的转换。
常用的SDR光电转移函数包括国际电信联盟无线通信组(international telecommunications union-radio communications sector,ITU-R)BT.1886光电转换函数;对应的,SDR电光转换函数包括ITU-R BT.1886电光转换函数。常用的HDR光电转换函数具体可以包括,但不仅限于如下函数:感知量化(perceptual quantizer,PQ)光电转换函数,混合对数伽马(hybrid log-gamma,HLG)光电转换函数,场景亮度保真(scene luminance fidelity,SLF)光电转换函数。对应的,HDR电光转换函数具体可以包括,但不仅限于如下函数:PQ电光转换函数,HLG电光转换函数,SLF电光转换函数。
上述不同的光电/电光转换函数由不同的高动态范围图像解决方案分别提出,例如:PQ光电/电光转换函数(也称PQ转换曲线)由SMPTE2084标准定义,HLG光电/电光转换函数(也称HLG转换曲线)由BBC和NHK联合提出高动态图像标准定义。应理解,示例性的,经由PQ转换曲线转换的图像遵循SMPTE2084标准,经由HLG转换曲线转换的图像遵循HLG标准。
示例性的,使用PQ转换曲线进行转换后的数据,可以称为PQ域的光/电信号值;使用HLG转换曲线进行转换后的数据,可以称为HLG域的光/电信号值;使用SLF转换曲线进行转换后的数据,可以称为SLF域的光/电信号值。
本申请实施例中,图像的格式可以为红绿蓝(RGB)格式,也可以为亮色分离(YUV)格式,也可以为贝尔(bayer)格式。
3)动态范围:
图像传感器的动态范围都很小,一般CCD传感器的动态范围不超过1000:1,但真实场景中亮度的动态变化范围非常广,夜晚星光照射下场景的平均高度大概为0.0001cd/m 2,而白天阳光照射下场景的亮度则达到了100000cd/m 2
图像的动态范围一般可以包括高动态范围(high dynamic range,HDR)和标准动态范围(standard Dynamic Range,SDR)。HDR图像被用来描述真实世界场景的完整视觉范围,HDR图像能够展现可能会被传统拍摄设备丢失但却能被人类视觉系统感知的极暗和极亮区域的细节信息。
一般的,把图像光信号值动态范围超过0.01到1000nits的信号称为高动态范围光信号值;图像光信息值动态范围不足0.1到400nits的信号称为SDR光信号值。
对应于HDR信号和SDR信号,HDR显示设备显示能力满足HDR图像光信号值动态范围,并且支持HDR电光转换函数,SDR显示设备显示能力满足SDR图像光信号值动态范围,并且支持SDR光电转换函数。
为了使HDR图像能够在SDR显示设备上显示,使SDR图像能够在HDR显示设备上显示,或者HDR图像在具有不同HDR显示能力的HDR显示设备上显示,并且保证显示效果一致,不出现对比度变化、细节丢失等问题,就需要进行动态范围调整处理。
以HDR转SDR动态范围调整为例,一种实现方式中,输入HDR图像,显示设备为SDR显示设备时,采用以下技术方案:获得的HDR图像电信号值,通过动态范围调整,获得最终SDR图像电信号值。其中,动态范围调整所用到转换参数只和SDR显示设备的最大或者最小亮度等固定数据相关,这样的处理方法可能无法保证动态范围调整后SDR图像显示效果与HDR图像显示效果一致,会出现对比度变化、细节丢失等问题,进而影响图像的显示效果。
基于上述描述,如图3所示,本申请实施例提供的图像处理方法如下所述。该方法可以由图1所示的终端设备执行,也可以由其它具有图像处理功能的装置执行。
S301、确定待处理图像的像素的多个分量的基色值中的最大值。
本实施例中,该待处理图像的像素的该多个分量指的是像素中与亮度相关的分量。
例如,对于YUV空间,该基色值可以是YUV空间的Y分量的亮度值。
对于RGB空间,R分量、G分量以及B分量可以用于表征图像各颜色分量的亮度。可选的,对于RGB空间,可以根据R分量、G分量、B分量的色彩值计算Y分量的亮度值。例如,可以根据公式Y=a 11*R+a 12*G+a 13*B,计算Y分量的色彩值。其中,a 11、a 12、a 13是固定系数。本领域技术人员能够理解,a 11、a 12、a 13的取值可以有多种选择,本申请实施例对此不作限定。举例来讲,Y=0.2126*R+0.7152*G+0.0722*B或者Y=0.2627*R+0.6780*G+0.0593*B。
RGB空间中的R分量、G分量和B分量,以及YUV空间的Y分量均与图像的亮度有关,待处理图像的像素的多个分量的基色值,可以是指待处理图像的像素的R分量、G分量、B分量及Y分量的基色值。其中,对于RGB空间,待处理图像的像素的多个分量的基色值中的最大值,为R分量的基色值、G分量的基色值和B分量的基色值,这三个值中的最大值。假设基色值采用归一化方式表示,其最大取值为1,最小取值为0。待处理图像的像素的R分量的基色值为0.5,G分量的基色值为0.6、B分量的基色值为0.7,即确定0.7为该像素的三个分量的基色值中的最大值。假设基色值采用定点型数值,待处理 图像的像素的R分量的基色值为n1,G分量的基色值为n2、B分量的基色值为n3,n1、n2和n3均为定点型数值,n3>n2>n1,则n3为该像素的三个分量的基色值中的最大值。
当待处理图像的像素只包含一个适用的分量时,待处理图像的像素的多个分量的基色值中的最大值,即为该一个适用的分量的基色值。例如,处于YUV颜色空间时的Y分量,待处理图像的像素的多个分量的基色值中的最大值即为Y分量的基色值。
S302、根据第一查找表,确定与该最大值具有映射关系的比值。
其中,第一查找表包括或指示预设比值和预设基色值的映射关系。
该映射关系的确定,可以通过以下过程实现:根据第一转换函数获得预设基色值的转换值,将转换值与预设基色值的比值作为预设比值。
S303、根据S302中确定的与该最大值具有映射关系的比值,对该像素的该多个分量的基色值分别进行动态范围调整,以得到目标图像。
可选的,当待处理图像的图像动态范围大于目标图像的图像动态范围时,根据上述比值,对该像素的该多个分量的基色值进行缩小动态范围调整,或者,当待处理图像的图像动态范围小于目标图像的图像动态范围时,根据上述比值,对该像素的该多个分量的基色值进行扩大动态范围调整。
缩小也可以称为减小或降低,扩大也可以称为增大或升高。
可选的,可以将与该最大值具有映射关系的比值,分别乘以该像素的多个分量的基色值,来进行动态范围调整。还可以根据该比值进行其他的动态压缩处理方法,只要使得能对待处理图像的像素的多个分量进行动态范围缩小或扩大调整处理即可,目的是能较好兼容目标图像的显示设备显示即可,具体此处不做限定。
可以理解的是,待处理图像可能包括多个像素,每个像素均可以按照图3所示的流程进行处理,以得到目标图像。
图3实施例,在图像转换过程中,利用预设的转换曲线实现图像转换,使得不同动态范围的图像更好兼容不同显示能力的显示设备。举例来说,可以实现图像在SDR显示设备以及具有不同显示能力的HDR显示设备上兼容显示,且有效地保证图像显示效果一致,有助于保证对比度不变,避免细节丢失,进而提高或保持图像的显示效果。其中,图3实施例的各个步骤可以通过终端设备的硬件电路来实现,例如,通过第一查找表确定与最大值具有映射关系的比值的方式,可以实现整型数据的处理,从而能够将图像处理流程的执行过程落实到硬件电路,提高图像色彩处理方法的实际应用可能性。
下面对图3实施例的一些可选的实现方式作进一步介绍。
第一查找表的输入数据可以是浮点型数值也可以是整型数值或定点数值。以第一查找表的输入数据为定点数值为例进行介绍。
首先介绍一下查找表的生成过程的可能实现方式。
查找表的输入数据的取值范围可以根据基色值的比特位宽确定,查找表的输入数据的取值范围可以小于或等于基色值的比特位宽确定的取值范围。当基色值为定点数值时,一般情况下,基色值取值为N比特,N为正整数。例如,基色值取值为8比特、10比特、12比特、14比特或16比特。基色值的取值范围为(0~2 N-1)或(1~2 N)。例如,当RGB图像基色值的取值为10比特时,基色值的取值范围为(0~2 10-1)。
查找表的查表值作为输入数据,输出数据为查找表的表项数值。查找表包括或指示表项数值与查表值之间的映射关系。本申请实施例中,查找表包括预设比值和预设基色值的 映射关系,即可以认为表项数值表示预设比值,查表值表示预设基色值。
对于任一查表值,根据第一转换函数获得查表值的转换值,将转换值与该查表值的比值作为该查表值对应的表项数值。
当查表值为定点数值时,可以通过以下方式生成查找表。
根据由基色值的比特位宽确定的取值范围的最大值,对查表值(即定点数值)进行反量化,以得到浮点数值。例如,根据由基色值的比特位宽确定的取值范围的最大值为2 N-1,查表值为M,通过M/(2 N-1)得到浮点数值M1。基于第一转换函数,将该浮点数值转换为转换值,例如,基于第一转换函数将浮点数值M1转换为M2。M2为浮点型数值。获取转换值M2和查表浮点数值M1的比值M2/M1,M2/M1为浮点型数值。根据预设的量化系数,对比值M2/M1进行量化,以得到查找表的表项数值。M2/M1量化后得到的数据为定点型数值。即查找表的查表值和表项数值均可以为定点型数值。
遍历查找表的查表值,按照上述方法可以得到每一个查表值对应的表项数值,从而生成查找表。
查找表的查表值为查找表的输入数据,用于根据查表值得到对应的表项数值。查找表的索引值为表项数值的序号,一般根据自然数由小到大或由大到小的顺序排列生成。查表值可以根据索引值和索引值间的步长确定。
在一种可能的实施例中,图3实施例中所述的第一查找表的第一查表值的取值范围由基色值的比特位宽确定。例如,基色值取值为N比特,基色值的取值范围为(0~2 N-1),最大值为2 N-1。第一查表值可以设置为0~2 N-1中的值。第一查找表中的第一表项数值的序号可以称为第一查找表的索引值,例如,第一查找表中包括L个表项数值,则第一查找表的表项数值的序号为0~L-1或(1~L),L为正整数。第一查找表的索引值为0~L-1或(1~L)。查找表中每两个索引值之间的步长可以为1或大于1的整数。第一查表值基于第一查找表的索引值和索引值间的步长确定。当第一查表值为2 N个时,第一查表值与索引值一一对应,步长为1。步长也可以大于1,第一查表值=索引值×步长。
例如,基色值取值为12比特,基色值的取值范围为0~4095,最大值为4095。索引值可以为0~4095,或者为1~4096。假设索引值可以为0~1023,步长为4,查表值M依次为(0,4,8,12,16,20,……,4092)。
将(0,4,8,12,16,20,……,4092)中的每一个定点数值除以4095,得到每一个定点数值对应的浮点数值M1。
上述查找表的生成过程的可能实现方式适用于第一查找表。
在另一种可能的实施例中,基色值的比特位宽确定的取值范围包括第一取值范围和至少一个第二取值范围。即,基色值的比特位宽确定的取值范围可以包括多个子集。每个子集为一个取值范围,多个子集的并集为该基色值的比特位宽确定的取值范围,或者多个子集的并集也可以小于该基色值的比特位宽确定的取值范围。一般以两个子集为例,即基色值的比特位宽确定的取值范围包括第一取值范围和一个第二取值范围。图3实施例中所述的第一查找表的查表值的取值范围为第一取值范围。例如,基色值取值为N比特,基色值的取值范围为(0~2 N-1),最大值为2 N-1。第二取值范围为(0~N1),第一取值范围为(N1+1~2 N-1),第一取值范围的最小值大于第二取值范围的最大值。第一查找表的查表值的取值范围为(N1+1~2 N-1),第一查表值可以设置为(N1+1~2 N-1)中的值。第一查找表中的第一表项数值的序号可以称为第一查找表的索引值,例如,第一查找表中包括L个表 项数值,则第一查找表的表项数值的序号为0~L-1或(1~L),L为正整数。第一查找表的索引值为0~L-1或(1~L)。第一查找表中每两个索引值之间的步长可以为1或大于1的整数。第一查表值基于第一查找表的索引值和索引值间的步长确定。第一查表值可以与索引值一一对应,即步长为1。步长也可以大于1,此时第一查找表的第一查表值基于第一查找表的索引值、第一查找表的索引值间的步长和第二取值范围的最大值确定,例如第一查表值=索引值×步长+N1。N1为第二取值范围的最大值。若基色值的比特位宽确定的取值范围包括第一取值范围和多个第二取值范围,图3实施例中所述的第一查找表的查表值的取值范围为第一取值范围,第一查表值=索引值×步长+N1。N1为第一取值范围之前所有取值范围的最大值。
与第一查找表类似,根据第二取值范围可以生成第二查找表,第二查找表的查表值的取值范围对应第二取值范围。第二查找表的查表值的取值范围为(0~N1),第二查表值可以设置为(0~N1)中的值。第二查找表中的第二表项数值的序号可以称为第二查找表的索引值,例如,第二查找表中包括L1个表项数值,则第二查找表的表项数值的序号为0~L1-1或(1~L1),L1为正整数。第二查找表的索引值为0~L1-1或(1~L1)。第二查找表中每两个索引值之间的步长可以为1或大于1的整数。第二查表值基于第二查找表的索引值和索引值间的步长确定。第二查表值可以与索引值一一对应,即步长为1。步长也可以大于1,此时第二查找表的第二查表值基于第二查找表的索引值和第二查找表的索引值间的步长确定,例如第二查表值=索引值×步长。
可选的,第一查找表的索引值间的步长和第二查找表的索引值间的步长可以相同也可以不同。
例如,基色值取值为12比特,基色值的取值范围为0~4095,最大值为4095。第二取值范围为(0~255),第一取值范围为(256~4095)。第二查找表的表项数为64项,第二查找表的索引值间的步长256/64=4,第二查找表的表项数值为(0,4,8,12,16,20,……,252)。第一查找表的表项数为128项,第一查找表的索引值间的步长(4095-256)/128=30,第二查找表的表项数值为(256,286,316,,……,4066)。
第二查找表的索引值可以为0~63,或者为1~64。第一查找表的索引值可以为0~127,或者为1~128。
将第一查找表(256,286,316,,……,4066)中的每一个定点数值除以4095,得到每一个定点数值对应的浮点数值M1。将第二查找表(0,4,8,12,16,20,……,252)中的每一个定点数值除以4095,得到每一个定点数值对应的浮点数值。
上文中查找表的生成过程的可能实现方式可以适用于第一查找表,也可以适用于第二查找表。
当色彩值的比特位宽确定的取值范围包括第一取值范围和至少一个第二取值范围时,第一查找表对应第一取值范围。在这种情况下,在S302之前,还可以确定该多个分量的基色值中的最大值所在的第一取值范围对应的第一查找表。该多个分量的基色值中的最大值在以下描述中可以简述为该最大值。例如,可以设定一个阈值,根据该最大值与阈值的比较结果,确定该最大值所在的取值范围,进一步确定该取值范围对应的查找表。当该最大值小于阈值时,确定最大值所在第二取值范围,并确定第二取值范围对应的第二查找表,根据第二查找表,确定与该最大值具有映射关系的比值。当该最大值大于或等于阈值时,确定该最大值所在第一取值范围,并确定第一取值范围对应的第一查找表,根据第一查找 表,确定与该最大值具有映射关系的比值。基于上述举例,基色值取值为12比特,基色值的取值范围为0~4095,取值范围的最大值为4095。第二取值范围为(0~255),第一取值范围为(256~4095)。该阈值可以设置为256。
当然,还可以根据以下比较方式确定与该最大值具有映射关系的比值。当该最大值小于或等于阈值时,确定该最大值所在第二取值范围,并确定第二取值范围对应的第二查找表,根据第二查找表,确定与该最大值具有映射关系的比值。当该最大值大于阈值时,确定该最大值所在第一取值范围,并确定第一取值范围对应的第一查找表,根据确定的第一查找表,确定与该最大值具有映射关系的比值。基于上述举例,色彩值取值为12比特,色彩值的取值范围为0~4095,最大值为4095。第二取值范围为(0~255),第一取值范围为(256~4095)。该阈值可以设置为255。
以下介绍一下S302中根据第一查找表确定与该最大值具有映射关系的比值的可能实现方式。
在S302中,根第一查找表,确定与该最大值具有映射关系的比值。其中,第一查找表包括预设比值和预设基色值的映射关系。第一查找表的预设基色值中可能包括该最大值,则可以根据映射关系确定该最大值对应的比值即可。
但是,第一查找表的预设基色值中可能不包括该最大值,则可选的,本申请实施例可以通过插值法确定与该最大值具有映射关系的比值。
其中,插值法,是数学领域数值分析中,由已知的离散数据来估计未知数据的方法。在本申请实施例中,确定与该最大值具有映射关系的比值所采用的插值法可以是内插值法,也可以是外插值法,可以是线性插值法,也可以是非线性插值法,还可以是近插值法、双线性二次插值法、三次插值法或Lanczos插值法中的任一种,具体可以根据实际情况选择具体的插值法。
可选的,可以在第一查找表中确定第一预设基色值和第二预设基色值,根据预设比值和预设基色值的映射关系,分别确定第一预设基色值和第二预设基色值对应的第一比值和第二比值,对第一预设基色值和第二预设基色值进行插值运算,以得到与该最大值具有映射关系的比值。
第一预设基色值和第二预设基色值可以是该最大值邻近的两个基色值,例如,若采用内插值法,该最大值位于第一预设基色值和第二预设基色值之间,且在预设基色值中,该最大值与第一预设基色值和第二预设基色值均相邻。又例如,若采用外插值法,第一预设基色值和第二预设基色值均小于该最大值,且在预设基色值中,该最大值、第一预设基色值和第二预设基色值三者相邻;或者,第一预设基色值和第二预设基色值均大于该最大值,且在预设基色值中,该最大值、第一预设基色值和第二预设基色值三者相邻。
下面对线性插值法进行举例说明。
第一查找表(LUT1)的步长为2 step,第一查找表的表项个数为NUM,max为查找表插值的输入数值。
第一步,计算插值下标:i_int=((max>>step));
第二步,计算插值权重:i_dec=((max&(((1<<step)-1))));
第三步,计算最终量化插值:C1=(LUT1[i_int]*((1<<step)-i_dec)+LUT1[iClip(i_int+1,0,NUM-1)]*i_dec+(1<<(step-1)))>>step。
下面举例介绍一下如何对第一查找表(LUT)进行线性插值,第一查找表(LUT)的 步长为step,得到该最大值(max)对应的比值。
1、最大值(max)除以步长(step)后的数值再取整就是索引值A:A=max/step;
2、索引值A对应第一查表值的数值(data1)就是第一比值:data1=LUT[A];
3、索引值A+1对应的第一查表值的数值(data2)就是第二比值:data2=LUT[A+1];
4、最大值(max)模除所述步长(step)后的数值dec,dec=max%step;
5、最终插值data3:data3=(data1*(step-dec)+data2*dec)/step。data3即与该最大值具有映射关系的比值。
下面举例介绍一下当基色值的比特位宽确定的取值范围包括第一取值范围和至少一个第二取值范围时,第一查找表对应第一取值范围,在这种情况下,如何对第一查找表(LUT)进行线性插值,得到与该最大值具有映射关系的比值。
1、假设阈值为thres。最大值(max)减去thres后的数值为max1:max1=max-thres。
2、数值max1除以所述步长step后的数值取整就是索引值A:A=max1/step。
3、索引值A对应查表值data1就是第一比值:data1=LUT[A]。
4、索引值A+1对应的查表值data2就是第二比值:data2=LUT[A+1]。
5、数值max1模除所述步长step后的数值dec:dec=max1%step。
6、最终差值data3:data3=(data1*(step-dec)+data2*dec)/step。data3即与该最大值具有映射关系的比值。
本申请实施例中,生成第一查找表的流程为软件流程,可以通过软件实现。软件流程可以封装为固件/软件(firmware)。图3实施例的各个步骤为硬件流程,可以通过硬件电路实现。
基于此,例如,终端设备对连续的多帧待处理图像进行动态范围调整时,终端设备的硬件电路可以对每一帧待处理图像按照图3实施例的各个步骤进行处理,得到每一帧待处理图像对应的目标图像。在连续的每两帧之间的间隔内,终端设备的固件/软件可以生成第一查找表。当然终端设备对图像的动态范围调整过程进行软硬件分离,可以不局限于本段中举例的实现方式。
以下对第一转换函数进行说明。
本申请实施例中,在使用第一转换函数之前,还可以确定第一转换函数的动态参数,后续使用确定动态参数后的第一转换函数。
可选的,可以根据以下信息中的至少一种获得第一转换函数的动态参数:待处理图像的统计信息;待处理图像范围第一参考值;待处理图像范围第二参考值;目标图像范围第一参考值;目标图像范围第二参考值。
当待处理图像以及目标图像以序列的形式存在时,还可以根据以下信息中的至少一种获得第一转换函数的动态参数:待处理图像所在序列的统计信息;待处理图像所在序列范围第一参考值;待处理图像所在序列范围第二参考值;目标图像所在序列范围第一参考值;目标图像所在序列范围第二参考值。
在本申请实施例中,待处理图像或待处理图像所在序列的统计信息可以是指与待处理图像或待处理图像序列属性相关的信息。例如,待处理图像或待处理图像的像素的多个分量的基色值中的最大值、最小值、平均值、标准差以及直方图分布信息。其中,基色值可以是线性基色值,也可以是非线性基色值。基色值可以是亮度分量(Y分量),对应的基色值为非线性基色值。
应理解,待处理图像或待处理图像序列属性相关的信息除了上述列举的信息外,还可以是包括其他的信息,例如,还可以是待处理图像或待处理图像的多个分量的基色值的方差。也可以将上述列举的信息之间的某种函数关系作为统计信息,例如可以是指待处理图像或待处理图像序列的平均值与标准差的和,具体此处不做限定。
应理解,待处理图像或待处理图像序列的平均值具体可以是指:待处理图像或待处理图像序列的像素集合的R分量非线性基色值的平均值,或G分量非线性基色值的平均值,或B分量非线性基色值的平均值,或Y分量非线性基色值的平均值。
或者,待处理图像或待处理图像序列的平均值具体可以是指:待处理图像或待处理图像序列的像素集合的R分量线性基色值的平均值,或G分量线性基色值的平均值,或B分量线性基色值的平均值,或Y分量线性基色值的平均值。
应理解,针对不同颜色空间的待处理图像或待处理图像序列,其对应的非线性基色值或者线性基色值的平均值具体可以有多种情况,上述示例性地以颜色空间为RGB颜色空间以及YUV颜色空间为例进行描述,对于其他颜色空间,不再赘述。
在本申请实施例中,待处理图像或待处理图像序列范围第一参考值,可以包括以下任意一种:
用于显示待处理图像的显示设备的亮度最大值,其中,该显示设备为预先配置或选择的,作为确定转换函数的动态参数时用于显示待处理图像的显示设备;
根据待处理图像或待处理图像序列的统计信息,查找第一预置列表得到的第一参考值;
第一预设值,例如,第一预设值设为0.85或者0.53。
应理解,在本申请实施例中,根据待处理图像或待处理图像序列的统计信息,第一预置列表的方式获得上述待处理图像范围参考值,具体如下所述。
在一种可行的实施方式中,需要利用本申请实施例实现HDR图像到SDR图像的转换,待处理图像为HDR图像,以待处理图像的统计信息为待处理图像的平均值以及标准差之间的和为例,对根据待处理图像的统计信息,通过查找第一预置列表的方式得到第一参考值,即上述待处理图像范围参考值进行描述,其中,第一预置列表的列表信息如表1所示:
表1
平均值与标准差之间的和 0.2 0.5 0.7
待处理图像(HDR)范围参考值 0.85 0.9 0.92
如表1所示,例如,当待处理图像的平均值以及标准差之间的和大于0.7时,则待处理图像范围参考值取0.92;当待处理图像的平均值以及标准差之间的和小于0.2时,则待处理图像范围参考值取0.85;当待处理图像的平均值以及标准差之间的和介于0.2与0.5之间时,则待处理图像范围参考值的取值可以根据数据0.2以及0.5,利用插值的方式获得,当介于0.5与0.7之间时,也可以采取插值的方式获得,其中,可以采用线性插值、加权平均插值等插值方式获得,具体此处不做限定,也不再赘述。
在一种可行的实施方式中,需要利用本申请实施例实现SDR图像到HDR图像的转换,待处理图像为SDR图像,以待处理图像的统计信息为待处理图像的平均值以及标准差之间的和为例,对根据待处理图像的统计信息,通过查找第一预置列表的方式得到第一参考值,即上述待处理图像范围参考值进行描述,其中,第一预置列表的列表信息如表2所示:
表2
平均值与标准差之间的和 0.2 0.5 0.7
待处理图像(SDR)范围参考值 0.53 0.56 0.58
如表2所示,例如,当待处理图像的平均值以及标准差之间的和大于0.7时,则待处理图像范围参考值取0.58;当待处理图像的平均值以及标准差之间的和小于0.2时,则待处理图像范围参考值取0.53;当待处理图像的平均值以及标准差之间的和介于0.2与0.5之间时,则待处理图像范围参考值的取值可以根据数据0.2以及0.5,利用插值的方式获得,当介于0.5与0.7之间时,也可以采取插值的方式获得,其中,可以采用线性插值、加权平均插值等插值方式获得,具体此处不做限定,也不再赘述。
在一种可行的实施方式中,需要利用本申请实施例实现不同动态范围的HDR图像之间转换,待处理图像为HDR图像,以待处理图像的统计信息为待处理图像的平均值以及标准差之间的和为例,对根据待处理图像的统计信息,通过查找第一预置列表的方式得到第一参考值,即上述待处理图像范围参考值进行描述,其中,第一预置列表的列表信息如表3所示:
表3
平均值与标准差之间的和 0.2 0.5 0.7
待处理图像(HDR)范围参考值 0.82 0.85 0.90
如表3所示,例如:当待处理图像的平均值以及标准差之间的和大于0.7时,则待处理图像范围参考值取0.90;当待处理图像的平均值以及标准差之间的和小于0.2时,则待处理图像范围参考值取0.82;当待处理图像的平均值以及标准差之间的和介于0.2与0.5之间时,则待处理图像范围参考值的取值可以根据数据0.2以及0.5,利用插值的方式获得,当介于0.5与0.7之间时,也可以采取插值的方式获得,其中,可以采用线性插值、加权平均插值等插值方式获得,具体此处不做限定,也不再赘述。
应理解,表1~表3为预先配置的列表,表1~表3中的数据为根据经验数据获得的最优参数。另外应理解,表1~表3只是以待处理图像的统计信息为待处理图像的平均值以及标准差之间的和为例进行说明,通过待处理图像的其他的统计信息,或者通过待处理图像序列的统计信息也可以通过查表的方式获得待处理图像范围参考值,具体此处不做限定,也不再赘述。
在本申请实施例中,待处理图像或待处理图像序列范围第二参考值,可以包括以下任意一种:
用于显示第二待处理图像的显示设备的亮度最小值,其中,该显示设备为预先配置或选择的设备,作为确定转换函数的动态参数时用于显示待处理图像的显示设备;
根据待处理图像或待处理图像序列的统计信息,查找第二预置列表,得到的第二参考值;
第二预设值,例如,第二预设值设为0.05或者0.12。
同理,在本申请实施例中,通过待处理图像或待处理图像序列的统计信息,查找第二预置列表的方式获得上述待处理图像范围第二参考值。具体如下所述。
在一种可行的实施方式中,需要利用本申请实施例实现HDR图像到SDR图像的转换,待处理图像为HDR图像,以待处理图像的统计信息为待处理图像的平均值以及标准差之间的差为例,对根据待处理图像的统计信息,通过第二预置查找表的方式获得第二参考值, 即上述待处理图像范围第二参考值进行描述,其中,第二预置列表的列表信息如表4所示:
表4
平均值与标准差之间的差 0.1 0.2 0.35
待处理图像(HDR)范围第二参考值 0 0.005 0.01
如表4所示,例如,当待处理图像的平均值以及标准差之间的差大于0.35时,则待处理图像范围第二参考值取0.01;当待处理图像的平均值以及标准差之间的和小于0.1时,则待处理图像范围第二参考值取0;当待处理图像的平均值以及标准差之间的和介于0.1与0.2之间时,则待处理图像范围第二参考值的取值可以根据0.1以及0.2,利用插值的方式获得。其中,可以采用线性插值、加权平均插值等插值方式获得,具体此处不做限定,此处也不再赘述。
在一种可行的实施方式中,需要利用本申请实施例实现SDR图像到HDR图像的转换,待处理图像为SDR图像,以待处理图像的统计信息为待处理图像的平均值以及标准差之间的差为例,对根据待处理图像的统计信息,通过第二预置查找表的方式获得第二参考值,即上述待处理图像范围第二参考值进行描述,其中,第二预置列表的列表信息如表5所示:
表5
平均值与标准差之间的差 0.1 0.2 0.35
待处理图像(SDR)范围第二参考值 0.1 0.12 0.15
如表5所示,例如:当待处理图像的平均值以及标准差之间的差大于0.35时,则待处理图像范围第二参考值取0.15;当待处理图像的平均值以及标准差之间的和小于0.1时,则待处理图像范围第二参考值取0.1;当待处理图像的平均值以及标准差之间的和介于0.1与0.2之间时,则待处理图像范围第二参考值的取值可以根据0.1以及0.2,利用插值的方式获得。其中,可以采用线性插值、加权平均插值等插值方式获得,具体此处不做限定,此处也不再赘述。
在一种可行的实施方式中,需要利用本申请实施例实现不同动态范围的HDR图像之间转换,待处理图像为HDR图像,以待处理图像的统计信息为待处理图像的平均值以及标准差之间的差为例,对根据待处理图像的统计信息,通过第二预置查找表的方式获得第二参考值,即上述待处理图像范围第二参考值进行描述,其中,第二预置列表的列表信息如表6所示:
表6
平均值与标准差之间的差 0.1 0.2 0.35
待处理图像(HDR)范围第二参考值 0.005 0.01 0.012
如表6所示,例如:当待处理图像的平均值以及标准差之间的差大于0.35时,则待处理图像范围第二参考值取0.012;当待处理图像的平均值以及标准差之间的和小于0.1时,则待处理图像范围第二参考值取0.005;当待处理图像的平均值以及标准差之间的和介于0.1与0.2之间时,则待处理图像范围第二参考值的取值可以根据0.1以及0.2,利用插值的方式获得。其中,可以采用线性插值、加权平均插值等插值方式获得,具体此处不做限定,此处也不再赘述。
同样应理解,表4~表6为预先配置的列表,表4~表6中的数据为根据经验数据获得的最优参数。另外应理解,表4~表6在这里只是以待处理图像的统计信息为待处理图像的平均值以及标准差之间的差为例进行说明,通过待处理图像的其他的统计信息,也可以通 过查表的方式获得待处理图像范围第二参考值,具体此处不做限定,也不再赘述。
在本申请实施例中,目标图像或目标图像序列范围第一参考值,可以包括以下任意一种:
用于显示目标图像的显示设备的亮度最大值,其中,该显示设备为预先配置或选择的设备,作为确定转换函数的动态参数时用于显示目标图像的显示设备;
第三预设值,例如,第三预设值设为0.53或0.85。
在本申请实施例中,目标图像或目标图像序列范围第二参考值,可以包括以下任意一种:
用于显示目标图像的显示设备的亮度最小值,其中,该显示设备为预先配置或选择的设备,作为确定转换函数的动态参数时用于显示目标图像的显示设备;
第四预设值,例如,第四预设值设为0.12或0.05。
下面对第一转换函数进行介绍。
示例性的,第一转换函数可以为S型转换曲线或者反S型转换曲线。
如图4所示,S型转换曲线可以是一种斜率先上升后下降的曲线。S型转换曲线还可以是包含一段或多段曲线且斜率先上升后下降的曲线。如图5所示,示意了一种由2段曲线组成的S型转换曲线。图5中,黑点表示两段曲线的连接点。
若将HDR图像转换为SDR图像,则第一转换函数可以是S型转换曲线。若将SDR图像转换为HDR图像,则第一转换函数可以是反S型转换曲线。若实现不同动态范围的HDR图像之间的转换,则第一转换函数可以是S型转换曲线,也可以是反S型转换曲线。
下面通过具体的形式对在本申请实施例中所适用的几种S型转换曲线进行举例描述。
方式一、S型转换曲线可以符合以下公式(1):
Figure PCTCN2020089496-appb-000013
其中,L为待处理图像的像素的多个分量的基色值中的最大值,L'为该像素的该最大值对应的转换值,a、b、p和m为S型转换曲线的动态参数,其中,p和m用来控制曲线形状以及曲线的弯曲程度,a和b用来确定曲线的范围,即曲线起点,终点的位置。
可选地,公式(1)中的p以及m参数可以通过多种方式获得,下面进行举例描述。
例1、根据待处理图像或待处理图像序列的统计信息,通过查找预置列表的方式获得p和m。
为了便于描述,待处理图像或待处理图像序列的统计信息以待处理图像序列的Y通道的基色值的平均值为例进行说明,这里假设待处理图像序列的Y通道的基色值的平均值为y,例1中描述的预置列表的信息如下表7a或表7b所示:
表7a
y 0.1 0.25 0.3 0.55 0.6
p 6.0 5.0 4.5 4.0 3.2
m 2.2 2.25 2.3 2.35 2.4
表7b
y 0.1 0.3 0.5
p 34 32 31
m 18 18 18
如表7a所示,当待处理图像序列的Y通道的基色值的平均亮度值y大于0.6时,p参数取3.2,m参数取2.4;当y小于0.1时,p参数取6.0,m参数取2.2;当y介于0.55与0.6之间时,p和m的值,可以通过插值方式获得。
其中,插值方法可以使用任意方式,例如线性插值、加权平均插值等方式,具体此处不做限定。例如,这里以p为例进行说明,当y介于0.55与0.6之间时,可以通过如下线性插值方式获得p参数:
p=4.0+(y-0.55)/(0.6-0.55)*(3.2-4.0);
对于其他情况,例如当y介于0.1与0.25时,可以此类推获得对应的p以及m参数,具体此处不做赘述。
如表7b所示,当待处理图像序列的Y通道的基色值的平均亮度值y大于0.5时,p参数取31,m参数取18;当y小于0.1时,p参数取34,m参数取18;当y介于0.3与0.5之间时,p和m的值,可以通过插值方式获得。
其中,插值方法可以使用任意方式,例如线性插值、加权平均插值等方式,具体此处不做限定。例如,这里以p为例进行说明,当y介于0.3与0.5之间时,可以通过如下线性插值方式获得p参数:
P=32+(y-0.3)/(0.5-0.3)*(31-32);
对于其他情况,例如当y介于0.1与0.3时,可以此类推获得对应的p以及m参数,具体此处不做赘述。
应理解,表7为预先配置的列表,表7中的数据为根据经验数据获得的参数。类似地,通过待处理图像或待处理图像序列的其他的统计信息,也可以通过查表的方式获得p和m参数,具体此处不做限定,也不再赘述。
例2、根据目标图像显示设备的性能参数,例如伽马(Gamma)值,以及待处理图像或待处理图像序列的统计信息共同确定p和m参数。
可以先确定目标图像显示设备的Gamma值,将参考目标图像显示设备的伽马Gamma值作为m参数,其中,示例性的,一般的SDR显示设备的Gamma均为2.4,即可以取m参数为2.4;而p参数则通过查找上述表3的方式获得。
例3、可以嵌入至前期制作中,通过调色人员手动调整,使得目标图像与获取的待处理图像的色彩、饱和度以及对比度等颜色信息基本保持一致时所对应的p和m参数,接收调色人员调整出的p以及m参数。
应理解,除了以上例1~例3几种方式外,还可以通过其他的方式获得p和m参数。
可选地,公式(1)中的a以及b参数可以通过多种方式获得,下面进行举例描述。
当确定了p以及m参数后,可以通过以下公式(2)和公式(3)确定a和b参数。
Figure PCTCN2020089496-appb-000014
Figure PCTCN2020089496-appb-000015
其中,L 1为待处理图像或待处理图像所在图像序列范围的第一参考值,L 2为待处理图像或待处理图像所在图像序列范围的第二参考值,L 1为目标图像或目标图像序列范围的第 一参考值,L 2为目标图像或目标图像序列范围的第二参考值。
方式二、采取如下形式的S型转换曲线,由两段函数构成:
当L 0≤L≤L 1时,采用以下公式(4)计算L'值:
L'=(2t 3-3t 2+1)L' 0+(t 3-2t 2+t)(L 1-L 0)k 0+(-2t 3+3t 2)L' 1+(t 3-t 2)(L 1-L 0)k 1
其中,
Figure PCTCN2020089496-appb-000016
当L 1<L≤L 2时,采用以下公式(5)计算L'值:
L'=(2t 3-3t 2+1)L' 1+(t 3-2t 2+t)(L 2-L 1)k 1+(-2t 3+3t 2)L' 2+(t 3-t 2)(L 2-L 1)k 2
其中,
Figure PCTCN2020089496-appb-000017
其中,L为待处理图像的像素的多个分量的基色值中的最大值,L'为该像素的该最大值对应的转换值;
L 0、L 1、L 2、L' 0、L' 1、L' 2、k 0、K 1以及K 2为S型转换曲线动态参数,L 0、L' 0、k 0表示第一段曲线起点的输入值、输出值、斜率;L 1、L′ 1、K 1表示第一段与第二段曲线连接点的输入值、输出值、斜率;L 2、L' 2、K 2表示第二段曲线终点的输入值、输出值、斜率;k 0、K 1、K 2满足k 0<K 1,且K 1>K 2,即保证本方式二中的S型转换曲线为斜率先上升后下降的曲线。
可选地,L 0为待处理图像或待处理图像序列范围参考值,L 2为待处理图像或待处理图像序列范围第二参考值,L' 0为目标图像或目标图像序列范围参考值,L' 2为目标图像或目标图像序列范围第二参考值;
其中,L 1,L' 1,k 0,K 1,K 2参数由根据待处理图像或待处理图像序列的统计信息,通过查找第四、五预置列表的方式获得。
其中,第四预置列表包括表4以及第五预置列表包括表5,对于L 1,k 0,K 1,K 2,可以查找下表8的方式获得,这里以待处理图像或待处理图像序列的统计信息为待处理图像序列的Y通道的非线性基色值的平均值为例进行描述,这里假设待处理图像序列的Y通道的非线性基色值的平均值为y,则其对应的列表信息具体如下表8所示:
表8
y 0.1 0.25 0.3 0.55 0.6
L 1 0.13 0.28 0.34 0.58 0.63
k 0 0 0.05 0.1 0.15 0.2
K 1 0.8 1.0 1.2 1.4 1.5
K 2 0 0.05 0.1 0.15 0.2
如表8所示,例如,当y为0.1时,对应的,L 1取0.13,k 0取0,K 1取0.8,K 2取0,当y为其他数值时,根据表8以此类推可以获得对应的L 1,k 0,K 1,K 2参数,具体此处不再赘述。
这里应理解,当y介于表8中y对应的数值时,例如,当y介于0.5与0.55之间时, 可以通过插值方式获得对应的L 1,k 0,K 1,K 2参数,具体此处不再描述。
对于L′ 1,可以通过查找表9的方式获得,这里以待处理图像或待处理图像序列的统计信息为待处理图像的Y通路非线性基色平均值与标准差的和为例进行描述,这里假设待处理图像的的平均值与标准差和为x,具体如下表9所示:
表9
x 0.2 0.5 0.7
L′ 1 0.3 0.4 0.5
如表9所示,例如,当x为0.2时,L′ 1取0.3,当x为0.5时,L′ 1取0.7,当x介于0.2与0.5之间时,可以通过插值的方式获得对应的L′ 1,具体的如何通过插值方式获得,这里不再赘述。
应理解,在本实施例中,L′ 1除了通过查表的方式获得外,还可以通过预置计算公式获得,例如可以通过以下公式(6)获得L′ 1
Figure PCTCN2020089496-appb-000018
本实施例中,当获取了S型转换曲线动态参数后,可以利用S型转换曲线对待处理图像的像素的多个分量的基色值最大值进行处理,以S型转换曲线为上述方式一、方式二所述的S型转换曲线为例,可以将待处理图像的像素的多个分量的基色值的最大值代入方式一以及方式二中所示的公式,获得转换值。
在一种可行的实施方式中,转换函数为反S型转换曲线。
在本申请实施例中,优选地,在本申请实施例中的反S型转换曲线为斜率先下降后上升的曲线,如图6所示,图6为一种斜率下降后上升的反S型转换曲线一个示意图。
反S型转换曲线可以是包含一段或多段曲线,斜率先下降后上升的曲线。图7示意了一种由2段曲线组成的反S型转换曲线示意图,黑点表示两段曲线的连接点。
为了便于理解,下面通过具体的形式对在本申请实施例中所优先采取的反S型转换曲线进行描述:
方式一、反S型转换曲线可以符合下述公式(7)。
Figure PCTCN2020089496-appb-000019
其中,L为目标图像的像素的多个分量的基色值中的最大值,L'为目标图像的像素的多个分量的基色值中的最大值的转换值,a、b、p以及m参数为反S型转换曲线的动态参数,p以及m参数用来控制曲线形状以及曲线的弯曲程度,a以及b参数用来确定曲线的范围,即曲线起点,终点的位置。
可选地,公式(7)中的p以及m参数可以通过多种方式获得,下面进行举例说明。
1、根据待处理图像或待处理图像序列的统计信息,通过查找第六预置列表的方式获得p以及m参数。
为了便于描述,这里假设待处理图像或待处理图像序列的Y通道的非线性基色值的平均值为y,第七预置列表的信息如下表10所示:
表10
y 0.1 0.25 0.3 0.55 0.6
p 6.0 5.0 4.5 4.0 3.2
m 2.2 2.25 2.3 2.35 2.4
如表10所示,当待处理或待处理图像序列的Y通道的非线性基色值的平均亮度值y大于0.6时,p参数取3.2,m参数取2.4;当y小于0.1时,p参数取6.0,m参数取2.2;当y介于0.55与0.6之间时,p以及m参数,可以通过插值方式获得,具体此处不做限定,也不再赘述。
2、根据目标图像显示设备的性能参数,例如Gamma值。以及待处理图像或待处理图像序列的统计信息共同获得p、m参数。
例如,可以选择目标图像显示设备的伽马(Gamma)值作为m参数,而p参数则通过查找上述表3的方式获得。
3、可以嵌入至前期制作中,通过调色人员手动调整,使得获取的待处理图像与目标图像的色彩、饱和度以及对比度等颜色信息基本保持一致时所对应的p和m参数,接收调整出的p以及m参数。
应理解,除了以上1~3几种方式外,还可以通过其他的方式获得p和m参数。
可选地,公式(7)中的a以及b参数可以通过多种方式获得,下面进行举例描述。
当确定了p以及m参数后,可以通过以下公式(8)和公式(9)确定a和b参数。
Figure PCTCN2020089496-appb-000020
Figure PCTCN2020089496-appb-000021
其中,L 1为待处理图像或待处理图像所在图像序列范围第一参考值,L 2为待处理图像或待处理图像所在图像序列范围第二参考值,L 1为目标图像或目标图像序列范围第一参考值,L 2为目标图像或目标图像序列范围第二参考值。
方式二、采取如下形式的反S型转换曲线,由两段函数构成:
当L 0≤L≤ L1时,采用以下公式(10)计算L'值:
L'=(2t 3-3t 2+1)L' 0+(t 3-2t 2+t)(L 1-L 0)k 0+(-2t 3+3t 2)L' 1+(t 3-t 2)(L 1-L 0)k 1
其中,
Figure PCTCN2020089496-appb-000022
当L 1<L≤L 2时,采用以下公式(11)计算L'值:
L'=(2t 3-3t 2+1)L' 1+(t 3-2t 2+t)(L 2-L 1)k 1+(-2t 3+3t 2)L' 2+(t 3-t 2)(L 2-L 1)k 2
其中,
Figure PCTCN2020089496-appb-000023
其中,L为目标图像信息的像素的多个分量的基色值中的最大值,L'为目标图像信息的像素的多个分量的基色值中的最大值转换值;
L 0、L 1、L 2、L' 0、L' 1、L' 2、k 0、K 1以及K 2为S型转换曲线动态参数,L 0、L' 0、k 0 表示段曲线起点的输入、输出值、斜率;L 1、L′ 1、K 1表示段与第二段曲线连接点的输入值、输出值、斜率;L 2、L' 2、K 2表示第二段曲线终点的输入值、输出值、斜率;k 0、K 1、K 2满足k 0>K 1,且K 1<K 2,即保证本方式二中的反S型转换曲线为斜率先下降后上升的曲线。
可选地,本实施例中,L 0为待处理图像或待处理图像序列范围参考值,L 2为待处理图像或待处理图像序列范围第二参考值,L' 0为目标图像或目标图像序列范围参考值,L' 2为目标图像或目标图像序列范围第二参考值;
L 1,L' 1,k 0,K 1,K 2参数根据待处理图像或待处理图像序列的统计信息,通过查找第七、八预置列表的方式获得。
其中,第七预置列表包括表11以及第八预置列表包括表12。对于L 1,k 0,K 1,K 2,可以查找下表11的方式获得,这里以待处理图像或待处理图像序列的统计信息为待处理图像或待处理图像序列的Y通道的非线性基色值的平均值为例进行描述,这里假设待处理图像或待处理图像序列的Y通道的非线性基色值的平均值为y,具体如下表11所示:
表11
y 0.1 0.25 0.3 0.55 0.6
L 1 0.13 0.28 0.34 0.58 0.63
k 0 0.8 1.0 1.2 1.4 1.5
k 1 0 0.05 0.1 0.15 0.2
k 2 0.8 1.0 1.2 1.4 1.5
如表11所示,例如,当y为0.1时,对应的,L 1取0.13,k 0取0.8,K 1取0,K 2取0.8,当y为其他数值时,根据表11以此类推可以获得对应的L 1,k 0,K 1,K 2参数,具体此处不再赘述。
对于L′ 1,可以通过查找表12的方式获得,这里以待处理图像或待处理图像序列的统计信息为待处理图像或待处理图像序列的平均值与标准差之间的和为例进行描述,这里假设待处理图像或待处理图像序列的平均值与标准差之间的和为x,具体如下表12所示:
表12
x 0.2 0.5 0.7
L′ 1 0.3 0.4 0.5
如表12所示,例如,当x为0.2时,L′ 1取0.3,当x为0.5时,L′ 1取0.4,当x介于0.2与0.5之间时,可以通过插值的方式获得对应的L′ 1,具体的如何通过插值方式获得,这里不再赘述。
应理解,在本实施例中,L′ 1除了通过查表方式获得外,还可以通过预置计算公式获得,例如可以通过以下公式(6)获得L′ 1
Figure PCTCN2020089496-appb-000024
本实施例中,当获取了反S型转换曲线动态参数后,可以利用反S型转换曲线对待处理图像的像素的多个分量的基色值最大值进行处理,以反S型转换曲线为上述方式一、方式二所述的反S型转换曲线为例,可以将待处理图像的像素的多个分量的基色值的最大值代入方式一以及方式二中所示的公式,获得转换值。
基于上述实施例,为了对本申请实施例提供的图像处理方法作进一步了解,如图8所示,以RGB格式的图像为例,介绍一种具体场景的可选实施例。图8实施例中,描述了对待处理图像的任一像素的处理过程,待处理图像包括的多个像素中的每一个像素均可以参照图8所示的方法操作,最终获得待处理图像对应的目标图像。
S801,获取待处理图像的像素的3个色彩分量的基色值R、G、B。
S802,确定待处理图像的像素的3个色彩分量的基色值中的最大值MAX。
S803,将最大值MAX代入查找表,得到与最大值MAX对应的比值C1。
该查表值的概念和生成方法可以参照上文中第一查表值的描述。可选的,若基色值的比特位宽确定的取值范围包括多个取值范围,每个取值范围对应生成一个查找表,则将最大值MAX代入查找表之前,还需要判断最大值MAX所在的取值范围,选择与最大值MAX所在的取值范围对应的查找表。在图8中对该可选的步骤通过虚线框来示意。
S804,将待处理图像的像素的R分量、G分量、B分量的基色值分别与该比值C1相乘,得到乘积R1、G1和B1。
S805,采用预设的量化系数A对R1、G1、B1进行除法或位移运算,得到待处理图像的像素的R分量、G分量和B分量动态范围调整后的基色值:R’、G’、B’。
其中,位移运算是指将指数向左进行位移,例如A为2 R1,Z为2 R2,使用A对一个数值Z进行位移运算,即对A左移R1位,运算结果为2 R2-R1。位移运算与除法运算的运算结果相同。
应理解,除了将该比值C1乘以R分量、G分量、B分量的基色值进行动态压缩之外,还可以根据该比值C1进行其他的动态压缩处理方法,只要使得能对待处理图像的像素的多个分量进行动态范围缩小或扩大调整处理即可,目的是能较好兼容目标图像的显示设备显示即可,具体此处不做限定。
以RGB格式的图像为例,以基色值的比特位宽确定的取值范围包括两个取值范围为例,如图9所示,介绍一个可选的实施例。图9实施例中,描述了对待处理图像的任一像素的处理过程,待处理图像包括的多个像素中的每一个像素均可以参照图9所示的方法操作,最终获得待处理图像对应的目标图像。
假设两个取值范围为第一取值范围和第二取值范围,第一取值范围的最大值小于第二取值范围的最小值,第一取值范围对应查找表1,第二取值范围对应查找表2。查表值1和查找表2的概念和生成方法可以参照上文中第一查表值的描述。
S901,获取待处理图像的像素的3个色彩分量的基色值R、G、B。
S902,确定待处理图像的像素的3个色彩分量的基色值中的最大值MAX。
下面根据最大值MAX与阈值的大小关系,选择查找表,例如,可以按照S903的方式判断。
S903、判断最大值MAX是否小于阈值,若是,则执行S904~S906;否则执行S904’~S906’。
S904,将最大值MAX代入查找表1,得到与最大值MAX对应的比值C1。
S905,将待处理图像的像素的R分量、G分量、B分量的基色值分别与该比值C1相 乘,得到乘积R1、G1和B1。
其中,位移运算是指将指数向左进行位移,例如A为2 R1,Z为2 R2,使用A对一个数值Z进行位移运算,即对A左移R1位,运算结果为2 R2-R1。位移运算与除法运算的运算结果相同。
S906,采用预设的量化系数A1对R1、G1、B1进行除法或位移运算,得到待处理图像的像素的R分量、G分量和B分量动态范围调整后的基色值:R’、G’、B’。
S904’,将最大值MAX代入查找表2,得到与最大值MAX对应的比值C2。
S905’,将待处理图像的像素的R分量、G分量、B分量的基色值分别与该比值C2相乘,得到乘积R2、G2和B2。
S906’,采用预设的量化系数A2对R2、G2、B2进行除法或位移运算,得到待处理图像的像素的R分量、G分量和B分量动态范围调整后的基色值:R’、G’、B’。
其中,位移运算是指将指数向左进行位移,例如A为2 R1,Z为2 R2,使用A对一个数值Z进行位移运算,即对A左移R1位,运算结果为2 R2-R1。位移运算与除法运算的运算结果相同。
应理解,除了将该比值C1乘以R分量、G分量、B分量的基色值进行动态压缩之外,还可以根据该比值C1进行其他的动态压缩处理方法,除了将该比值C2乘以R分量、G分量、B分量的基色值进行动态压缩之外,还可以根据该比值C2进行其他的动态压缩处理方法。只要使得能对待处理图像的像素的多个分量进行动态范围缩小或扩大调整处理即可,目的是能较好兼容目标图像的显示设备显示即可,具体本申请实施例不做限定。
在一个可能的实施例中,若S303中进行动态范围调整的基色值为非线性基色值,得到的目标图像记为第一目标图像。则对该像素的该多个分量的基色值分别进行动态范围调整之后,还可以有以下步骤:根据第二转换函数,将第一目标图像的像素的多个分量的非线性基色值转换为第二目标图像的对应像素的多个分量的线性基色值。
若实现HDR图像到SDR图像的转换,可以根据HDR电光转换函数对目标图像信息进行电光转换,得到SDR图像的各像素的多个分量的线性基色值,其中,该目标图像信息包含待处理图像的像素经过动态范围缩小调整后的多个分量的非线性基色值。
若进行HDR图像之间的转换,不妨设第一目标图像为使用了第一标准定义的第一转换曲线进行转换而得到的HDR图像,在本申请实施例中,也称为遵循第一标准的图像,则,在本步骤中,第二转换函数为第一标准定义的第一转换曲线。即,根据第一转换曲线,将第一目标图像的每个像素的多个分量的非线性基色值转换为第二目标图像的对应像素点的多个分量的线性基色值。示例性的,不妨设第一目标图像为PQ域数据,则PQ转换曲线将PQ域的第一目标图像的像素的非线性基色值转换为第二目标图像的像素点的线性基色值。应理解,高动态范围图像标准所定义的转换曲线包括,但不限于PQ转换曲线、SLF转换曲线、HLG转换曲线,不做限定。
在本另一个可能的实施例中,在根据第二转换函数,将第一目标图像的像素的多个分量的非线性基色值转换为第二目标图像的对应像素的多个分量的线性基色值之后,还包括以下步骤:
根据第三转换函数,将第二目标图像的对应像素的多个分量的线性基色值转换为第二目标图像的所述对应像素的多个分量的非线性基色值;
若进行HDR图像到SDR图像的转换,根据SDR光电转换函数对SDR图像各像素的多个分量的线性基色值进行光电转换获得输出SDR图像像素的多个分量的非线性基色值,最终可以输出至SDR显示设备上显示。
若进行HDR图像之间的转换,不妨设第二目标图像为使用了第二标准定义的第二转换曲线进行转换而得到的HDR图像,在本发明实施例中,也称为遵循第二标准的图像,则,在本步骤中,第三转换函数为第二标准定义的第二转换曲线。即,根据第二转换曲线,将第二目标图像的像素的多个分量的线性基色值转换为第二目标图像的对应像素点的多个分量的非线性基色值。示例性的,不妨设第二目标图像为HLG域数据,则HLG转换曲线将第二目标图像的像素的线性基色值转换为HLG域的第二目标图像的像素点的非线性基色值。应理解,高动态范围图像标准所定义的转换曲线包括,但不限于PQ转换曲线、SLF转换曲线、HLG转换曲线,不做限定。
在本另一个实施例中,在根据第二转换函数,将第一目标图像的像素的多个分量的非线性基色值转换为第二目标图像的对应像素的多个分量的线性基色值之后,还包括以下步骤:
判断输出第二目标图像显示设备的颜色空间与第二目标图像的非线性基色值所对应的颜色空间是否相同;
若不同,则将第二目标图像的非线性基色值所对应的颜色空间转换为输出第二目标图像显示设备的颜色空间。
例如,若第二目标图像的非线性基色值所对应的颜色空间为BT.2020色彩空间,而输出第二目标图像显示设备是BT.709色彩空间,则从BT.2020色彩空间转换成BT.709色彩空间,之后再执行上述实施例12的步骤308。
在本申请实施例中,可以有效地保证动态范围调整后,目标图像显示效果与待处理图像显示效果的一致性,减少出现对比度变化、细节丢失等问题的概率,进而减少对图像的显示效果的影响。
若S301中确定待处理图像的像素的多个分量的基色值中的最大值,其中,待处理图像的像素的多个分量的基色值为非线性基色值,S301中待处理图像记为第一待处理图像,则在S301之前还包括以下步骤:根据第四转换函数,将第二待处理图像的像素点的多个分量的线性基色值转换为第一待处理图像的对应像素的多个分量的非线性基色值。
若进行SDR图像到HDR图像的转换,当获得了SDR图像各像素点的多个分量的值后,根据HDR光电转换函数对SDR图像各像素点的多个分量的值进行光电转换,得到目标图像信息,所述目标图像为SDR图像的值经过HDR光电转换函数转换后对应的非线性基色值。
若进行HDR图像之间的转换,不妨设第一待处理图像为使用了第一标准定义的第一转换曲线进行转换而得到的HDR图像,在本发明实施例中,也称为遵循第一标准的图像,则,在本步骤中,第四转换函数为第一标准定义的第一转换曲线。即,根据第一转换曲线,将第二待处理图像的像素的多个分量的线性基色值转换为第一待处理图像的对应像素的多个分量的非线性基色值。示例性的,不妨设第一待处理图像为PQ域数据,则PQ转换曲线将第二待处理图像的像素的线性基色值转换为PQ域的第一待处理图像的像素的非线性基色值。应理解,高动态范围图像标准所定义的转换曲线包括,但不限于PQ转换曲线、SLF转换曲线、HLG转换曲线,不做限定。
在一个可能的实施例中,在根据第四转换函数,将第二待处理图像的像素点的多个分量的线性基色值转换为第一待处理图像的对应像素的多个分量的非线性基色值之前,还包括以下步骤:根据第五转换函数,将第二待处理图像的像素的多个分量的非线性基色值转换为第二待处理图像的对应像素的多个分量的线性基色值。
若进行SDR图像到HDR图像的转换,当获取了SDR图像的像素的多个分量的非线性基色值后,根据SDR电光转换函数将其进行电光转换,得到SDR图像像素的多个分量的值。
若进行HDR图像之间的转换,不妨设第二待处理图像为使用了第二标准定义的第二转换曲线进行转换而得到的HDR图像,在本申请实施例中,也称为遵循第二标准的图像,则,在本步骤中,第五转换函数为第二标准定义的第二转换曲线。即,根据第五转换曲线,将第二目标图像的每个像素的多个分量的非线性基色值转换为第二目标图像的对应像素的多个分量的线性基色值。示例性的,不妨设第二目标图像为HLG域数据,则HLG转换曲线将HLG域的第二目标图像的像素的非线性基色值转换为第二目标图像的像素的线性基色值。应理解,高动态范围图像标准所定义的转换曲线包括,但不限于PQ转换曲线、SLF转换曲线、HLG转换曲线,不做限定。
在一个可能的实施例中,根据第五转换函数,将第二待处理图像的像素的多个分量的非线性基色值转换为第二待处理图像的对应像素的多个分量的线性基色值之后,还包括以下步骤:判断第二待处理图像显示设备的颜色空间与第一待处理图像的颜色空间是否相同;
若不同,则将第一待处理图像的颜色空间转换为第二待处理图像显示设备的颜色空间。
例如,若第一待处理图像的颜色空间为BT.709颜色空间,而第二待处理图像显示设备是颜色空间为BT.2020,则从BT.709颜色空间转换成BT.2020颜色空间,之后再根据第四转换函数,将第二待处理图像的像素点的多个分量的线性基色值转换为第一待处理图像的对应像素的多个分量的非线性基色值。
在本实施例中,可以有效地保证动态范围调整后,目标图像显示效果与第一待处理图像显示效果的一致性,减少出现对比度变化、细节丢失等问题的概率,进而减少对图像的显示效果的影响。
在本申请的一个实施例S1中,实现了HDR输入图像的像素点的线性基色值到非线性基色值的转换。
示例性的,HDR输入信号源,包括浮点或者半浮点的线性EXR格式HDR图像数据,PQ或者Slog-3(采集模式)采集的HDR图像数据以及SLF HDR图像数据输入。
示例性的,线性基色值(R,G,B)到PQ域非线性基色值(R’,G’,B’)的转换,遵循以下公式:
R’=PQ_TF(max(0,min(R/10000,1)))
G’=PQ_TF(max(0,min(G/10000,1)))
B’=PQ_TF(max(0,min(B/10000,1)))
其中:
Figure PCTCN2020089496-appb-000025
Figure PCTCN2020089496-appb-000026
Figure PCTCN2020089496-appb-000027
Figure PCTCN2020089496-appb-000028
Figure PCTCN2020089496-appb-000029
Figure PCTCN2020089496-appb-000030
示例性的,线性基色值(R,G,B)到SLF域非线性基色值(R’,G’,B’)的转换,遵循以下公式:
R’=SLF_TF(max(0,min(R/10000,1)))
G’=SLF_TF(max(0,min(G/10000,1)))
B’=SLF_TF(max(0,min(B/10000,1)))
其中:
Figure PCTCN2020089496-appb-000031
m=0.14
p=2.3
a=1.12762
b=-0.12762
在本申请的另一个实施例S2中,实现了HDR输入图像的像素点的非线性基色值到线性基色值的转换。
示例性的,HDR输入信号源,包括浮点或者半浮点的线性EXR格式HDR图像数据,PQ或者Slog-3(采集模式)采集的HDR图像数据以及SLF HDR图像数据输入。
示例性的,从Slog-3的非线性基色值到SLF域的非线性基色值的转换包括:
S21、把S-Log3域的HDR非线性基色值转换为HDR线性基色值;
If in>=171.2102946929/1023.0
out=(10.0^((in*1023.0-420.0)/261.5))*(0.18+0.01)-0.01
else
out=(in*1023.0–95.0)*0.01125000/(171.2102946929–95.0)
其中,in为输入值,out为输出值。
S22、按照实施例S1中方法将HDR线性基色值转化为SLF非线性基色值。
示例性的,从PQ域的非线性基色值到SLF域的非线性基色值的转换包括S31和S32。
S31、将PQ域的HDR非线性基色值(R’,G’,B’)转换为HDR线性基色值(R,G,B);
R=10000*inversePQ_TF(R’)
G=10000*inversePQ_TF(G’)
B=10000*inversePQ_TF(B’)
其中:
Figure PCTCN2020089496-appb-000032
Figure PCTCN2020089496-appb-000033
Figure PCTCN2020089496-appb-000034
Figure PCTCN2020089496-appb-000035
Figure PCTCN2020089496-appb-000036
Figure PCTCN2020089496-appb-000037
S32、将HDR线性基色值(R,G,B)转换为SLF域HDR非线性基色值(R’,G’,B’)。
R’=SLF_TF(max(0,min(R/10000,1)))
G’=SLF_TF(max(0,min(G/10000,1)))
B’=SLF_TF(max(0,min(B/10000,1)))
其中:
Figure PCTCN2020089496-appb-000038
m=0.14
p=2.3
a=1.12762
b=-0.12762
在本申请的另一个实施例S3中,实现了HDR非线性基色值被SDR兼容显示的调整,包括:
HDR非线性基色值经过SDR显示兼容模块处理可以得到SDR非线性基色值,以确保SDR非线性基色值可以在SDR设备上正确显示。显示兼容模块包含动态范围调整、色彩调整、非线性转线性、以及ITU-R BT.1886EOTF逆转换。
具体的,SDR显示兼容的动态范围调整包括:
动态范围调整处理根据动态元数据对输入的HDR非线性信号R’、G’、B’进行动态范围调整,得到适合SDR动态范围的信号R1、G1、B1。本发明实施例根据动态元数据生成动态范围调整曲线,把HDR非线性信号中最大值作为参考值并对其调整动态范围,计算参考值调整前后的比值作为调整系数c,并把调整系数应用到HDR非线性信号。
曲线动态范围调整参数作用是调整HDR非线性信号的动态范围,HDR非线性信号包 括但不限于SLF域的HDR非线性信号以及PQ域的HDR非线性信号等。SLF域与PQ域的动态范围调整参数的具体表达形式略有不同,由于SLF域的HDR非线性信号与PQ域的HDR非线性信号间存在较好的对应关系,容易由SLF域的动态范围调整参数推导出其对应的PQ域动态范围调整参数。本发明实施例中,SLF域的动态范围调整曲线对应的公式如下,
Figure PCTCN2020089496-appb-000039
其中参数p,m用来控制曲线形状和弯曲程度,根据动态元数据生成;参数a,b用来控制曲线范围,即起点与终点的位置。参数p与图像动态元数据中的平均值y存在分段线性对应关系,其中分段的关键点对应关系如下表。
表13
平均值y 0.1 0.25 0.3 0.55 0.6
参数p 6.0 5.0 4.5 4.0 3.2
平均值y大于0.6时,p参数取3.2;平均值小于0.1时,p参数取6.0;当平均值介于表中相邻两项之间时,参数p可以通过线性插值方式获得。
例如,当平均值介于0.55与0.6之间时,可以通过如下线性插值方式获得参数p:
p=4.0+(y-0.55)/(0.6-0.55)*(3.2-4.0)
参数m为输出SDR显示设备的伽玛值,通常为2.4。
参数a、b可以通过解以下方程组计算得到:
Figure PCTCN2020089496-appb-000040
Figure PCTCN2020089496-appb-000041
其中,L1为HDR图像非线性参考最大值,L2为HDR图像非线性参考最小值,L1‵为SDR图像非线性参考最大值,L2‵为所述SDR图像非线性参考最小值。L1、L2由动态元数据中的平均值Y和标准差V计算得到。
L1与Y+V存在分段线性对应关系,其中分段的关键点对应关系如下表。
表14
平均值与标准差的和 0.2 0.5 0.7
HDR图像SLF域参考最大值 0.85 0.9 0.92
当Y+V大于0.7时,则L1取0.92;当Y+V小于0.2时,则L1取0.85;当Y+V介于表中两个相邻数据之间时,则L1可以利用线性插值的方式得到。
L2与Y-V存在分段线性对应关系,其中分段的关键点对应关系如下表。
表15
平均值与标准差之间的差 0.1 0.2 0.35
HDR图像SLF域最小值 0 0.005 0.01
如表15所示,例如:当Y-V大于0.35时,则L2取0.01;当Y-V小于0.1时,则L2取0;当Y-V介于表中两个相邻数据之间时,则L2可以利用线性插值的方式获得。
L1‵、L2‵由输出SDR设备的显示最大亮度及最小亮度,经过HDR线性转非线性变换得到。如常见的SDR显示设备的最大显示亮度为300尼特、最小显示亮度为0.1尼特, 其对应的非线性值L1‵为0.64、L2‵为0.12。
Specifically, the color adjustment for SDR display compatibility includes the following.
The color adjustment processes the dynamic-range-adjusted HDR non-linear signals R1, G1, B1 according to the dynamic metadata and the adjustment coefficient c, to obtain processed HDR non-linear signals R2, G2, B2.
The luminance value Y1 of the image is computed from the HDR non-linear signal values R1, G1, B1, with reference to the luminance computation methods of Rec.709 and Rec.2020. The color adjustment coefficient Alphy1 is computed from the dynamic range adjustment coefficient c using the power function F1(c) = c^d. The coefficient d has a piecewise-linear correspondence with the average value y in the dynamic metadata of the image, and the key points of the segments correspond as shown in the table below.
Table 16
Average value y    0.1     0.25    0.3    0.55    0.6
Coefficient d      0.15    0.18    0.2    0.22    0.25
As shown in Table 16, when the average value y is less than 0.1, the coefficient d is set to 0.15; when the average value y is greater than 0.6, the coefficient d is set to 0.25; when the average value y lies between two values in the table, the coefficient d can be obtained by linear interpolation.
The component adjustment coefficients AlphyR, AlphyG, AlphyB are obtained by applying the power function F2 to the ratios of the luminance value Y1 to the values R1, G1, B1 respectively (that is, Y1/R1, Y1/G1, Y1/B1), where the power function F2 is F2(x) = x^e. The coefficient e has a piecewise-linear correspondence with the average value y in the dynamic metadata of the image, and the key points of the segments correspond as shown in the table below:
Table 17
Average value y    0.1    0.25    0.3    0.55    0.6
Coefficient e      1.2    1.0     0.8    0.6     0.2
As shown in Table 17, when the average value y is less than 0.1, the coefficient e may be set to 1.2; when the average value y is greater than 0.6, the coefficient e may be set to 0.2; when the average value y lies between two adjacent values in the table, the coefficient e can be obtained by linear interpolation.
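For illustration only, the computation of the coefficients Alphy1, AlphyR, AlphyG and AlphyB described above can be sketched in Python as follows. The Rec.709 luma weights are used as one of the two options mentioned, d and e are assumed to have already been obtained from Tables 16 and 17 by the same piecewise-linear lookup as parameter p, the guards against division by zero are assumptions made here, and the way these coefficients are finally combined into R2, G2, B2 is not reproduced here.

# Illustrative computation of the color adjustment coefficients.
def color_adjust_coefficients(r1, g1, b1, c, d, e):
    y1 = 0.2126 * r1 + 0.7152 * g1 + 0.0722 * b1       # Rec.709 luminance of R1, G1, B1
    alphy1 = c ** d                                     # F1(c) = c^d
    alphy_r = (y1 / r1) ** e if r1 > 0.0 else 1.0       # F2(x) = x^e applied to Y1/R1
    alphy_g = (y1 / g1) ** e if g1 > 0.0 else 1.0       # F2 applied to Y1/G1
    alphy_b = (y1 / b1) ** e if b1 > 0.0 else 1.0       # F2 applied to Y1/B1
    return alphy1, alphy_r, alphy_g, alphy_b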
In another embodiment S4 of the present application, the adjustment of HDR non-linear primary color values for HDR-compatible display is implemented, which includes the following.
The HDR non-linear signals R', G', B' are processed by display adaptation adjustment to obtain HDR non-linear signals R'', G'', B'', so that the HDR non-linear signals can be displayed correctly on different HDR devices. The HDR display compatibility adjustment module includes dynamic range adjustment and color adjustment.
The dynamic range adjustment processing can be implemented on the basis of the method described in embodiment S3 with the following changes: L1' and L2' are obtained from the maximum and minimum display luminance of the output HDR device through the HDR linear-to-non-linear transform; the coefficients p and m are both obtained from the dynamic metadata of the image through the lookup-table method of embodiment S3, and the table entries need to be calibrated experimentally for different HDR display devices.
The color range adjustment processing can be implemented on the basis of the method described in embodiment S3 with the following change: the coefficients d and e are both obtained from the dynamic metadata of the image through lookup tables, and the table entries need to be calibrated experimentally for different HDR display devices.
It should be noted that the examples in the various application scenarios of this application only show some possible implementations and are intended to provide a better understanding and illustration of the method of this application. Based on the image color processing method provided in this application, those skilled in the art can derive examples in further evolved forms.
To implement the functions in the methods provided by the above embodiments of the present application, a terminal device may include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a particular one of the above functions is performed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and the design constraints of the technical solution.
Based on the same technical concept, as shown in FIG. 10, an embodiment of the present application further provides an image processing apparatus 1000. The image processing apparatus 1000 may be a mobile terminal or any device with an image processing function. In one design, the image processing apparatus 1000 may include modules corresponding one-to-one to the methods/operations/steps/actions in the above method embodiments; such a module may be a hardware circuit, software, or a hardware circuit combined with software. In one design, the image processing apparatus 1000 may include a determining module 1001 and a processing module 1002. A hardware circuit is also referred to as hardware or a c-pipeline (cpipe).
The determining module 1001 is configured to determine the maximum among the primary color values of the multiple components of a pixel of the image to be processed, and to determine, according to a first lookup table, the ratio that has a mapping relationship with the maximum, where the first lookup table includes a mapping relationship between preset ratios and preset primary color values. The processing module 1002 is configured to perform dynamic range adjustment on the primary color values of the multiple components of the pixel respectively according to the ratio that has a mapping relationship with the maximum, to obtain a target image. The determining module 1001 is further configured to determine the mapping relationship through the following steps: obtaining a converted value of the preset primary color value according to a first conversion function, and using the ratio of the converted value to the preset primary color value as the preset ratio. In this case, the determining module 1001 and the processing module 1002 may be hardware circuits.
Optionally, when determining, according to the first lookup table, the ratio that has a mapping relationship with the maximum, the determining module 1001 is specifically configured to: when the preset primary color values include the maximum, determine the first ratio corresponding to the maximum according to the mapping relationship; when the preset primary color values do not include the maximum, determine a first preset primary color value and a second preset primary color value in the first lookup table, determine, according to the mapping relationship, a first ratio and a second ratio corresponding to the first preset primary color value and the second preset primary color value respectively, and perform an interpolation operation on the first ratio and the second ratio to obtain the ratio corresponding to the maximum. Here, the determining module 1001 may be a hardware circuit.
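For illustration only, the lookup and interpolation behaviour described above can be sketched in Python as follows; the clamping of the interpolation index and the use of linear interpolation are choices made for this sketch, and the names are chosen here.

# Illustrative lookup of the ratio mapped to the maximum component value, with linear interpolation.
import bisect

def adjust_pixel(components, preset_values, preset_ratios):
    # preset_values is sorted in ascending order; preset_ratios[i] is mapped to preset_values[i]
    max_v = max(components)
    if max_v in preset_values:
        ratio = preset_ratios[preset_values.index(max_v)]
    else:
        i = bisect.bisect_left(preset_values, max_v)
        i = max(1, min(i, len(preset_values) - 1))      # clamp so two neighbouring entries exist
        v0, v1 = preset_values[i - 1], preset_values[i]
        r0, r1 = preset_ratios[i - 1], preset_ratios[i]
        # linear interpolation; other types (nearest, cubic, Lanczos, ...) are equally possible
        ratio = r0 + (max_v - v0) / (v1 - v0) * (r1 - r0)
    return [x * ratio for x in components]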
Optionally, the first lookup value is a fixed-point value; the determining module 1001 is further configured to determine the first table entry value in the following way: determine the dynamic parameters of the first conversion function; dequantize the fixed-point value according to the value range determined by the bit width of the primary color values, to obtain a floating-point value; convert the floating-point value into a converted value based on the first conversion function with the dynamic parameters determined; and quantize the ratio of the converted value to the floating-point value according to a preset quantization coefficient, to obtain the first table entry value. Here, the determining module 1001 may be software.
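For illustration only, the construction of the first table entry values from fixed-point lookup values can be sketched in Python as follows; conversion_fn stands for the first conversion function with its dynamic parameters already determined, and the handling of a zero input and the rounding of the quantized ratio are assumptions made here.

# Illustrative construction of the first lookup table's entry values.
def build_lut_entries(lookup_values, bit_width, quant_coeff, conversion_fn):
    max_code = (1 << bit_width) - 1                      # value range given by the bit width
    entries = []
    for v in lookup_values:                              # v is a fixed-point lookup value
        x = v / max_code                                 # dequantization to a floating-point value
        converted = conversion_fn(x)                     # first conversion function
        ratio = converted / x if x > 0.0 else 1.0        # ratio of converted value to input (x == 0 handled by assumption)
        entries.append(int(round(ratio * quant_coeff)))  # quantized first table entry value
    return entries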
Optionally, the determining module 1001 is further configured to determine the first lookup table corresponding to the first value range in which the maximum lies, where the value range determined by the bit width of the primary color values includes the first value range and a second value range corresponding to a second lookup table. Here, the determining module 1001 may be a hardware circuit.
The determining module 1001 and the processing module 1002 may also be configured to perform other corresponding steps or operations of the above method embodiments, which are not described one by one here.
The division of modules in the embodiments of the present application is schematic and is merely a division of logical functions; there may be other division manners in actual implementation. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may exist separately physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Based on the same technical concept, as shown in FIG. 11, an embodiment of the present application further provides an image processing apparatus 1100. The image processing apparatus 1100 includes a processor 1101. The processor 1101 is configured to call a set of programs so that the above method embodiments are executed. The image processing apparatus 1100 further includes a memory 1102, and the memory 1102 is configured to store program instructions and/or data executed by the processor 1101. The memory 1102 is coupled to the processor 1101. The coupling in the embodiments of the present application is an indirect coupling or communication connection between apparatuses, units, or modules, which may be electrical, mechanical, or in another form, and is used for information exchange between apparatuses, units, or modules. The processor 1101 may operate in cooperation with the memory 1102. The processor 1101 may execute the program instructions stored in the memory 1102. The memory 1102 may be included in the processor 1101.
The image processing apparatus 1100 may be a chip system. In the embodiments of the present application, a chip system may consist of a chip, or may include a chip and other discrete components. For example, the chip system is an application-specific integrated circuit (ASIC) chip, the hardware part of the image processing apparatus 1100 is a C model (cmodel) that emulates the ASIC chip, and the cmodel can be bit-exact with the ASIC chip.
The processor 1101 is configured to: determine the maximum among the primary color values of the multiple components of a pixel of the image to be processed; and determine, according to a first lookup table, the ratio that has a mapping relationship with the maximum, where the first lookup table includes a mapping relationship between preset ratios and preset primary color values. The processor 1101 is further configured to perform dynamic range adjustment on the primary color values of the multiple components of the pixel respectively according to the ratio that has a mapping relationship with the maximum, to obtain a target image. The processor 1101 is further configured to determine the mapping relationship through the following steps: obtaining a converted value of the preset primary color value according to a first conversion function, and using the ratio of the converted value to the preset primary color value as the preset ratio.
The processor 1101 may also be configured to perform other corresponding steps or operations of the above method embodiments, which are not described one by one here.
The processor 1101 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
The memory 1102 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other apparatus capable of implementing a storage function, and is configured to store program instructions and/or data.
Some or all of the operations and functions described in the above method embodiments of the present application may be implemented with a chip or an integrated circuit.
An embodiment of the present application further provides a chip, including a processor, configured to support the image processing apparatus in implementing the functions involved in the above method embodiments. In one possible design, the chip is connected to a memory, or the chip includes a memory, and the memory is used to store the program instructions and data necessary for the communication apparatus.
An embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program includes instructions for executing the above method embodiments.
An embodiment of the present application provides a computer program product containing instructions, which, when run on a computer, causes the computer to execute the above method embodiments.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and thus the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.
Obviously, those skilled in the art can make various changes and variations to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if these modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalent technologies, the present application is also intended to include these changes and variations.

Claims (52)

  1. An image processing method, comprising:
    determining a maximum among primary color values of a plurality of components of a pixel of an image to be processed;
    determining, according to a first lookup table, a ratio having a mapping relationship with the maximum, wherein the first lookup table comprises a mapping relationship between preset ratios and preset primary color values;
    performing dynamic range adjustment on the primary color values of the plurality of components of the pixel respectively according to the determined ratio, to obtain a target image;
    wherein determining the mapping relationship comprises:
    obtaining a converted value of the preset primary color value according to a first conversion function;
    using a ratio of the converted value to the preset primary color value as the preset ratio.
  2. The method according to claim 1, wherein the determining, according to a first lookup table, a ratio having a mapping relationship with the maximum comprises:
    when the preset primary color values comprise the maximum: determining, according to the mapping relationship, the first ratio corresponding to the maximum;
    when the preset primary color values do not comprise the maximum:
    determining a first preset primary color value and a second preset primary color value in the first lookup table;
    determining, according to the mapping relationship, a first ratio and a second ratio corresponding to the first preset primary color value and the second preset primary color value respectively;
    performing an interpolation operation on the first ratio and the second ratio to obtain the ratio corresponding to the maximum.
  3. The method according to claim 2, wherein the interpolation operation comprises any one of the following types of operation: linear interpolation, nearest-neighbor interpolation, bilinear quadratic interpolation, cubic interpolation, or Lanczos interpolation.
  4. The method according to any one of claims 1 to 3, wherein the first table entry value of the first lookup table is: a ratio of a converted value, obtained through the first conversion function, of a first lookup value of the first lookup table to the first lookup value.
  5. The method according to claim 4, wherein the first lookup value is a fixed-point value;
    the determination of the first table entry value comprises:
    determining dynamic parameters of the first conversion function;
    dequantizing the fixed-point value according to a value range determined by a bit width of the primary color values, to obtain a floating-point value;
    converting the floating-point value into a converted value based on the first conversion function having the determined dynamic parameters;
    quantizing a ratio of the converted value to the floating-point value according to a preset quantization coefficient, to obtain the first table entry value.
  6. The method according to claim 4 or 5, wherein the first lookup value is determined based on an index value of the first lookup table and a step size between index values of the first lookup table.
  7. The method according to any one of claims 1 to 6, further comprising: determining the first lookup table corresponding to a first value range in which the maximum lies;
    wherein the value range determined by the bit width of the primary color values comprises the first value range and a second value range corresponding to a second lookup table.
  8. The method according to claim 7, wherein the second table entry value of the second lookup table is: a ratio of a converted value, obtained through the first conversion function, of a second lookup value of the second lookup table to the second lookup value.
  9. The method according to claim 7 or 8, wherein a minimum of the first value range is greater than a maximum of the second value range;
    correspondingly, the first lookup value is determined based on the index value of the first lookup table, the step size between index values of the first lookup table, and the maximum of the second value range.
  10. The method according to any one of claims 7 to 9, wherein the step size between index values of the first lookup table is different from the step size between index values of the second lookup table.
  11. The method according to any one of claims 1 to 10, wherein performing dynamic range adjustment on the primary color values of the plurality of components of the pixel respectively according to the first ratio comprises:
    when the dynamic range of the image to be processed is greater than the dynamic range of the target image, adjusting the primary color values of the plurality of components of the pixel to reduce the dynamic range according to the first ratio; or,
    when the dynamic range of the image to be processed is smaller than the dynamic range of the target image, adjusting the primary color values of the plurality of components of the pixel to expand the dynamic range according to the first ratio.
  12. The method according to any one of claims 1 to 11, wherein performing dynamic range adjustment on the primary color values of the plurality of components of the pixel respectively according to the first ratio comprises:
    computing products of the first ratio and the primary color values of the plurality of components of the pixel respectively, to obtain adjusted primary color values of the plurality of components of the pixel.
  13. The method according to any one of claims 1 to 12, wherein the image to be processed is located in a sequence of images to be processed, the target image is located in a target image sequence, and the determination of the dynamic parameters of the first conversion function comprises:
    determining the dynamic parameters according to at least one of the following pieces of information:
    statistical information of the image to be processed or of the sequence of images to be processed; a first range reference value of the image to be processed or of the sequence of images to be processed; a second range reference value of the image to be processed or of the sequence of images to be processed; a first range reference value of the target image or of the target image sequence; a second range reference value of the target image or of the target image sequence.
  14. The method according to claim 13, wherein the statistical information of the image to be processed or of the sequence of images to be processed comprises at least one of the following:
    a maximum, a minimum, an average, a standard deviation, and histogram distribution information of primary color values of at least one component of pixels of the image to be processed or of the sequence of images to be processed.
  15. The method according to claim 13 or 14, wherein the first range reference value of the image to be processed or of the sequence of images to be processed comprises:
    a maximum luminance of a display device used to display the image to be processed; or,
    a value obtained by looking up a first preset list according to the statistical information of the image to be processed or of the sequence of images to be processed; or,
    a first preset value.
  16. The method according to claim 13 or 14, wherein the second range reference value of the image to be processed or of the sequence of images to be processed comprises:
    a minimum luminance of a display device used to display the image to be processed; or,
    a value obtained by looking up a second preset list according to the statistical information of the image to be processed or of the sequence of images to be processed; or,
    a second preset value.
  17. The method according to claim 13 or 14, wherein the first range reference value of the target image or of the target image sequence comprises:
    a maximum luminance of a display device used to display the target image; or,
    a third preset value.
  18. The method according to claim 13 or 14, wherein the second range reference value of the target image or of the target image sequence comprises:
    a minimum luminance of a display device used to display the target image; or,
    a fourth preset value.
  19. The method according to any one of claims 1 to 18, wherein the first conversion function comprises an S-shaped conversion curve or an inverse S-shaped conversion curve.
  20. The method according to claim 19, wherein the S-shaped conversion curve is a curve whose slope first increases and then decreases.
  21. The method according to claim 19 or 20, wherein the S-shaped conversion curve conforms to the following formula:
    L' = ((p*L) / ((p-1)*L + 1))^m * a + b
    wherein L is the maximum, L' is the converted value, and a, b, p, and m are the dynamic parameters of the S-shaped conversion curve.
  22. The method according to claim 21, wherein
    p and m are obtained by looking up the first preset list according to the statistical information of the image to be processed or of the image sequence in which the image to be processed is located;
    a and b are obtained by calculation from the following formulas:
    L1' = ((p*L1) / ((p-1)*L1 + 1))^m * a + b
    L2' = ((p*L2) / ((p-1)*L2 + 1))^m * a + b
    wherein L1 is the first reference value of the range of the image to be processed or of the image sequence in which the image to be processed is located, L2 is the second reference value of the range of the image to be processed or of the image sequence in which the image to be processed is located, L1' is the first reference value of the range of the target image or of the target image sequence, and L2' is the second reference value of the range of the target image or of the target image sequence.
  23. The method according to claim 19, wherein the inverse S-shaped conversion curve is a curve whose slope first decreases and then increases.
  24. The method according to claim 19 or 23, wherein the inverse S-shaped conversion curve has the following form:
    Figure PCTCN2020089496-appb-100004
    wherein L is the maximum among the primary color values of the plurality of components of a pixel of the target image, L' is the converted value of the maximum among the primary color values of the plurality of components of the pixel of the target image, and the parameters a, b, p, and m are the dynamic parameters of the inverse S-shaped conversion curve.
  25. The method according to claim 24, wherein
    the parameters p and m are obtained by looking up the second preset list;
    the parameters a and b are calculated by the following formulas:
    Figure PCTCN2020089496-appb-100005
    Figure PCTCN2020089496-appb-100006
    wherein L1 is the first range reference value of the image to be processed or of the image sequence in which the image to be processed is located, L2 is the second range reference value of the image to be processed or of the image sequence in which the image to be processed is located, L1' is the first range reference value of the target image or of the target image sequence, and L2' is the second range reference value of the target image or of the target image sequence.
  26. An image color processing apparatus, comprising:
    a determining module, configured to determine a maximum among primary color values of a plurality of components of a pixel of an image to be processed, and to determine, according to a first lookup table, a ratio having a mapping relationship with the maximum, wherein the first lookup table comprises a mapping relationship between preset ratios and preset primary color values;
    a processing module, configured to perform dynamic range adjustment on the primary color values of the plurality of components of the pixel respectively according to the determined ratio, to obtain a target image; wherein the processing module determines the mapping relationship through the following operations: obtaining a converted value of the preset primary color value according to a first conversion function; using a ratio of the converted value to the preset primary color value as the preset ratio.
  27. The apparatus according to claim 26, wherein the determining module is configured to:
    when the preset primary color values comprise the maximum: determine, according to the mapping relationship, the first ratio corresponding to the maximum;
    when the preset primary color values do not comprise the maximum:
    determine a first preset primary color value and a second preset primary color value in the first lookup table;
    determine, according to the mapping relationship, a first ratio and a second ratio corresponding to the first preset primary color value and the second preset primary color value respectively;
    perform an interpolation operation on the first ratio and the second ratio to obtain the ratio corresponding to the maximum.
  28. The apparatus according to claim 27, wherein the interpolation operation comprises any one of the following types of operation: linear interpolation, nearest-neighbor interpolation, bilinear quadratic interpolation, cubic interpolation, or Lanczos interpolation.
  29. The apparatus according to any one of claims 26 to 28, wherein the first table entry value of the first lookup table is: a ratio of a converted value, obtained through the first conversion function, of a first lookup value of the first lookup table to the first lookup value.
  30. The apparatus according to claim 29, wherein the first lookup value is a fixed-point value;
    the determining module is configured to perform the following operations to determine the first table entry value:
    determining dynamic parameters of the first conversion function;
    dequantizing the fixed-point value according to a value range determined by a bit width of the primary color values, to obtain a floating-point value;
    converting the floating-point value into a converted value based on the first conversion function having the determined dynamic parameters;
    quantizing a ratio of the converted value to the floating-point value according to a preset quantization coefficient, to obtain the first table entry value.
  31. The apparatus according to claim 29 or 30, wherein the first lookup value is determined based on an index value of the first lookup table and a step size between index values of the first lookup table.
  32. The apparatus according to any one of claims 26 to 31, wherein the determining module is further configured to determine the first lookup table corresponding to a first value range in which the maximum lies;
    wherein the value range determined by the bit width of the primary color values comprises the first value range and a second value range corresponding to a second lookup table.
  33. The apparatus according to claim 32, wherein the second table entry value of the second lookup table is: a ratio of a converted value, obtained through the first conversion function, of a second lookup value of the second lookup table to the second lookup value.
  34. The apparatus according to claim 32 or 33, wherein a minimum of the first value range is greater than a maximum of the second value range;
    correspondingly, the first lookup value is determined based on the index value of the first lookup table, the step size between index values of the first lookup table, and the maximum of the second value range.
  35. The apparatus according to any one of claims 32 to 34, wherein the step size between index values of the first lookup table is different from the step size between index values of the second lookup table.
  36. The apparatus according to any one of claims 26 to 35, wherein the processing module is configured to:
    when the dynamic range of the image to be processed is greater than the dynamic range of the target image, adjust the primary color values of the plurality of components of the pixel to reduce the dynamic range according to the first ratio; or,
    when the dynamic range of the image to be processed is smaller than the dynamic range of the target image, adjust the primary color values of the plurality of components of the pixel to expand the dynamic range according to the first ratio.
  37. The apparatus according to any one of claims 26 to 36, wherein the processing module is configured to:
    compute products of the first ratio and the primary color values of the plurality of components of the pixel respectively, to obtain adjusted primary color values of the plurality of components of the pixel.
  38. The apparatus according to any one of claims 26 to 37, wherein the image to be processed is located in a sequence of images to be processed, the target image is located in a target image sequence, and the processing module is configured to perform the following operation to determine the dynamic parameters of the first conversion function:
    determining the dynamic parameters according to at least one of the following pieces of information:
    statistical information of the image to be processed or of the sequence of images to be processed; a first range reference value of the image to be processed or of the sequence of images to be processed; a second range reference value of the image to be processed or of the sequence of images to be processed; a first range reference value of the target image or of the target image sequence; a second range reference value of the target image or of the target image sequence.
  39. The apparatus according to claim 38, wherein the statistical information of the image to be processed or of the sequence of images to be processed comprises at least one of the following:
    a maximum, a minimum, an average, a standard deviation, and histogram distribution information of primary color values of at least one component of pixels of the image to be processed or of the sequence of images to be processed.
  40. The apparatus according to claim 38 or 39, wherein the first range reference value of the image to be processed or of the sequence of images to be processed comprises:
    a maximum luminance of a display device used to display the image to be processed; or,
    a value obtained by looking up a first preset list according to the statistical information of the image to be processed or of the sequence of images to be processed; or,
    a first preset value.
  41. The apparatus according to claim 38 or 39, wherein the second range reference value of the image to be processed or of the sequence of images to be processed comprises:
    a minimum luminance of a display device used to display the image to be processed; or,
    a value obtained by looking up a second preset list according to the statistical information of the image to be processed or of the sequence of images to be processed; or,
    a second preset value.
  42. The apparatus according to claim 38 or 39, wherein the first range reference value of the target image or of the target image sequence comprises:
    a maximum luminance of a display device used to display the target image; or,
    a third preset value.
  43. The apparatus according to claim 38 or 39, wherein the second range reference value of the target image or of the target image sequence comprises:
    a minimum luminance of a display device used to display the target image; or,
    a fourth preset value.
  44. The apparatus according to any one of claims 26 to 43, wherein the first conversion function comprises an S-shaped conversion curve or an inverse S-shaped conversion curve.
  45. The apparatus according to claim 44, wherein the S-shaped conversion curve is a curve whose slope first increases and then decreases.
  46. The apparatus according to claim 44 or 45, wherein the S-shaped conversion curve conforms to the following formula:
    L' = ((p*L) / ((p-1)*L + 1))^m * a + b
    wherein L is the maximum, L' is the converted value, and a, b, p, and m are the dynamic parameters of the S-shaped conversion curve.
  47. The apparatus according to claim 46, wherein
    p and m are obtained by looking up the first preset list according to the statistical information of the image to be processed or of the image sequence in which the image to be processed is located;
    a and b are obtained by calculation from the following formulas:
    L1' = ((p*L1) / ((p-1)*L1 + 1))^m * a + b
    L2' = ((p*L2) / ((p-1)*L2 + 1))^m * a + b
    wherein L1 is the first reference value of the range of the image to be processed or of the image sequence in which the image to be processed is located, L2 is the second reference value of the range of the image to be processed or of the image sequence in which the image to be processed is located, L1' is the first reference value of the range of the target image or of the target image sequence, and L2' is the second reference value of the range of the target image or of the target image sequence.
  48. The apparatus according to claim 44, wherein the inverse S-shaped conversion curve is a curve whose slope first decreases and then increases.
  49. The apparatus according to claim 44 or 48, wherein the inverse S-shaped conversion curve has the following form:
    Figure PCTCN2020089496-appb-100010
    wherein L is the maximum among the primary color values of the plurality of components of a pixel of the target image, L' is the converted value of the maximum among the primary color values of the plurality of components of the pixel of the target image, and the parameters a, b, p, and m are the dynamic parameters of the inverse S-shaped conversion curve.
  50. The apparatus according to claim 49, wherein
    the parameters p and m are obtained by looking up the second preset list;
    the parameters a and b are calculated by the following formulas:
    Figure PCTCN2020089496-appb-100011
    Figure PCTCN2020089496-appb-100012
    wherein L1 is the first range reference value of the image to be processed or of the image sequence in which the image to be processed is located, L2 is the second range reference value of the image to be processed or of the image sequence in which the image to be processed is located, L1' is the first range reference value of the target image or of the target image sequence, and L2' is the second range reference value of the target image or of the target image sequence.
  51. An image color processing apparatus, comprising: a processor, the processor being coupled to a memory, the memory being configured to store a program or instructions which, when executed by the processor, cause the method according to any one of claims 1 to 25 to be performed.
  52. A computer-readable storage medium, wherein the computer storage medium stores computer-readable instructions, and when the computer-readable instructions are run on a communication apparatus, the method according to any one of claims 1 to 25 is performed.
PCT/CN2020/089496 2020-05-09 2020-05-09 Image processing method and apparatus WO2021226769A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/089496 WO2021226769A1 (zh) 2020-05-09 2020-05-09 Image processing method and apparatus
CN202080099931.5A CN115428007A (zh) 2020-05-09 2020-05-09 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/089496 WO2021226769A1 (zh) 2020-05-09 2020-05-09 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2021226769A1 true WO2021226769A1 (zh) 2021-11-18

Family

ID=78526077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/089496 WO2021226769A1 (zh) 2020-05-09 2020-05-09 Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN115428007A (zh)
WO (1) WO2021226769A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018035696A1 (zh) * 2016-08-22 2018-03-01 Huawei Technologies Co., Ltd. Image processing method and apparatus
US20180075588A1 (en) * 2016-09-09 2018-03-15 Kabushiki Kaisha Toshiba Image processing device and image processing method
CN108694030A (zh) * 2017-04-11 2018-10-23 Huawei Technologies Co., Ltd. Method and apparatus for processing a high dynamic range image
CN110728633A (zh) * 2019-09-06 2020-01-24 Shanghai Jiao Tong University Method and apparatus for constructing a multi-exposure high dynamic range inverse tone mapping model
CN110852956A (zh) * 2019-07-22 2020-02-28 江苏宇特光电科技股份有限公司 Enhancement method for a high dynamic range image

Also Published As

Publication number Publication date
CN115428007A (zh) 2022-12-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20934985

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20934985

Country of ref document: EP

Kind code of ref document: A1