CN111435990A - Color space encoding method and apparatus - Google Patents

Color space encoding method and apparatus

Info

Publication number
CN111435990A
Authority
CN
China
Prior art keywords
color
image
linear
quantization
components
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910038935.1A
Other languages
Chinese (zh)
Other versions
CN111435990B (en)
Inventor
Huameng Fang
Peiyun Di
Rafal Mantiuk
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
University of Cambridge
Original Assignee
Huawei Technologies Co Ltd
University of Cambridge
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd, University of Cambridge filed Critical Huawei Technologies Co Ltd
Priority to CN201910038935.1A priority Critical patent/CN111435990B/en
Publication of CN111435990A publication Critical patent/CN111435990A/en
Application granted granted Critical
Publication of CN111435990B publication Critical patent/CN111435990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/64Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor
    • H04N1/648Transmitting or storing the primary (additive or subtractive) colour signals; Compression thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a color space encoding method and device. The method includes: acquiring linear colors of a plurality of color components of a first image in a first color space; converting the linear color of each color component into a nonlinear color according to a color mapping function; and quantizing the nonlinear color of each color component according to the quantization bit number corresponding to that component, thereby obtaining a quantized image, where the quantization bit number corresponding to at least one color component differs from the quantization bit numbers corresponding to the other color components. The method and device help to effectively reduce the total quantization bit number and save storage space and transmission bandwidth without introducing quantization stripes.

Description

Color space encoding method and apparatus
Technical Field
The present invention relates to the field of color coding, and in particular, to a color space coding method and apparatus.
Background
Generally, a camera captures an optical signal from the external environment through a lens, and an image sensor converts the optical signal into an electrical signal that is linear in light intensity. The camera captures image data in RAW format (RAW image data for short) through the image sensor (Sensor); the RAW image data contains only one gray value per pixel, that gray value being one of the red/green/blue components. The RAW image data is processed by the demosaic module of an image signal processor (ISP), which interpolates the two missing components for each pixel, generating image data in native RGB format (native RGB image data for short). The native RGB image data is then transformed, via a linear mapping, to a gamut space (e.g., BT.709, P3, or BT.2020), yielding new RGB image data (linear color values).
To reduce the amount of data processed by the ISP, the image/video transmission bandwidth, and the storage space, the OETF module used for quantization coding in the ISP exploits the human eye's nonlinear sensitivity to light intensity to nonlinearly compress and encode the luminance of the captured image data. For example, the OETF function in the ITU-R BT.709 standard applies Gamma correction to the linear color values, converting them into nonlinear color values, and the three RGB primary components are then quantization-coded with the same bit width. Assuming n quantization bits are used for quantization coding, and taking full range as an example, the code values of the three color components R, G, B are: v_{R/G/B} = V_{R/G/B} * (2^n - 1), where V_{R/G/B} is the normalized nonlinear color value of the R, G, or B linear color component, and v_{R/G/B} is the corresponding quantized output codeword (i.e., the quantized nonlinear color value). The quantized nonlinear color values may then be further video-encoded for output, or output directly.
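As an illustration, the BT.709-style Gamma correction and full-range quantization described above can be sketched as follows (a minimal sketch; a real ISP pipeline would add clipping, narrow-range offsets, and so on):

```python
import numpy as np

def bt709_oetf(L):
    """ITU-R BT.709 OETF for normalized linear light L in [0, 1]."""
    L = np.asarray(L, dtype=np.float64)
    return np.where(L < 0.018, 4.5 * L, 1.099 * np.power(L, 0.45) - 0.099)

def quantize_full_range(V, n_bits):
    """Full-range quantization: codeword v = round(V * (2^n - 1))."""
    return np.round(np.asarray(V) * (2 ** n_bits - 1)).astype(np.int64)

linear_rgb = np.array([0.0, 0.18, 1.0])    # normalized linear color values
nonlinear = bt709_oetf(linear_rgb)         # gamma-corrected nonlinear values
codes = quantize_full_range(nonlinear, 10) # 10-bit full-range codewords
```

With n = 10 the codewords span 0 to 1023; the same bit width is applied to all three components here, which is exactly the uniform allocation the embodiments below improve on.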
In the above color coding, the quantization coding performed by the OETF module maps many color values onto a single code value, so information is lost. If the quantization coding is inappropriate, information visible to the human eye is lost and quantization stripes (banding) appear on the image. For example, FIG. 1 shows a scene in which, after gamma correction and quantization coding of the linear color values, stripes visible to the human eye appear, seriously degrading the quality of the image. Avoiding the stripes requires increasing the quantization bit number n, which increases the data volume and wastes transmission bandwidth.
Disclosure of Invention
The embodiments of the present invention provide a color space coding method and device, which can effectively reduce the total quantization bit number and save storage space and transmission bandwidth without introducing quantization stripes.
In a first aspect, an embodiment of the present invention provides a color space encoding method, where the method includes: acquiring linear colors of a plurality of color components of a first image in a first color space; converting the linear color of each color component into a nonlinear color according to a color mapping function; quantizing the nonlinear colors of the color components according to the quantization bit number corresponding to each color component, so as to obtain a quantized image; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
It can be seen that, by implementing the embodiment of the present invention, different quantization bit numbers can be reasonably allocated to different color components without introducing quantization stripes in the color coding process, that is, a suitable quantization bit number can be designed for different color components, thereby avoiding the waste of quantization bit numbers caused by using a uniform quantization bit number for each color component in the prior art, effectively reducing the amount of transmission data, and saving the storage space and the transmission bandwidth.
Based on the first aspect, in a possible implementation manner, the quantization bit number corresponding to each color component is determined according to a display characteristic of a display device, where the display characteristic includes at least one of the maximum brightness, the minimum brightness, and the dispersion range of the display device. The occurrence of quantization stripes is related to the display characteristics of the display device; designing the quantization bit numbers of the different color components according to those characteristics therefore helps avoid quantization stripes after each color component is quantized.
For example, for linear colors in CIE1931RGB color space, the quantization bit number of each color component can be designed according to the display characteristics of a certain display device, as shown in the following formula:
n_r = ceil(n + log2(1.000)), n_g = ceil(n + log2(0.9452)), n_b = ceil(n + log2(0.3116))
where n_r, n_g, n_b denote the quantization bit numbers used for the three primary components R (red), G (green), B (blue) respectively; ceil(·) denotes the round-up operation; and n denotes the maximum quantization bit number among the color components.
The parameters 1.000, 0.9452, and 0.3116 in the above formula are obtained from the display characteristics of the display device. For example, with a maximum quantization bit number n = 10 bits, the formula yields 10 bits for the R component, 10 bits for the G component, and 9 bits for the B component. The quantization bit numbers designed in this way effectively reduce the amount of transmitted data and save storage space and transmission bandwidth, while helping to avoid quantization stripes after quantization and improving image quality.
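The worked example (n = 10 giving 10/10/9 bits) together with the quoted parameters 1.000, 0.9452, 0.3116 is consistent with a logarithmic allocation rule; the sketch below assumes that form, since the original equation is given only as an image:

```python
import math

def component_bits(n_max, weights):
    """Per-component bit allocation, n_i = ceil(n_max + log2(w_i)).
    The log2 form is an assumption consistent with the worked example."""
    return {c: math.ceil(n_max + math.log2(w)) for c, w in weights.items()}

# Parameters quoted in the text, derived from the display's characteristics
weights = {"R": 1.000, "G": 0.9452, "B": 0.3116}
bits = component_bits(10, weights)
```

With n_max = 10 this reproduces the 10/10/9 allocation stated in the text, saving one bit on the B component.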
Based on the first aspect, in a possible implementation manner, the converting the linear color of each color component into a non-linear color according to a color mapping function includes: and converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to other color components in the plurality of color components.
For example, for linear colors in the CIE 1931 RGB color space, the color mapping function of each color component can be designed according to the display characteristics of a given display device, as shown in the following formula:
[Equation image not reproduced: definitions of the color mapping functions f(L_r), f(L_g), f(L_b)]
where f(L_r), f(L_g), f(L_b) denote the color mapping functions corresponding to the R, G, B components of RGB respectively, and L_r, L_g, L_b denote the linear color values of the R, G, B components; parameters such as 0.09754, 0.09296, and 0.09648 may be determined according to the display characteristics of the display device.
That is, in the color coding process, in addition to designing different quantization bit numbers for different color components according to the display characteristics of the display device, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of the display device, so as to reasonably convert the linear colors of the respective color components into nonlinear colors. The two aspects are complementary, and by combining the two aspects, the occurrence of quantization stripes can be effectively avoided, the storage space and the transmission bandwidth are saved, and the technical effect is improved.
Based on the first aspect, in a possible implementation manner, the color mapping function corresponding to any color component of the plurality of color components is further obtained according to the value of the linear color of that color component and the variation threshold of that value. The variation threshold of the linear color value of a color component indicates the error caused by quantizing that value with its quantization bit number threshold, where the quantization bit number threshold is the minimum quantization bit number for that component such that the human eye cannot perceive quantization stripes in the image displayed on a display device.
For example, in some application scenarios, the relationship between the variation threshold of any of the color components and the color mapping function f (-) is as follows:
[Equation image not reproduced: relationship between the variation threshold ΔL_i and the color mapping function f(·)]
where ΔL_i denotes the variation threshold of color component i.
It can be seen that, by applying the embodiment of the present invention, a more reasonable color mapping function of each color component can be obtained, the color mapping function of each color component is used to generate a nonlinear color value, and then quantization processing is performed according to the quantization level of each color component designed by the embodiment of the present invention, so that it can be ensured that no quantization stripe visible to human eyes appears in an image, a storage space and a transmission bandwidth are saved, and a technical effect is improved.
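A common way to turn a threshold model ΔL(L) into a mapping function — assumed here for illustration, since the patent's exact relation is given only as an equation image — is to integrate 1/ΔL, so that one code step of f spans roughly one visibility threshold:

```python
import numpy as np

def mapping_from_threshold(delta_L, L_min=0.0, L_max=1.0, samples=4096):
    """Build a nonlinear mapping f from a variation-threshold model ΔL(L)
    by numerically integrating 1/ΔL (trapezoid rule) and normalizing."""
    L = np.linspace(L_min, L_max, samples)
    density = 1.0 / delta_L(L)   # codewords needed per unit of linear color
    steps = 0.5 * (density[1:] + density[:-1]) * np.diff(L)
    f = np.concatenate(([0.0], np.cumsum(steps)))
    return L, f / f[-1]          # normalize f to [0, 1]

# Hypothetical Weber-like threshold model: ΔL grows linearly with L
L, f = mapping_from_threshold(lambda L: 0.01 * (L + 0.05))
```

The resulting f is monotonically increasing and compresses regions where ΔL is large (coarser visual sensitivity), which is the qualitative behavior the text describes.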
Based on the first aspect, in a possible implementation, the variation threshold ΔL_i may be obtained by the following process.
First, the color gamut of the display device, i.e., the gamut the device can represent (for example, in CIE 1931 XYZ), can be measured with a color analyzer, color corrector, or similar instrument. The CIE 1931 XYZ gamut of the display device is then converted, via a linear color space conversion, into the first color space in which the quantization is performed (for example, the CIE 1931 RGB color space). For each color component of the linear color in the first color space (for example, the R, G, B components of RGB), the other color components are held fixed while a plurality of different values L_i are selected on that component. For each L_i, a second image whose component intensity varies linearly and smoothly, with average luminance L_i, is constructed, and the quantization bit number threshold of L_i is determined from a plurality of quantization bit numbers: it is the minimum quantization bit number among them for which the human eye cannot perceive quantization stripes in the image displayed on the display device, the displayed image being obtained by quantizing the second image with the quantization bit number under test. The second image is then quantized with the quantization bit number threshold to obtain a third image. Finally, in some application scenarios, the average amplitude of the sawtooth waves in the difference image between the third image and the second image may be used as the variation threshold; in other application scenarios, the average error between the third image and the second image may be used as the variation threshold.
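The mean-error variant of the threshold estimate described above can be sketched as follows (a hypothetical linear ramp stands in for the second image; the bit number threshold is taken as given):

```python
import numpy as np

def variation_threshold(second_image, n_thr):
    """Estimate ΔL as the mean absolute error between the image quantized
    at the threshold bit depth (the 'third image') and the second image."""
    levels = 2 ** n_thr - 1
    third_image = np.round(second_image * levels) / levels  # quantize, de-quantize
    return np.abs(third_image - second_image).mean()

ramp = np.linspace(0.0, 1.0, 4096)  # smoothly varying test stimulus
dL = variation_threshold(ramp, 9)   # ΔL estimate for a 9-bit threshold
```

For a dense uniform ramp this mean error is about a quarter of one quantization step, i.e., on the order of 1/(4 · (2^n_thr − 1)).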
Based on the first aspect, in a possible implementation, the quantization bit number threshold of L_i may be determined by the following process. First, the plurality of quantization bit numbers are traversed (for example, increasing the quantization bit number step by step), and the second image is quantized with each candidate bit number to obtain a fourth image. Then, a plurality of Fourier frequency components of the difference image between the fourth image and the second image are determined; the response of each Fourier frequency component to luminance and/or chrominance (for example, the contrast normalized to luminance and/or chrominance) is computed and converted into a detection probability. The detection probabilities of the Fourier frequency components are aggregated into a probability statistic, and when the statistic meets a preset probability threshold, the quantization bit number used in the current quantization is taken as the quantization bit number threshold.
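The search over candidate bit depths can be sketched as below. Note that the Fourier-domain detection-probability model of the text is replaced here by a simple stand-in predicate (a peak-error threshold), which is purely an assumption for illustration:

```python
import numpy as np

def quantization_bits_threshold(signal, candidate_bits, visible):
    """Return the smallest bit depth whose quantization error the supplied
    visibility model deems invisible; `visible` maps an error image to bool."""
    for n in sorted(candidate_bits):
        levels = 2 ** n - 1
        err = np.round(signal * levels) / levels - signal  # quantization error
        if not visible(err):
            return n
    return max(candidate_bits)

# Stand-in visibility model: peak error above a fixed contrast is "visible"
ramp = np.linspace(0.0, 1.0, 2048)
n_thr = quantization_bits_threshold(ramp, range(6, 13),
                                    lambda e: np.abs(e).max() > 0.5e-3)
```

A full implementation would replace the lambda with the Fourier-component detection-probability statistic described in the text.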
It can be seen that, through the above process, the linear colors of the first color space can be tested, by modeling against the display characteristics of the display device, to obtain the quantization bit number threshold of each color component and then the variation threshold ΔL_i; from the linear color values of the color components and the variation thresholds ΔL_i, the color mapping function of each color component is obtained. Implementing the embodiments of the present invention thus makes it possible to design different color mapping functions for the different color components of the linear color, so that no quantization stripes visible to the human eye appear in the quantized image, saving storage space and transmission bandwidth and improving the technical effect.
The linear colors of the color components of the first image in the first color space according to the embodiments of the present invention may be derived in various ways.
For example, in a possible embodiment, the linear colors of the plurality of color components in the first color space may be obtained by means of camera capture and image signal processing ISP.
For another example, in a possible implementation, the linear colors of the plurality of color components in the first color space may be generated by a graphics card device.
For another example, in a possible embodiment, linear colors of a plurality of color components of the first image in a second color space may be first obtained, wherein the second color space is different from the first color space; then, the linear colors of the plurality of color components in the second color space are converted into the linear colors of the plurality of color components in the first color space by conversion of the color space.
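As an example of such a conversion, linear CIE 1931 XYZ values can be mapped to linear RGB with a 3×3 matrix. The BT.709/sRGB matrix (D65 white point) is used here purely as an illustration; the patent does not fix a particular matrix:

```python
import numpy as np

# Linear map from CIE 1931 XYZ to linear RGB with BT.709/sRGB primaries (D65)
XYZ_TO_RGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def xyz_to_linear_rgb(xyz):
    """Convert linear XYZ tristimulus values to linear RGB via a 3x3 matrix."""
    return XYZ_TO_RGB @ np.asarray(xyz, dtype=np.float64)

white = xyz_to_linear_rgb([0.95047, 1.0, 1.08883])  # D65 white point
```

The D65 white point maps to approximately (1, 1, 1) in linear RGB, a quick sanity check that the matrix and the assumed white point agree.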
Based on the first aspect, in a possible implementation manner, after the non-linear colors of the color components are quantized to obtain a quantized image, the quantized image may be further post-processed according to the needs of an application scenario, and the relevant post-processing manner may be, for example, one or a combination of the following manners:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear color of each color component in the quantized image to a display device.
In a second aspect, an embodiment of the present invention provides an apparatus for color space coding, including: the device comprises an image acquisition module, a color conversion module, a quantization module, a test module and a post-processing module. Wherein: the image acquisition module may be configured to acquire linear colors of a plurality of color components of the first image in a first color space; the color conversion module may be configured to convert the linear colors of the respective color components into non-linear colors according to a color mapping function; the quantization module may be configured to quantize the nonlinear color of each color component according to the quantization bit number corresponding to each color component, so as to obtain a quantized image; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
The functional modules in the device may be specifically adapted to implement the method described in the first aspect.
In a third aspect, an embodiment of the present invention provides another apparatus for color space coding, where the apparatus includes: a memory for storing program instructions and one or more processors coupled to the memory for invoking the program instructions, in particular for performing the method as described in the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-volatile computer-readable storage medium storing code that implements the method of the first aspect; when executed by a computing device, the program code performs the method of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product; the computer program product comprising program instructions which, when executed by a computing device, cause the controller to perform the method of the first aspect as set forth above. The computer program product may be a software installation package, which, in case it is required to use the method provided by any of the possible designs of the first aspect described above, may be downloaded and executed on a controller to implement the method of the first aspect.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. Therefore, by combining the two aspects, the occurrence of quantization stripes can be effectively avoided, different quantization bit numbers can be reasonably distributed for different color components, the waste of the quantization bit numbers is avoided, the transmission data volume is effectively reduced, and the storage space and the transmission bandwidth are saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present invention, the drawings required to be used in the embodiments or the background art of the present invention will be described below.
FIG. 1 is a diagram illustrating a linear color image and a scene in which quantization stripes appear after the image is subjected to quantization coding;
fig. 2 is a schematic diagram of a source device, a destination device, and a color coding system composed of the source device and the destination device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus that can be used as either or both of a source device and an end device according to an embodiment of the present invention;
FIG. 4 is a flow chart of a color coding method according to an embodiment of the present invention;
FIG. 5 is a flow chart of another color coding method according to an embodiment of the present invention;
FIG. 6 is a flow chart of another color coding method according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of generating a quantized image from a source image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a sawtooth scene according to an embodiment of the present invention;
FIG. 9 is a graphical representation of comparative results of experimental data provided by an embodiment of the present invention;
FIG. 10 is a graphical representation of the results of an experiment provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of a gradient image for a chroma quantization experiment according to an embodiment of the present invention;
FIG. 12 is a graphical representation of comparative results of further experimental data provided by an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described below with reference to the drawings. The terminology used in the description of the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
For a better understanding of the embodiments of the present invention, the color space involved in the embodiments of the present invention will be described first.
In a tristimulus model, if a color looks the same to the human eye as a mixture of three primary colors in certain proportions, those proportions are called the tristimulus values of the color. For how to select, quantify, and determine the stimulus values there is a set of general standards: the colorimetric system of the International Commission on Illumination (Commission Internationale de l'Éclairage, CIE). Besides this, there are many color spaces, for example CIE 1931 RGB, CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*a*b*, and others; a color space may also be expressed in different forms, such as the HSV and HSL representations.
For example, the CIE 1931 RGB color space uses monochromatic light at three wavelengths, 700 nm (R), 546.1 nm (G), and 435.8 nm (B), as the three primaries. Each value in CIE 1931 RGB scales linearly with luminance (cd/m²); such values may be referred to as RGB linear color values. Correspondingly, RGB values obtained by further nonlinear processing of the RGB linear color values may be referred to as RGB nonlinear color values.
The CIE 1931 XYZ color space may be derived from the CIE 1931 RGB standard by a linear transformation. Because the CIE 1931 RGB standard was formulated from experimental results, its visible-light gamut takes negative values in the coordinate system. For convenience of calculation and conversion, the CIE selected a triangular region covering the entire visible gamut and applied a linear transformation that maps the visible gamut into positive coordinates; the CIE 1931 XYZ color space is thus obtained by introducing three imaginary primaries X, Y, Z.
The system architecture according to embodiments of the present invention is described below. Referring to fig. 2, fig. 2 is a block diagram of a color coding system according to an embodiment of the present invention, as shown in fig. 2, the color coding system includes a source device 10 and an end device 20, the source device 10 generates color-coded video data/image data, and the end device 20 can perform color decoding and display on the color-coded video data/image data. Various implementations of the source device 10 and the end device 20, or both in combination, may include one or more processors and memory coupled to the one or more processors. The memory can include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein.
The source device 10 and/or the end device 20 can include a variety of devices, including a desktop computer, a mobile computing device, a notebook (e.g., laptop) computer, a tablet computer, a set-top box, a cell phone, a television, a camera, a display device, a digital media player, a video game console, an in-vehicle computer, a display, a projector, a video surveillance appliance, a video conferencing appliance, a live-on-demand appliance, or the like.
End device 20 may receive encoded video data from source device 10 via link 30.
In one example, link 30 may comprise one or more media or devices capable of moving color-coded video/image data from source device 10 to end device 20; for instance, the data may be moved to end device 20 by copying/burning it onto a non-volatile storage medium.
In one example, link 30 may comprise one or more communication media that enable source device 10 to transmit color-coded video/image data directly to end device 20. In this example, the source device 10 may modulate the color-coded video/image data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated data to the end device 20. The one or more communication media may include wireless and/or wired communication media such as Radio Frequency (RF) spectrum, WIFI, bluetooth, mobile networks, cellular data networks, and so forth, and wired communication media such as a physical transmission line for display Interface (DP), High Definition Multimedia Interface (HDMI), coaxial cable, coarse coaxial cable, twisted pair, and fiber optics, and so forth. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include a router, switch, base station, or other apparatus that facilitates communication from a source device 10 to an end device 20.
In another example, color-coded video/image data may be output from output interface 160 to storage device 40. Similarly, color-coded video/image data may be accessed from storage device 40 through input interface 260. Storage device 40 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, frame buffers, or any other suitable digital storage media for storing color-coded video/image data.
In another example, the storage device 40 may correspond to a file server or another intermediate storage device that may hold color-coded video/image data generated by the source device 10. The end device 20 may access the stored video data from the storage device 40 via streaming or download.
In some implementations, the color space encoding techniques of embodiments of the present invention may be applied to support a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the internet), or applying color coding for video data/image data stored on a data storage medium, or applying color decoding for video data/image data stored on a data storage medium, or other applications. In some examples, color coding systems may be used to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
The color coding system illustrated in fig. 2 is merely an example, and the techniques of embodiments of this disclosure may be applied to color coding settings (e.g., color coding or color decoding) that do not necessarily include any data communication between the source device 10 and the end device 20. In other examples, the data is retrieved from local storage, streamed over a network, and so forth. In many examples, the encoding and decoding are performed by devices that do not communicate with each other, but merely encode data to and/or retrieve data from memory and decode data.
In the example of fig. 2, source device 10 specifically includes an image source 100, a color encoding module 120, and an output interface 160. In some examples, output interface 160 may include a modulator/demodulator (modem) and/or a transmitter. The image source 100 can comprise a combination of one or more of: a video/image capture device (e.g., a camera and an ISP module), a video/image archive containing previously captured video/image data, a video feed interface to receive video/image data from a video/image content provider, a computer graphics system (e.g., a graphics card) for generating video/image data, and so forth. In particular, the image source 100 may be used, for example, to provide linear color image data.
The color encoding module 120 may color encode the video/image data from the image source 100 to obtain encoded video/image data, e.g., converting linear color image data to quantized non-linear color image data. In some examples, source device 10 transmits the encoded video/image data directly to end device 20 via output interface 160, for example over a wireless, DP, or HDMI connection; in other examples, encoded video/image data may also be stored onto storage device 40 for later access by end device 20 for decoding and/or playback.
In a possible embodiment, the source device 10 may further include a source post-processing module 140, where the source post-processing module 140 may be configured to further process the data encoded by the color coding module, for example, to perform color space conversion, to perform compression coding (for example, inter-frame prediction coding, intra-frame prediction coding, and the like) on the image/video, and to format the data for storage, obtaining processed video/image data; source device 10 then transmits the processed video/image data to end device 20 via output interface 160. For example, the processed data may be transmitted directly to end device 20 via a wireless, wired, or similar connection; in other examples, the video/image data to be transmitted may also be stored onto storage device 40 for later access by end device 20 for decoding and/or playback.
In the example of FIG. 1, end device 20 specifically includes an input interface 260, a color decoding module 220, and a display apparatus 200. In some examples, input interface 260 includes a receiver and/or a modem. Input interface 260 may receive encoded video/image data via link 30 and/or from storage device 40; for example, end device 20 may receive encoded video/image data transmitted by source device 10 over a wireless, DP, or HDMI connection directly through input interface 260. Color decoding module 220 may be used to perform color decoding processing on the encoded video/image data to obtain decoded video/image data, e.g., to invert quantized non-linear color image data to linear color image data. Display apparatus 200 may be integrated with end device 20 or may be external to end device 20. Display apparatus 200 may be, for example, a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display apparatus. In general, display apparatus 200 is used to display decoded video/image data.
In a possible embodiment, when the source device 10 includes the source post-processing module 140, the end device 20 may further include an end post-processing module 240 accordingly. The end device 20 may receive, through the input interface 260, the processed video/image data transmitted by the source device 10 over a wireless, wired, or similar connection, and the end post-processing module 240 may be configured to perform the corresponding inverse processing on the video/image data, for example, decoding and decompressing a compression-encoded code stream, and then transmit the obtained nonlinear color image data to the color decoding module 220. The color decoding module 220 further performs color decoding processing on the non-linear color image data transmitted from the end post-processing module 240 to obtain decoded video/image data, for example, converting the quantized non-linear color image data into linear color image data.
The embodiments of the present invention mainly concern the design of the color coding module 120 in the source device 10 and the color decoding module 220 in the end device 20. Since the color decoding module 220 operates inversely to the color coding module 120, to avoid repetition, the embodiments mainly describe the design and implementation of the color coding module.
Referring to fig. 3, fig. 3 is a simplified block diagram of an apparatus 300 that may be used as either or both of the source device 10 and the end device 20 of fig. 1 according to an example embodiment. Apparatus 300 may implement the techniques of this disclosure, and apparatus 300 for implementing color space codecs may take the form of a computing system including multiple computing devices, or a single computing device such as a laptop, a tablet, a set-top box, a cell phone, a television, a camera, a display apparatus, a digital media player, a video game console, an in-vehicle computer, a display, a projector, a video surveillance device, and so forth. The apparatus 300 comprises a processor 301, a memory 302, and a communication interface 303, the processor 301, the memory 302, and the communication interface 303 being communicatively coupled via a bus 306.
The processor 301 in the apparatus 300 may be a central processor. Alternatively, processor 301 may be any other type of device or devices now or later developed that is capable of manipulating or processing information. Although the disclosed embodiments may be practiced using a single processor, such as processor 301, as shown, parallel processing using more than one processor is also possible, to increase computational speed and efficiency.
In one embodiment, the Memory 302 of the apparatus 300 may be a Read Only Memory (ROM) device or a Random Access Memory (RAM) device. Any other suitable type of storage device may be used as memory 302. The memory 302 may include program code and data that are accessed, read, and written to by the processor 301 over the bus 306.
In one embodiment, the communication interface 303 of the apparatus 300 can be used for sending data to the outside, and/or for receiving data transmitted from the outside, and/or for storing data in an external storage medium, and/or for reading data from the external storage medium. Communication interface 303 may be a wireless communication media interface or may be a wired communication media interface. Wireless communication media interfaces such as radio interfaces for Radio Frequency (RF) spectrum, WIFI, bluetooth, mobile networks, cellular data networks, etc., and wired communication media interfaces such as physical transmission line interfaces for display interfaces (DP), High Definition Multimedia Interfaces (HDMI), coaxial cable, coarse coaxial cable, twisted pair, and fiber optics, etc.
When the output device is a display or includes a display, the display may be implemented in different ways, including a liquid crystal display (LCD), a cathode ray tube (CRT) display, a plasma display, or a light emitting diode (LED) display, such as an organic LED (OLED) display.
In a possible implementation, the apparatus 300 may further comprise or be in communication with an image sensing device 305, the image sensing device 305 being operable to obtain color image data, for example, the image sensing device 305 comprising a camera and an ISP for capturing/pre-processing images, and for example, the image sensing device 305 comprising a graphics card device for generating images. The image sensing device 305 may also be any device that is currently or later developed that can sense an image.
It should be noted that although the processor 301 and the memory 302 of the apparatus 300 are depicted in fig. 3 as being integrated in a single unit, other configurations may also be used. The operations of processor 301 may be distributed among a number of directly coupleable machines (each machine having one or more processors), or distributed in a local area or other network. Memory 302 may be distributed across multiple machines, such as a network-based memory or memory in multiple machines on which the apparatus 300 runs. Although only a single bus is depicted here, the bus 306 of the apparatus 300 may be formed from multiple buses. Further, the memory 302 may be directly coupled to other components of the apparatus 300 or may be accessible over a network, and may comprise a single integrated unit, such as one memory card, or multiple units, such as multiple memory cards. Accordingly, the apparatus 300 may be implemented in a variety of configurations.
In an embodiment of the present invention, the processor 301 may be configured to call a program code in the memory to execute a method for color space encoding described in the embodiments of the methods described later in the present invention, which will not be described herein again.
Referring to fig. 4, fig. 4 is a flowchart illustrating a color space encoding method according to an embodiment of the present invention, which is comprehensively described from the perspective of a source device and an end device, and the method includes, but is not limited to, the following steps:
step 401: the source device acquires linear color values of a plurality of color components of the first image in a first color space.
In some embodiments, the source device may render a first image having RGB linear color values proportional to brightness through a content generation device (image source) such as a video card.
The RGB linear color values are linear color values of the first image in a CIE1931RGB color space (i.e., the first color space is the CIE1931RGB color space), and the linear color values are in a linear relationship with the light intensity values displayed by the display device. The RGB linear color values include a linear color value of an R component, a linear color value of a G component, and a linear color value of a B component.
In a possible embodiment, the source device may also obtain linear color values for multiple color components of other color spaces, such as CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, and other color spaces such as LMS, CMYK, YUV, HSL, HSB (HSV), and YCbCr.
Step 402: and the source end device converts the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components.
Accordingly, the value of the non-linear color is in a non-linear relationship with the light intensity value captured by the camera or displayed by the display device.
The color mapping function f(·) is used to describe the mapping relationship between linear color values and non-linear color values. In a possible implementation, the mapping relationship between the linear color values and the non-linear color values may be preset according to the color display capability (display characteristics) of the display device; a specific implementation process will be described in detail later. A color mapping function f(·) formulated in this way helps avoid the occurrence of quantization stripes. For example, for a certain display device and the RGB color space, the color mapping function f(·) corresponding to an RGB image can be designed as shown in the following formula (1):
[Equation (1): the color mapping function f(·) designed for the R, G and B components; the formula is given as an image in the original document.]
step 403: and the source end device carries out quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component.
In the embodiment of the present invention, the quantization bit numbers corresponding to the color components are not uniform but are designed separately. Different quantization bit numbers can be designed for each color component by exploiting the human eye's different sensitivity to changes in different color components. In a specific embodiment, the quantization bit number corresponding to each color component may be determined according to the display characteristics with which a display device displays the image of the first color space, where the display characteristics of the display device include, for example, at least one of the maximum luminance, the minimum luminance, and the chromaticity diagram range of the display device. Thus, according to the design of the embodiment of the present invention, the quantization bit number corresponding to at least one of the plurality of color components is different from that of the other color components.
For example, taking a certain type of display device as an example, for an RGB color space, the quantization bit number corresponding to each color component may be calculated by the following formula (2):
[Equation (2): the per-component quantization bit numbers n_r, n_g, n_b; the formula is given as an image in the original document.]
where n_r, n_g and n_b respectively represent the quantization bit numbers adopted for the three primary color components R (red), G (green) and B (blue); ceil(·) represents a round-up operation; and n represents the maximum number of quantization bits among the color components. For example, with a maximum quantization bit number n of 10 bits, equation (2) gives: the R component is quantized with 10 bits, the G component is quantized with 10 bits, and the B component is quantized with 9 bits.
Thus, the color coding module of the source device may perform quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component, where the formula is shown in the following equation (3):
C_i = int( f(L_i) · (2^(n_i) - 1) )    (3)
where C_i represents the quantized coded codeword of the i-th color component; L_i represents the normalized linear color value of the i-th color component; f(·) represents the color mapping function; and n_i represents the quantization bit number of the i-th color component.
Then, by substituting equation (2) into equation (3), the non-linear color value component codeword after quantization coding of each color component can be obtained, as shown in equation (4) below:
C_r = int( f(L_r) · (2^(n_r) - 1) ),
C_g = int( f(L_g) · (2^(n_g) - 1) ),
C_b = int( f(L_b) · (2^(n_b) - 1) )    (4)
where C_r, C_g and C_b represent the quantized coded nonlinear color value component codewords of the R, G and B color components, respectively; int represents a rounding operation; f(·) represents the color mapping function; L_r, L_g and L_b represent the normalized linear color values of the R, G and B components, respectively; and n_r, n_g and n_b represent the quantization bit numbers adopted for the R, G and B components, respectively.
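As an illustrative sketch of the quantization in equation (4): the code below uses a plain 1/2.2 gamma curve as a placeholder for the device-derived mapping function f(·) of equation (1), and the 10/10/9 bit allocation is the example computed above; this is not the patent's exact implementation.

```python
def quantize_component(linear_value, f, n_bits):
    """Quantize a normalized linear color value in [0, 1] into an
    n_bits codeword: C = int(f(L) * (2**n_bits - 1) + 0.5)."""
    return int(f(linear_value) * (2 ** n_bits - 1) + 0.5)

def gamma_oetf(linear_value):
    # Placeholder mapping: a simple 1/2.2 gamma curve, NOT the
    # display-derived f(.) of equation (1).
    return linear_value ** (1.0 / 2.2)

# Example bit allocation from the text: R and G use 10 bits, B uses 9.
bits = {"R": 10, "G": 10, "B": 9}
codewords = {c: quantize_component(0.5, gamma_oetf, n) for c, n in bits.items()}
```

The B codeword then spans 0..511 while R and G span 0..1023, matching the unequal bit allocation described in step 403.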
Step 404: and the source end device sends the non-linear color value component code words after each color component is quantized to the tail end device.
Specifically, the quantized non-linear color component codeword obtained by the source device is transmitted to the end device through a wireless connection or media such as HDMI and DP.
Step 405: and the terminal device respectively adopts the quantization bit number corresponding to each color component to perform inverse quantization processing on the quantized nonlinear color value. The implementation manner of the quantization bit number corresponding to each color component may also refer to the related description of step 403, and for brevity of the description, details are not repeated here.
Step 406: the end device converts the non-linear color values into linear color values through the color decoding module. It is understood that this process is the inverse operation process of the foregoing step 402, and for the brevity of the description, the detailed description is omitted here.
Step 407: optionally, the end device displays the first image corresponding to the linear color value through a display device.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. Therefore, by combining the two aspects, the occurrence of quantization stripes can be effectively avoided, different quantization bit numbers can be reasonably distributed for different color components, the waste of the quantization bit numbers is avoided, the transmission data volume is effectively reduced, and the storage space and the transmission bandwidth are saved.
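A matching sketch of the end-device side (steps 405 and 406 above), with the inverse of the same placeholder gamma curve standing in for the actual inverse mapping of f(·):

```python
def dequantize_component(codeword, f_inv, n_bits):
    """Map an n_bits codeword back to a normalized linear color
    value: L = f_inv(C / (2**n_bits - 1))."""
    return f_inv(codeword / (2 ** n_bits - 1))

def gamma_eotf(nonlinear_value):
    # Placeholder inverse mapping (inverse of a 1/2.2 gamma curve),
    # NOT the patent's actual inverse of f(.).
    return nonlinear_value ** 2.2

# Round trip a linear value through a 10-bit quantizer and back.
code = int((0.25 ** (1.0 / 2.2)) * 1023 + 0.5)
recovered = dequantize_component(code, gamma_eotf, 10)
```

At 10 bits the round-trip error stays far below one quantization step, which is the regime in which quantization stripes remain invisible.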
Referring to fig. 5, fig. 5 is a flow chart of another color space encoding method provided by the embodiment of the present invention, which is comprehensively described from the perspective of a source device and an end device, and the method includes, but is not limited to, the following steps:
step 501: the source device acquires linear color values of a plurality of color components of the first image in a first color space.
In some embodiments, the source device may capture a first image (i.e., an arbitrary optical image) via the camera and generate RGB linear color values via processing by the ISP module (image source).
Similarly, the RGB linear color values are linear color values of the first image in the CIE1931RGB color space (i.e. the first color space is the CIE1931RGB color space), and the linear color values are in a linear relationship with the light intensity values captured by the camera. The RGB linear color values include a linear color value of an R component, a linear color value of a G component, and a linear color value of a B component.
In a possible embodiment, the source device may also obtain linear color values for multiple color components of other color spaces, such as CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, and other color spaces such as LMS, CMYK, YUV, HSL, HSB (HSV), and YCbCr.
Step 502: and the source end device converts the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components. For a specific implementation process, reference may be made to the description of step 402, which is not described herein again.
Step 503: and the source end device carries out quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component. For a specific implementation process, reference may be made to the description of step 403, which is not described herein again.
Step 504: the source device processes an image composed of quantized nonlinear color values through a source post-processing module.
The processing may be, for example, operations such as performing color space conversion, performing compression coding on an image/video, and storing a format. For example, the source device may perform image compression encoding processing, such as inter-frame prediction encoding and intra-frame prediction encoding, on the image composed of the series of quantized and encoded nonlinear color values obtained in step 503 through the source post-processing module, so as to obtain an encoded code stream. Alternatively, the source device may store an image composed of the nonlinear color values, and so on. Specific compression encoding processes or storage processes are known to those skilled in the art and will not be described in detail.
Step 505: and the source end device sends the data processed by the source end post-processing module to the tail end device. For example, the source device sends the code stream that is compressed and encoded by the source post-processing module to the end device.
Step 506: the end device processes data through the end post-processing module. It is understood that this process is the inverse operation process of the foregoing step 504, for example, the end post-processing module may perform image decoding according to the code stream sent by the source device, so as to obtain an image composed of nonlinear color values. For the sake of brevity of the description, no further description is provided herein.
Step 507: and the terminal device respectively adopts the quantization bit number corresponding to each color component to perform inverse quantization processing on the quantized nonlinear color value. The implementation manner of the quantization bit number corresponding to each color component may also refer to the related description of step 403, and for brevity of the description, details are not repeated here.
Step 508: the end device converts the non-linear color values into linear color values through the color decoding module. It is understood that this process is the inverse operation process of the foregoing step 502, and for brevity of the description, the detailed description is omitted here.
Step 509: optionally, the end device displays the first image corresponding to the linear color value through a display device.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. After the quantization coding is finished, the quantized nonlinear color can be subjected to post-processing such as image compression coding and the like, and then is sent to a display side for decoding and displaying through a code stream. Therefore, by combining the two aspects, the technical scheme of the invention can effectively avoid the quantization stripes on the display side, can reasonably allocate different quantization bit numbers for different color components, avoids the waste of the quantization bit numbers, effectively reduces the transmission data volume, and saves the storage space and the transmission bandwidth.
Referring to fig. 6, fig. 6 is a schematic flowchart of another color space encoding method according to an embodiment of the present invention, including, but not limited to, the following steps:
step 601: the source device acquires linear color values of a plurality of color components of the first image in the second color space.
In some embodiments, the source device may capture an image (i.e., an arbitrary optical image) through the camera and generate RGB linear color values through ISP module (image source) processing.
In some embodiments, the source device may render an image having RGB linear color values proportional to brightness through a content generation device such as a video card.
The RGB linear color values are linear color values of the first image in the CIE1931RGB color space (i.e. the second color space is the CIE1931RGB color space).
In possible embodiments, the source device may also obtain linear color values for multiple color components of other color spaces, such as CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, etc.
Step 602: the source device converts the linear colors of the plurality of color components in the second color space to linear colors of the plurality of color components in the first color space.
For example, the image of the first image in the CIE 1931 RGB color space (here, the RGB color space is the second color space) may be referred to as the RGB image, and the image of the first image in the LMS color space (here, the LMS color space is the first color space) may be referred to as the LMS image.
First, linear color values of an RGB image are converted into linear color values of CIE XYZ1931 color space. For example, the transformation can be carried out by the following formula (5):
[Equation (5): a 3x3 linear conversion from RGB to CIE 1931 XYZ; the matrix is given as an image in the original document.]
After obtaining the linear color values of the CIE XYZ 1931 color space by the above equation (5), the linear color values of the CIE XYZ 1931 color space may be further converted into linear color values of the LMS color space by the following equation (6):
[Equation (6): a 3x3 linear conversion from CIE 1931 XYZ to LMS; the matrix is given as an image in the original document.]
it should be understood that the L MS image is only an example, and in a possible embodiment, the first image may be an image in other color spaces, such as CMYK, YUV, HS L, HSB (HSV), YCbCr, etc.
Step 603: the source device converts the linear color of each color component into a nonlinear color according to a color mapping function corresponding to each color component in the plurality of color components of the first image.
Similarly, in a possible implementation, the mapping relationship between linear color values and non-linear color values may be pre-established according to the color display capability (display characteristics) of the display device, so as to obtain the color mapping function f(·) corresponding to each color component of the first image (e.g., the LMS image). The determination process of the color mapping function f(·) may similarly refer to the related description of step 402; this color mapping function f(·) is likewise favorable for avoiding the occurrence of quantization stripes.
Step 604: and the source end device carries out quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component.
Specifically, the quantization bit number of each color component of the nonlinear color values L'M'S' may be designed according to the display characteristics with which the display device displays an image in the LMS color space, and the quantization bit number corresponding to at least one of the L'M'S' color components is different from that of the other components.
Step 605: and carrying out color space conversion and image coding compression on the image consisting of the quantized nonlinear color values.
For example, the quantized non-linear color values L'M'S' obtained in step 604 may be transformed into the ICtCp color space to obtain ICtCp non-linear color values, as shown in equation (7) below:
[Equation (7): a 3x3 conversion from the nonlinear L'M'S' values to the ICtCp color space; the matrix is given as an image in the original document.]
then, the source device may perform image compression coding processing, such as inter-frame prediction coding, intra-frame prediction coding, and the like, on the obtained ICtCp nonlinear color value, thereby obtaining a coded code stream. Alternatively, the source device may store an image of the ICtCp non-linear color values, and so on. Specific compression encoding processes or storage processes are known to those skilled in the art and will not be described in detail.
Step 606: optionally, the source device may send the code stream subjected to the compression coding to the end device. Accordingly, the subsequent end device can parse the code stream and display the image through the reverse operation process, which is not described in detail herein.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. Before color coding, color space conversion can be carried out to obtain an image needing color coding; after the quantization coding is finished, the quantized nonlinear color can be subjected to post-processing such as color space conversion, compression coding and the like on the image, and then is sent to a display side for decoding and displaying through a code stream. Therefore, by combining the two aspects, the technical scheme of the invention can effectively avoid the quantization stripes on the display side, can reasonably allocate different quantization bit numbers for different color components, avoids the waste of the quantization bit numbers, effectively reduces the transmission data volume, and saves the storage space and the transmission bandwidth.
In some embodiments of the present invention, a method of generating the color mapping function f(·) for each color component of a linear color is described below.
In this embodiment of the present invention, the color mapping function corresponding to any one color component is obtained according to the linear color value of that color component and the change threshold of that color component; for example, the relationship between the change threshold of any one color component and the color mapping function f(·) is shown in the following equation (8):
[Equation (8): the relationship between the change threshold ΔL_i and the color mapping function f(·); the formula is given as an image in the original document.]
where ΔL_i represents the change threshold of any color component i. Specifically, ΔL_i indicates the error caused by quantizing the linear color value of that color component with its quantization bit number threshold, while the other color components of the linear color of the first color space are kept unchanged; the quantization bit number threshold of the linear color value of that color component is the minimum quantization bit number for which the human eye does not perceive quantization stripes in the image displayed on the display device. The color gamut range of the display device is in the first color space.
Then, equation (8) can be solved by a numerical method to obtain f(·) at N linear color values, and the nonlinear color values corresponding to other linear color values can be obtained from f(·) by interpolation.
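Equation (8) itself appears only as an image in the source, so the construction below is an assumption in the spirit of threshold-based curve derivations (such as the derivation of PQ): code values are accumulated in proportion to 1/ΔL(L), normalized to [0, 1], and intermediate values of f(·) are obtained by interpolation, as the text describes:

```python
import numpy as np

# Hypothetical sketch: given N sampled linear values L_i and their change
# thresholds dL_i, build a nonuniform code by integrating 1/dL(L) with the
# trapezoid rule, then interpolate f(.) at arbitrary linear values.
def build_mapping(L, dL):
    dens = 1.0 / np.asarray(dL)        # code density ~ 1 / change threshold
    steps = 0.5 * (dens[1:] + dens[:-1]) * np.diff(L)
    V = np.concatenate(([0.0], np.cumsum(steps)))
    return V / V[-1]                   # normalize f to the [0, 1] range

L = np.linspace(0.01, 100.0, 64)       # N sampled linear color values
dL = 0.01 + 0.02 * L                   # assumed Weber-like thresholds
V = build_mapping(L, dL)
f = lambda x: np.interp(x, L, V)       # f(.) for other linear values
```

Because the code density is highest where ΔL is smallest, more code values are spent where the eye is most sensitive, which is the intent of the per-component mapping functions.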
In a specific application scenario, before executing any of the method embodiments of fig. 4-6, the color mapping function f(·) corresponding to each color component of the linear color may be predetermined. Specifically, the ΔL_i corresponding to each color component may be determined in advance, and f(·) for each color component is then determined according to equation (8). Several ways of measuring ΔL_i are described below.
A first method of measuring ΔL_i provided by an embodiment of the present invention is given below.
In this method, the color gamut of the display device, i.e., the gamut the display device can represent, for example its gamut in CIE1931 XYZ, can be measured with a color analyzer, a color corrector, or a similar instrument. Then, the CIE1931 XYZ gamut of the display device is converted, using the linear conversion relationship between color spaces, into the first color space in which quantization is performed (for example, the CIE1931 RGB color space). Next, the color components of the linear color of the first color space (e.g., the R, G, and B components of RGB) are input to the display device; two of the linear color components are kept unchanged while the remaining component takes N different values L_i. For each L_i, the quantization bit number at which quantization stripes on the display become imperceptible to the human eye is determined, and the quantization error ΔL_i corresponding to each L_i is recorded.
For example, for one L_i, an image with average luminance equal to L_i is constructed: all pixels in the top row of the image have luminance L_i, the lower-left pixel has luminance 0 (optionally the lowest luminance of the display device), and the lower-right pixel has luminance 2L_i; the luminance of the image increases linearly and smoothly from top to bottom, with contrast ranging from 0 to 1. The linear image is quantized with an existing opto-electronic transfer function (OETF) (optionally PQ or sRGB) at quantization bit number n, converted back to the linear domain by an electro-optical transfer function (EOTF), and the quantized image is observed by the human eye. The quantization bit number n is increased gradually until the human eye can no longer see quantization stripes in the image. If the human eye cannot observe quantization stripes when the quantization bit number is N_i, the average quantization error between the source image and the observed image (i.e., the source image after quantization processing) is taken as the change threshold ΔL_i corresponding to L_i.
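The test pattern described above can be sketched as follows. The image size is an assumption; the luminance layout follows the text (top row at L_i, bottom row ramping from 0 to 2L_i, contrast growing linearly from 0 at the top to 1 at the bottom):

```python
import numpy as np

def gradient_stimulus(L_i, height=512, width=512):
    """Test image from the text: every top-row pixel equals L_i, the
    bottom row ramps linearly from 0 (left) to 2*L_i (right), and the
    per-row contrast grows linearly from 0 (top) to 1 (bottom)."""
    c = np.linspace(0.0, 1.0, height)[:, None]   # per-row contrast 0..1
    x = np.linspace(-1.0, 1.0, width)[None, :]   # horizontal ramp -1..1
    return L_i * (1.0 + c * x)                   # mean luminance stays L_i

img = gradient_stimulus(10.0, height=4, width=5)
```

Each row is a smooth gradient centered on L_i, so the whole image keeps an average luminance of L_i while sweeping every contrast level from 0 to 1.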
A second method of measuring ΔL_i provided by an embodiment of the present invention is given below.
In this method, the color gamut of the display device, i.e., the gamut the display device can represent, for example its gamut in CIE1931 XYZ, can likewise first be measured with a color analyzer, a color corrector, or a similar instrument. Then, the CIE1931 XYZ gamut of the display device is converted, using the linear conversion relationship between color spaces, into the first color space in which quantization is performed (for example, the CIE1931 RGB color space). Next, the color components of the linear color of the first color space (e.g., the R, G, and B components of RGB) are input to the display device; two of the linear color components are kept unchanged while the remaining component takes N different values L_i. For each L_i, an image is constructed in which the color component intensity varies linearly and smoothly and the average luminance equals L_i, and it is determined whether quantization stripes (banding) are visible at different quantization levels, i.e., in quantized images produced with different quantization bit numbers.
In particular, to minimize the visual effect of quantization streaks due to quantization, we construct a model that can predict whether contour artifacts (contour artifacts) are visible at a given quantization level.
The model may be implemented as follows. While the quantization bit number is varied from small to large (or from large to small), for any given quantization bit number: first, determine the set of spatial frequencies of the difference image (e.g., the quantization-error sawtooth image) between the quantized image (i.e., the source image quantized with that bit number) and the source image; then use these frequencies and a contrast sensitivity function CSF(·) to determine the sensitivity to each spatial frequency component; next convert each sensitivity into a detection probability using a psychometric function; finally, combine the detection probabilities of all spatial frequency components by probability summation into a probability statistic and compare it with a preset probability threshold. If the probability statistic is equal to (or approximately equal to) the probability threshold, that quantization bit number is determined to be the quantization bit number threshold, i.e., the minimum quantization bit number at which the human eye does not perceive quantization stripes in the image displayed on the display device. The average quantization error between the quantized image and the source image for L_i may then be used as the change threshold ΔL_i. The above process is described in detail as follows:
First, referring to fig. 7, for each L_i, a source image is constructed (e.g., the left image in fig. 7) in which the color component intensity varies linearly and smoothly, the average luminance is L_i, and each row forms a gradient whose contrast varies from 0 (top) to 1 (bottom). Given an average luminance level L_i, the luminance of all pixels in the top row of the source image equals L_i, and the luminance of the pixels in the bottom row varies on a linear scale from 0 to 2L_i (in the figure, the lower-left corner is 0 and the lower-right corner is 2L_i). In the flow shown in fig. 7, after smooth gradient images with different contrasts are generated in linear space, they are transferred to an arbitrary color space using a transfer function before quantization. Then, quantization is performed at a specific quantization level. After quantization, the inverse of the transfer function is applied to convert back to linear space. An inverse display model (GOG) is used to compensate for the display characteristics of the device before the image is sent to the display device. In this way, the desired quantized image suitable for the display device (e.g., the right image in fig. 7) can be obtained.
Then, to determine the spatial frequencies of the contours, the error signal at a specific quantization level, e.g., the quantization-error sawtooth image between the quantized image and the source image, can be obtained, and we analyze the Fourier transform of this error signal, i.e., the difference between the smooth gradient and the contoured gradient. Contour artifacts on smooth gradients appear as sawtooth shapes (e.g., as shown in fig. 8). The Fourier series of a sawtooth waveform with period w and amplitude h is given by the following equation (9):
s(x) = h/2 − (h/π) · Σ_{k=1..∞} sin(2πkx/w)/k    (9)
Then, for natural numbers k, the Fourier coefficient a_k of the sawtooth is as shown in the following equation (10):
a_k = h/(πk)    (10)
where h is the amplitude of the sawtooth. The frequency ρ_k of the k-th Fourier component is as shown in the following equation (11):
ρ_k = k · p / w    (11)
where p is the angular resolution of the device in pixels per degree, and w is the sawtooth period in pixels. We found that for k > 16 the Fourier components are insignificant and do not improve the accuracy of the model.
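The amplitudes and frequencies of the sawtooth components can be sketched as below. The 1/k amplitude decay follows the standard Fourier series of a sawtooth, which is an assumption here since the source renders equations (10) and (11) as images:

```python
import math

def sawtooth_components(h, w, p, k_max=16):
    """Fourier components of a quantization-error sawtooth: amplitude h,
    period w (pixels), device angular resolution p (pixels per degree).
    Components above k_max = 16 are dropped, as the text notes they do
    not improve the model."""
    a = [h / (math.pi * k) for k in range(1, k_max + 1)]   # amplitudes a_k
    rho = [k * p / w for k in range(1, k_max + 1)]         # cycles/degree
    return a, rho

a, rho = sawtooth_components(h=0.5, w=32, p=60)
```

The fundamental frequency p/w in cycles per degree follows from multiplying cycles per pixel (1/w) by pixels per degree (p); harmonics are integer multiples of it.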
To calculate the probability of detecting each Fourier component of the contour, we determine the sensitivity S from the contrast sensitivity function CSF(·), as shown in the following equation (12):
S = CSF(ρ, L_b) = L_b / ΔL_det    (12)
where ρ is the spatial frequency, L_b is the background luminance, and ΔL_det is the detectable amplitude of the frequency component. Then, the contrast of the contour pattern (amplitude divided by background luminance) is normalized by multiplying by the sensitivity, so that the normalized value equals 1 when the k-th frequency component is just detectable. The normalized contrast is as shown in the following equation (13):
C_k = (a_k / L_b) · CSF(ρ_k, L_b)    (13)
Next, we convert the contrast into a probability P_k using a form of psychometric function, as shown in the following equation (14):
P_k = 1 − exp(ln(0.5) · C_k^β)    (14)
where the exponent β is the slope of the psychometric function.
Finally, the probabilities over all Fourier components are combined using probability summation to obtain a probability statistic P, as shown in the following equation (15):
P = 1 − ∏_k (1 − P_k)    (15)
Then, the probability statistic is compared with a preset probability threshold; if the probability statistic is equal to (or approximately equal to) the probability threshold (e.g., 0.5), the specific quantization bit number is determined to be the quantization bit number threshold, i.e., the minimum quantization bit number that does not cause contour artifacts. If the probability statistic is neither equal nor approximately equal to the probability threshold, the quantization level may be changed until a suitable quantization bit number is found as the quantization bit number threshold. Alternatively, to determine the quantization bit number threshold, a binary search may be performed so that the resulting P equals the probability threshold (e.g., 0.5).
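The detection pipeline of equations (12)-(15) and the search for the smallest invisible bit number can be sketched as follows. The CSF used here is a stand-in (an assumption; the patent fits a Barten-model CSF to experiments), the 1/k sawtooth decay is assumed, and the mapping from bit number to sawtooth amplitude and period is simplified for illustration:

```python
import math

def detection_probability(h, w, p, L_b, csf, beta=3.5, k_max=16):
    """Probability that a sawtooth of amplitude h and period w (pixels)
    is visible on a background of luminance L_b: per-component CSF
    normalization, psychometric function, probability summation."""
    surv = 1.0
    for k in range(1, k_max + 1):
        a_k = h / (math.pi * k)                 # assumed 1/k amplitude decay
        rho_k = k * p / w                       # spatial frequency, eq. (11)
        C_k = (a_k / L_b) * csf(rho_k, L_b)     # normalized contrast, eq. (13)
        P_k = 1.0 - math.exp(math.log(0.5) * C_k ** beta)  # eq. (14)
        surv *= (1.0 - P_k)
    return 1.0 - surv                           # probability summation, eq. (15)

def min_invisible_bits(L_b, p, csf, w=32, threshold=0.5):
    """Smallest bit depth whose quantization sawtooth stays below the
    detection threshold (linear scan; a binary search also works)."""
    for n in range(4, 17):
        h = L_b / (2 ** n)   # assumed quantization step near luminance L_b
        if detection_probability(h, w, p, L_b, csf) < threshold:
            return n
    return 17

toy_csf = lambda rho, L: 200.0 / (1.0 + rho)    # stand-in CSF, illustration only
n = min_invisible_bits(L_b=10.0, p=60.0, csf=toy_csf)
```

Because the quantization step shrinks as the bit number grows, the detection probability decreases monotonically, which is what makes the binary search mentioned in the text applicable.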
A third method of measuring ΔL_i provided by an embodiment of the present invention is given below.
This method differs from the second method above in that the model in the second method considers only contour stripes caused by luminance variation. Luminance variation contributes most to the appearance of stripes, but chromaticity variation also contributes. Here we extend the model with chromatic contrast sensitivity functions and probability summation across the visual channels to account for the effect of chromaticity variation on stripe formation; the extended model may therefore be called a color difference model.
For example, our color difference model takes as input two colors specified in the CIE1931 XYZ color space and predicts the probability that the difference between them can be observed. When considering the stripe effect, the difference between the two colors is reflected in the height of the sawtooth pattern introduced by the stripes. The model can easily be extended with a binary search: taking an initial color and a color direction vector as input, it finds the scaling of the color direction vector at which the detection probability equals 0.5. The color difference model is used to plot MacAdam-ellipse diagrams and thus obtain detection thresholds.
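The binary search just described can be sketched as below. The `visibility` predictor stands in for the patent's color difference model (an assumption); the search itself only needs the predictor to be monotone along the direction vector:

```python
def threshold_along_direction(color, direction, visibility, iters=40):
    """Scale `direction` until the detection probability returned by
    `visibility(color, offset)` crosses 0.5, via bisection."""
    lo, hi = 0.0, 1.0
    while visibility(color, [c * hi for c in direction]) < 0.5:
        hi *= 2.0                      # expand until the difference is visible
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if visibility(color, [c * mid for c in direction]) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)             # scale at which detection prob ~ 0.5

# Toy predictor: the difference becomes visible once the Euclidean length
# of the offset exceeds 0.1 (illustration only, not the patent's model).
toy_vis = lambda c, d: 1.0 if sum(x * x for x in d) ** 0.5 > 0.1 else 0.0
s = threshold_along_direction([0.3, 0.3, 0.3], [1.0, 0.0, 0.0], toy_vis)
```

Repeating this search over many direction vectors around a starting color traces out a discrimination ellipse like the MacAdam ellipses mentioned in the text.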
First, we convert the two colors from XYZ to LMS space (optionally, the conversion can be performed using the CIE1931 color matching functions). Each channel of this tristimulus space is proportional to the response of one of the three cone cell types in the retina: long, medium, and short wavelength. The conversion is as shown in the following equation (16):
[Equation (16): XYZ-to-LMS conversion matrix, shown as an image in the source and not reproduced]
The cone responses are further converted into the contrast responses of the color vision mechanisms: one achromatic (black to white) and two chromatic (red to green, and yellow to violet). The exact color directions these mechanisms are tuned to are not well established, but our method does not require that knowledge. We use the simplest formula of general applicability to calculate the color contrast, as shown in the following equation (17):
[Equation (17) appears as a formula image in the source and is not reproduced]
where A denotes the achromatic (luminance) response, R the red-green response, and B the yellow-violet response.
Given two colors to be distinguished, we need to calculate the contrast between them. Since there is no single way to express color contrast, we experimented with many expressions. We found that the MacAdam ellipses are predicted better if the color contrast is calculated using the following equation (18):
[Equation (18) appears as a formula image in the source and is not reproduced]
The expression for C_A is normalized to luminance contrast. The expressions for C_R and C_Y are normalized using a mixture of the luminance and color mechanism responses. We found that a value of k = 2/3 gives a good fit.
Given the color contrast components C_A, C_R and C_Y, we follow the same steps as in the luminance stripe prediction: we multiply each color contrast by the corresponding contrast sensitivity function and the Fourier coefficient a_k of the sawtooth pattern, and convert the normalized contrast into a detection probability. The detection probabilities of the k-th Fourier component corresponding to C_A, C_R and C_Y are given by the following equations (19), (20) and (21), respectively:
P_{A,k} = 1 − exp(ln(0.5) · (C_A · a_k · CSF_A(ρ_k, A))^β)    (19)
P_{R,k} = 1 − exp(ln(0.5) · (C_R · a_k · CSF_R(ρ_k, A))^β)    (20)
P_{Y,k} = 1 − exp(ln(0.5) · (C_Y · a_k · CSF_Y(ρ_k, A))^β)    (21)
We observed that with the typical psychometric function slope β = 3.5, the color ellipses on the chromaticity diagram appear square, which is uncommon in the literature.
Then, we combine all Fourier coefficients and responses across the three color channels using a form of probability summation to compute the final probability statistic P, as shown in the following equation (22):
P = 1 − ∏_k (1 − P_{A,k}) · ∏_k (1 − P_{R,k}) · ∏_k (1 − P_{Y,k})    (22)
Similarly, the probability statistic may be compared with a preset probability threshold; if the probability statistic is equal to (or approximately equal to) the probability threshold (e.g., 0.5), the specific quantization bit number is determined to be the quantization bit number threshold, i.e., the minimum quantization bit number that does not cause contour artifacts. If the probability statistic is neither equal nor approximately equal to the probability threshold, the quantization level may be changed until a suitable quantization bit number is found as the quantization bit number threshold. Alternatively, to determine the quantization bit number threshold, a binary search may be performed so that the resulting P equals the probability threshold (e.g., 0.5).
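The cross-channel probability summation of equation (22) can be sketched as:

```python
def pooled_probability(PA, PR, PY):
    """Probability summation across the achromatic, red-green and
    yellow-violet channels, equation (22): the overall detection
    probability is one minus the product of all non-detection
    probabilities over every Fourier component of every channel."""
    surv = 1.0
    for channel in (PA, PR, PY):
        for p_k in channel:
            surv *= (1.0 - p_k)
    return 1.0 - surv
```

For example, two components that are each detected with probability 0.5 pool to an overall detection probability of 0.75; a component with probability 0 contributes nothing.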
To verify the second method of measuring the change threshold ΔL_i (and the quantization bit number threshold) provided by an embodiment of the present invention, and to confirm that the change threshold ΔL_i (and quantization bit number threshold) it produces indeed ensures that quantization stripes are not observed by the human eye, a monochrome quantization experiment related to the embodiment of the present invention is described below.
The exact shape of CSF_A depends on many free variables in the Barten model, including the luminance of the adaptation field, the angular size of the object, the background luminance, the viewing angle, and other factors. We performed psychophysical experiments to ensure that our contrast sensitivity function (CSF) fits the viewing conditions of our application scenario. We targeted a novel display technology, using a Huawei Mate 9 Pro and a Daydream VR headset (peak luminance 44 cd/m²). We measured the display characteristics with a spectroradiometer and then fitted a gain-offset-gamma model. The experiments were performed in a dark room to minimize the effect of external light sources.
We used a number of monochrome smooth gradient images, such as the one shown on the left of the aforementioned fig. 7, described earlier. Each image consists of rows of gradients with contrast varying from 0 (top) to 1 (bottom); given an average luminance level L, the luminance of all pixels in the top row equals L, and the luminance of the pixels in the bottom row varies from 0 to 2L on a linear scale. We measured the effect of luminance quantization at 7 chromaticities: the white point, points close to the three primaries (red, green and blue), and their opposite colors (cyan, magenta and yellow). The exact color coordinates of these colors are shown in table 1 below.
TABLE 1. Chromaticity coordinates of the monochrome smooth gradient images in CIE L′u′v′.
[Table 1 values appear as an image in the source and are not reproduced]
The stimuli were created according to the processing steps shown in fig. 7. First, one of two transfer functions, PQ or sRGB, was used to convert each linear gradient image to the 0-1 range. The values were then quantized to a sample bit depth and converted back to linear space. Quantization levels exceeding the maximum bit depth of the display were achieved by spatio-temporal dithering. Three average luminance levels were sampled for each chromaticity across the available dynamic range. Each gradient square subtends a visual angle of 20 degrees. The background wall in the virtual environment had the same color as the average of the gradient stimuli.
The design and observation procedure of the experiment was as follows: four smooth gradient images were shown, only one of which was quantized (e.g., via the flow shown in fig. 7). Nine observers aged 19-45 with normal or corrected-to-normal color vision participated. The four smooth gradient images were presented to each observer, and in each trial the position of the image with the quantized gradient among the four was random. The observer's task was to indicate, using a remote control, which image contained the quantized gradient. A QUEST procedure with 3030 trials was used to select successive quantization bit depths and compute the final threshold. To minimize the effect of dark adaptation, luminance levels were shown from darkest to brightest. Observers were allowed to adapt to the ambient conditions for 2 minutes before the experiment. Each observer could take as long as desired to respond, and could move their head freely in the virtual reality environment.
After obtaining the experimental results through the above procedure, model fitting was performed; we formulated CSF_A as a simplified parametric Barten model with five free variables, including a relative scaling factor, as shown in the following equation (23):
[Equation (23): simplified parametric Barten model, shown as an image in the source and not reproduced]
where u is the spatial frequency, L is the average luminance, and p_1, ..., p_5 denote the five free parameters. We optimize p_1, ..., p_5 so that the error (weighted sum of mean squared deviations) between the model predictions and the observed experimental measurements is minimized. We found that the above formula gives an optimal solution when p = (39.9565, 0.1722, 0.4864, 120.3724, 0.8699).
Specifically, the comparison of the model fit with the subjective standard-observer measurements may be as shown in fig. 9, which compares the monochrome contrast sensitivity function obtained from the model fit with the subjective standard observations. The overlap of the curves in fig. 9 indicates that the model results closely approximate the observed data, and the predictions remain within the error bars. In other words, the model constructed in the second way of measuring ΔL_i (and the quantization bit number threshold) can accurately predict whether contour artifacts are visible at a given quantization level. It should be noted that, to ensure that luminance gradients of different chromaticities can be well predicted by the luminance detection model, one can attempt to process the same data set with a larger exponential order than our model to optimize PSNR.
To verify the third method of measuring the change threshold ΔL_i (and the quantization bit number threshold) provided by an embodiment of the present invention, and to confirm that the change threshold ΔL_i (and quantization bit number threshold) it produces indeed ensures that quantization stripes are not observed by the human eye, a chroma quantization experiment related to the embodiment of the present invention is described below.
This chroma quantization experiment measures the maximum amount of chroma quantization that does not cause a detectable difference. We investigated the effect of chroma quantization using two color spaces, YCbCr and ICtCp, both of which aim to decorrelate luminance and chrominance. The apparatus and procedure were the same as in the monochrome quantization experiment above. The stimuli consisted of equiluminant smooth image gradients at three fixed luminance levels in the CIE L′u′v′ color space. Two line segments were selected in the u′v′ plane, as shown in fig. 10, which is a visual representation of the color lines (u′: horizontal, v′: vertical) of the chroma quantization experiment. The line segments are parallel to the chromaticity axes, bounded by the gamut of the device, intersect at the white point of the display, and are orthogonal in the chromaticity dimensions of CIE L′u′v′. For both line segments, smooth gradient stimuli were generated in a manner similar to the achromatic gradients used in experiment 1: color saturation is zero at the top of the image and increases linearly along the vertical direction to a maximum at the bottom, as shown in fig. 11, which shows the gradient images for the chroma quantization experiment. In the image on the left of fig. 11, the v′ component is held at the white point of the device gamut and only the u′ component changes along the gradient; in the image on the right of fig. 11, the u′ component is held at the white point and only the v′ component changes. The saturation of each image increases gradually from 0 at the top toward the gamut boundary at the bottom.
After the experimental results were obtained through the above procedure, model fitting was performed. The only free parameter of the color difference model is the variable a that determines the exact cone-contrast formula. We found that a ≈ 23 gives the best fit. The experimental results also show that the predictions of the color difference model closely approximate the observed experimental data, and the predictions always lie within the error bars.
A method of performing model predictions based on the color difference model is briefly described below. The model can be simply extended to take a starting color and a color direction vector as input (instead of a smooth image gradient). A color can then be constructed along the color direction vector by binary search such that the probability of detecting the stripe phenomenon equals a probability threshold (e.g., 0.5). We used this extended model to establish detection thresholds and draw color discrimination ellipses similar to MacAdam ellipses, as shown in fig. 12. Fig. 12 compares our model predictions against the CIE DeltaE 2000 difference diagram and the original MacAdam ellipse diagram; each subgraph in fig. 12 corresponds to a different luminance level, and the MacAdam ellipses were measured only at a background luminance of 48 cd/m². It should be noted that our color difference model aims to provide better predictions than traditional color difference metrics, so the shapes of its predictions are broadly comparable with these references.
Based on the same inventive concept, an apparatus related to embodiments of the present invention is provided in the following.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an apparatus 70 for color space coding according to an embodiment of the present invention, where the apparatus 70 includes: an image acquisition module 701, a color conversion module 702 and a quantization module 703, wherein:
an image obtaining module 701, configured to obtain linear colors of a plurality of color components of a first image in a first color space;
a color conversion module 702, configured to convert the linear color of each color component into a non-linear color according to a color mapping function;
a quantization module 703, configured to quantize the nonlinear color of each color component according to the quantization bit number corresponding to each color component, so as to obtain a quantized image; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
In some possible embodiments, the color conversion module 702 is specifically configured to: and converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to other color components in the plurality of color components.
In some possible embodiments, the color mapping function corresponding to any color component of the plurality of color components is obtained according to the linear color value of that color component and the change threshold of the linear color value of that color component; wherein the change threshold of the linear color value of any color component indicates the error caused by quantization using the quantization bit number threshold of the linear color value of that component, the quantization bit number threshold being the minimum quantization bit number of the linear color value of that component at which the human eye does not perceive quantization stripes in the image displayed on the display device.
In some possible embodiments, the device 70 further comprises a testing module (not shown) configured to: keep the color components other than said any color component unchanged, and select a plurality of different values L_i for said any color component; for each L_i, construct a second image in which the color component intensity varies linearly and smoothly and the average luminance is L_i; determine, from a plurality of quantization bit numbers, the quantization bit number threshold for L_i, the quantization bit number threshold being the minimum quantization bit number among the plurality at which the human eye does not perceive quantization stripes in the image displayed on the display device, the displayed image being obtained by quantizing the second image according to the quantization bit number; and set, as the change threshold, the average amplitude of the sawtooth waves in the difference image between a third image and the second image, or the average error between the third image and the second image, wherein the third image is obtained by quantizing the second image according to the quantization bit number threshold.
In some possible embodiments, the test module is specifically configured to: quantizing the second image according to any one of the quantized bit numbers to obtain a fourth image; determining a plurality of Fourier frequency components corresponding to a difference image between the fourth image and the second image; determining a degree of response of each of the plurality of Fourier frequency components to luminance and/or chrominance; converting the response degree of each Fourier frequency component to brightness and/or chroma into detection probability; and counting the detection probabilities of the multiple Fourier frequency components to obtain a probability statistic value, and determining the quantization bit number adopted by the current quantization processing as the quantization bit number threshold when the probability statistic value meets a preset probability threshold.
In some possible embodiments, the image obtaining module 701 is specifically configured to obtain the linear colors of the plurality of color components in the first color space by at least one of:
obtaining linear colors of the plurality of color components in the first color space by means of camera capture and image signal processing ISP; or generating, by the graphics card device, the linear colors of the plurality of color components in the first color space.
In some possible embodiments, the image acquisition module 701 is further configured to: acquiring linear colors of a plurality of color components of the first image in a second color space; the second color space is different from the first color space; converting the linear color of the plurality of color components in the second color space to the linear color of the plurality of color components in the first color space.
In some possible embodiments, the apparatus further comprises a post-processing module (not shown); the post-processing module is used for executing at least one of the following operations:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear color of each color component in the quantized image to a display device.
It should be noted that the relevant functional units of the above-mentioned device 70 can be implemented by hardware, software or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks described in FIG. 13 may be combined or separated into sub-blocks to implement the present scheme. The functional implementation of these functional units may refer to the related description of the source device or the end device in the embodiments of fig. 4-6 above. For the sake of brevity of the description, no further description is provided herein.
In the above embodiments, all or part may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer program instructions which, when loaded and executed on a computer, cause a process or function according to an embodiment of the invention to be performed, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one network site, computer, server, or data center to another network site, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer and can be a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes, etc.), optical media (e.g., DVDs, etc.), or semiconductor media (e.g., solid state drives), among others.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

Claims (16)

1. A color space encoding method, characterized in that the method comprises:
acquiring linear colors of a plurality of color components of a first image in a first color space;
converting the linear color of each color component into a nonlinear color according to a color mapping function;
quantizing the nonlinear colors of the color components according to the quantization bit number corresponding to each color component to obtain a quantized image, wherein the quantization bit number corresponding to at least one of the plurality of color components is different from the quantization bit numbers corresponding to the other color components.
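The per-component quantization of claim 1 can be sketched as follows (a minimal illustration, not the patented implementation; the function name and the example bit allocation are assumptions):

```python
import numpy as np

def quantize_components(nonlinear, bits_per_component):
    """Quantize each nonlinear color component with its own bit depth.

    nonlinear:          float array of shape (..., C), values in [0, 1]
    bits_per_component: sequence of C quantization bit numbers, where at
                        least one entry differs from the others
    """
    out = np.empty_like(nonlinear)
    for c, bits in enumerate(bits_per_component):
        levels = (1 << bits) - 1  # e.g. 1023 for 10 bits
        out[..., c] = np.round(nonlinear[..., c] * levels) / levels
    return out
```

For example, allocating 11 bits to one component and 10 bits to the other two bounds each component's rounding error by half of its own quantization step.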
2. The method of claim 1, wherein converting the linear colors of the respective color components into non-linear colors according to a color mapping function comprises:
converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to that color component among the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to the other color components.
3. The method of claim 2,
the color mapping function corresponding to any color component of the plurality of color components is obtained according to the linear color values of that color component and the change thresholds of those linear color values;
wherein the change threshold of a linear color value of the color component indicates the error introduced by quantizing that value with its quantization bit number threshold, the quantization bit number threshold being the minimum quantization bit number at which human eyes cannot perceive quantization stripes in an image displayed on a display device.
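One way such a mapping function could be obtained from the change thresholds — an assumption in the spirit of perceptual-quantizer-style derivations, not necessarily the patented construction — is to integrate the reciprocal of the threshold over the linear-color range (function name and sampling are illustrative):

```python
import numpy as np

def mapping_from_thresholds(values, thresholds):
    """Derive a nonlinear mapping V(L) from change thresholds dL(L).

    values:     increasing linear-color sample points L_i
    thresholds: change threshold dL(L_i) measured at each L_i
    Returns V sampled at `values`, normalized to [0, 1]; equal steps in V
    then correspond roughly to one change threshold in linear color.
    """
    values = np.asarray(values, dtype=float)
    sensitivity = 1.0 / np.asarray(thresholds, dtype=float)
    # cumulative trapezoidal integral of 1/dL over L
    v = np.concatenate(([0.0], np.cumsum(
        0.5 * (sensitivity[1:] + sensitivity[:-1]) * np.diff(values))))
    return v / v[-1]
```

Where thresholds are small, the mapping rises steeply, spending more codewords on values whose quantization error is easiest to see.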
4. The method of claim 3, wherein before the acquiring of the linear colors of the plurality of color components of the first image in the first color space, the method further comprises:
keeping the color components other than said any color component unchanged, selecting a plurality of different values Li on said any color component, and, for each Li, constructing a second image in which the intensity of the color component varies linearly and smoothly and whose average luminance is Li;
determining a quantization bit number threshold for Li from a plurality of quantization bit numbers, the quantization bit number threshold being the minimum quantization bit number among the plurality at which human eyes cannot perceive quantization stripes in the image displayed on the display device, where the displayed image is obtained by quantizing the second image according to the quantization bit number;
taking an average value of amplitudes of sawtooth waves in a difference image between a third image and the second image as the change threshold, or taking an average value of errors between the third image and the second image as the change threshold; wherein the third image is an image obtained by quantizing the second image according to the quantization bit number threshold.
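The construction in claim 4 — a smooth ramp (second image), its quantized version (third image), and the mean error between them as the change threshold — can be sketched like this (1-D for simplicity; the ramp span and function names are illustrative assumptions):

```python
import numpy as np

def ramp_image(width, mean_level, span):
    """Second image: component intensity varies linearly and smoothly,
    with average value mean_level (standing in for average luminance Li)."""
    return np.linspace(mean_level - span / 2, mean_level + span / 2, width)

def quantize(x, bits):
    """Uniform quantization of values in [0, 1] with `bits` bits."""
    levels = (1 << bits) - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def change_threshold(width, mean_level, span, bits):
    """Mean error between the quantized (third) and original (second) image."""
    second = ramp_image(width, mean_level, span)
    third = quantize(second, bits)
    return float(np.mean(np.abs(third - second)))
```

As expected, the measured threshold shrinks as the quantization bit number grows, since the quantization step becomes finer.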
5. The method of claim 4, wherein the determining of the quantization bit number threshold for Li from the plurality of quantization bit numbers comprises:
quantizing the second image according to any one of the quantized bit numbers to obtain a fourth image;
determining a plurality of Fourier frequency components corresponding to a difference image between the fourth image and the second image;
determining a degree of response of each of the plurality of Fourier frequency components to luminance and/or chrominance;
converting the response degree of each Fourier frequency component to brightness and/or chroma into detection probability;
and counting the detection probabilities of the multiple Fourier frequency components to obtain a probability statistic value, and determining the quantization bit number adopted by the current quantization processing as the quantization bit number threshold when the probability statistic value meets a preset probability threshold.
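Claim 5's frequency-domain test resembles contrast-sensitivity-based banding detectors. The sketch below is a simplified stand-in under stated assumptions: the `csf` weighting, the Weibull psychometric function, and its slope `beta` are placeholders, not the patented model:

```python
import numpy as np

def detection_probability(second, fourth, csf, beta=3.5):
    """Pool per-frequency detection probabilities for a quantized signal.

    second: original (second) image, here a 1-D signal
    fourth: its quantized version (fourth image)
    csf:    callable mapping normalized spatial frequency to a sensitivity
            weight (the "degree of response" to luminance/chrominance)
    """
    diff = np.asarray(fourth) - np.asarray(second)
    amplitude = np.abs(np.fft.rfft(diff)) * 2.0 / diff.size
    freqs = np.fft.rfftfreq(diff.size)
    response = amplitude * csf(freqs)
    # Weibull psychometric function: response -> detection probability
    p = 1.0 - np.exp(-np.power(response, beta))
    # probability summation across all frequency components
    return float(1.0 - np.prod(1.0 - p))
```

Sweeping the quantization bit number and taking the smallest one whose pooled probability stays below the preset probability threshold would then yield the quantization bit number threshold of claim 5.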
6. The method according to any of claims 1-5, wherein the linear colors of the plurality of color components of the first image in the first color space are obtained in at least one of the following manners:
obtaining the linear colors of the plurality of color components in the first color space by camera capture and image signal processing (ISP); or
generating, by the graphics card device, the linear colors of the plurality of color components in the first color space.
7. The method of any of claims 1-5, wherein obtaining linear colors of a plurality of color components of the first image in the first color space comprises:
acquiring linear colors of a plurality of color components of the first image in a second color space; the second color space is different from the first color space;
converting the linear color of the plurality of color components in the second color space to the linear color of the plurality of color components in the first color space.
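As one concrete (assumed) instance of the conversion in claim 7, linear BT.709 RGB components in a second color space can be converted to linear CIE XYZ components in a first color space with the standard 3×3 matrix; the patent does not fix which color spaces are involved, so this pairing is purely illustrative:

```python
import numpy as np

# BT.709 linear RGB -> CIE XYZ (D65): the standard matrix, shown as one
# possible second-to-first color-space conversion of linear components
RGB709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb):
    """Convert linear RGB of shape (..., 3) to linear XYZ, row by row."""
    return np.asarray(rgb) @ RGB709_TO_XYZ.T
```

Because both spaces are linear, a single matrix multiply suffices; the nonlinear mapping and quantization of claim 1 are applied only afterwards.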
8. The method according to any of claims 1-7, wherein after the non-linear colors of the respective color components are quantized to obtain a quantized image,
the method further comprises at least one of:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear color of each color component in the quantized image to a display device.
9. An apparatus for color space encoding, the apparatus comprising:
an image acquisition module for acquiring linear colors of a plurality of color components of a first image in a first color space;
the color conversion module is used for converting the linear color of each color component into a nonlinear color according to a color mapping function;
the quantization module is used for quantizing the nonlinear colors of the color components according to the quantization bit number corresponding to each color component to obtain a quantized image, wherein the quantization bit number corresponding to at least one of the plurality of color components is different from the quantization bit numbers corresponding to the other color components.
10. The device of claim 9, wherein the color conversion module is specifically configured to:
converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to that color component among the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to the other color components.
11. The apparatus of claim 10,
the color mapping function corresponding to any color component of the plurality of color components is obtained according to the linear color values of that color component and the change thresholds of those linear color values;
wherein the change threshold of a linear color value of the color component indicates the error introduced by quantizing that value with its quantization bit number threshold, the quantization bit number threshold being the minimum quantization bit number at which human eyes cannot perceive quantization stripes in an image displayed on a display device.
12. The apparatus of claim 11, further comprising a testing module to:
keeping the color components other than said any color component unchanged, selecting a plurality of different values Li on said any color component, and, for each Li, constructing a second image in which the intensity of the color component varies linearly and smoothly and whose average luminance is Li;
determining a quantization bit number threshold for Li from a plurality of quantization bit numbers, the quantization bit number threshold being the minimum quantization bit number among the plurality at which human eyes cannot perceive quantization stripes in the image displayed on the display device, where the displayed image is obtained by quantizing the second image according to the quantization bit number;
and setting an average value of amplitudes of sawtooth waves in a difference image between the third image and the second image as the variation threshold, or setting an average value of errors between the third image and the second image as the variation threshold, wherein the third image is an image obtained by quantizing the second image according to the quantization bit number threshold.
13. The device of claim 12, wherein the testing module is specifically configured to:
quantizing the second image according to any one of the quantized bit numbers to obtain a fourth image;
determining a plurality of Fourier frequency components corresponding to a difference image between the fourth image and the second image;
determining a degree of response of each of the plurality of Fourier frequency components to luminance and/or chrominance;
converting the response degree of each Fourier frequency component to brightness and/or chroma into detection probability;
and counting the detection probabilities of the multiple Fourier frequency components to obtain a probability statistic value, and determining the quantization bit number adopted by the current quantization processing as the quantization bit number threshold when the probability statistic value meets a preset probability threshold.
14. The apparatus according to any of claims 9-13, wherein the image acquisition module is specifically configured to acquire the linear colors of the plurality of color components in the first color space by at least one of:
obtaining the linear colors of the plurality of color components in the first color space by camera capture and image signal processing (ISP); or
generating, by the graphics card device, the linear colors of the plurality of color components in the first color space.
15. The apparatus of any of claims 9-13, wherein the image acquisition module is further configured to:
acquiring linear colors of a plurality of color components of the first image in a second color space; the second color space is different from the first color space;
converting the linear color of the plurality of color components in the second color space to the linear color of the plurality of color components in the first color space.
16. The apparatus of any of claims 9-15, further comprising a post-processing module; the post-processing module is used for executing at least one of the following operations:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear color of each color component in the quantized image to a display device.
CN201910038935.1A 2019-01-15 2019-01-15 Color space encoding method and apparatus Active CN111435990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910038935.1A CN111435990B (en) 2019-01-15 2019-01-15 Color space encoding method and apparatus


Publications (2)

Publication Number Publication Date
CN111435990A true CN111435990A (en) 2020-07-21
CN111435990B CN111435990B (en) 2022-09-09

Family

ID=71580913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910038935.1A Active CN111435990B (en) 2019-01-15 2019-01-15 Color space encoding method and apparatus

Country Status (1)

Country Link
CN (1) CN111435990B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023088186A1 (en) * 2021-11-18 2023-05-25 北京与光科技有限公司 Image processing method and apparatus based on spectral imaging, and electronic device
WO2023173953A1 (en) * 2022-03-15 2023-09-21 华为技术有限公司 Probe data processing and coding methods and devices

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080123971A1 (en) * 2002-12-12 2008-05-29 Canon Kabushiki Kaisha Image processing apparatus
CN107211142A (en) * 2015-01-30 2017-09-26 汤姆逊许可公司 The method and apparatus decoded to coloured image
CN107836118A (en) * 2015-05-21 2018-03-23 瑞典爱立信有限公司 Pixel pre-processes and coding



Also Published As

Publication number Publication date
CN111435990B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
US11289002B2 (en) System and method for a six-primary wide gamut color system
US10535125B2 (en) Dynamic global tone mapping with integrated 3D color look-up table
WO2020007165A1 (en) Method and device for video signal processing
RU2710291C2 (en) Methods and apparatus for encoding and decoding colour hdr image
JP6937695B2 (en) Methods and Devices for Encoding and Decoding Color Pictures
CN101690161B (en) Apparatus and method for automatically computing gamma correction curve
CN111429827B (en) Display screen color calibration method and device, electronic equipment and readable storage medium
US20170324959A1 (en) Method and apparatus for encoding/decoding a high dynamic range picture into a coded bitstream
KR102523233B1 (en) Method and device for decoding a color picture
GB2568326A (en) Video image processing
US9179042B2 (en) Systems and methods to optimize conversions for wide gamut opponent color spaces
US11689748B2 (en) Pixel filtering for content
US9961236B2 (en) 3D color mapping and tuning in an image processing pipeline
US11006152B2 (en) Method and apparatus for encoding/decoding a high dynamic range picture into a coded bitstream
CN111435990B (en) Color space encoding method and apparatus
Lee et al. Contrast-preserved chroma enhancement technique using YCbCr color space
JP2018507618A (en) Method and apparatus for encoding and decoding color pictures
JP2016025635A (en) Image processing system and method of the same
CN107492365B (en) Method and device for obtaining color gamut mapping fitting function
Wen P‐46: A Color Space Derived from CIELUV for Display Color Management
EP3716619A1 (en) Gamut estimation
CN117594007A (en) Color gamut conversion method and device and display equipment
EP3242481A1 (en) Method and apparatus for encoding/decoding a high dynamic range picture into a coded bitstream
Sarkar Evaluation of the color image and video processing chain and visual quality management for consumer systems
Son et al. Implementation of a real-time color matching between mobile camera and mobile LCD based on 16-bit LUT design

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant