CN111435990B - Color space encoding method and apparatus - Google Patents

Color space encoding method and apparatus

Info

Publication number
CN111435990B
CN111435990B CN201910038935.1A CN201910038935A
Authority
CN
China
Prior art keywords
color
image
linear
quantization
components
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910038935.1A
Other languages
Chinese (zh)
Other versions
CN111435990A (en)
Inventor
Fang Huameng (方华猛)
Di Peiyun (邸佩云)
Rafal Mantiuk (拉法尔·曼提尔克)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
University of Cambridge
Original Assignee
Huawei Technologies Co Ltd
University of Cambridge
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd, University of Cambridge filed Critical Huawei Technologies Co Ltd
Priority to CN201910038935.1A priority Critical patent/CN111435990B/en
Publication of CN111435990A publication Critical patent/CN111435990A/en
Application granted granted Critical
Publication of CN111435990B publication Critical patent/CN111435990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/64 Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor
    • H04N1/648 Transmitting or storing the primary (additive or subtractive) colour signals; Compression thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output

Abstract

The application provides a color space encoding method and apparatus. The method includes: acquiring linear colors of a plurality of color components of a first image in a first color space; converting the linear color of each color component into a nonlinear color according to a color mapping function; and quantizing the nonlinear color of each color component according to the quantization bit number corresponding to that color component, thereby obtaining a quantized image, where the quantization bit number corresponding to at least one of the color components differs from the quantization bit numbers corresponding to the other color components. The method and apparatus help effectively reduce the total number of quantization bits and save storage space and transmission bandwidth without introducing quantization stripes.

Description

Color space encoding method and apparatus
Technical Field
The present invention relates to the field of color coding, and in particular, to a color space coding method and apparatus.
Background
Generally, a camera captures an optical signal from the external environment through a lens, and an image sensor converts the optical signal into an electrical signal that is linear with light intensity. The camera captures image data in RAW format (RAW image data for short) through the image sensor (Sensor); RAW image data contains only one gray value at each pixel, and that gray value is one of the red/green/blue components. The RAW image data is processed by the demosaic module of an Image Signal Processor (ISP), which interpolates the other two components at each pixel, generating image data in native RGB format (native RGB image data for short). The native RGB image data is transformed to a gamut space (e.g., BT.709, P3, BT.2020, etc.) via a linear mapping, yielding new RGB image data (linear color values).
To reduce the amount of data processed by the ISP, reduce the transmission bandwidth of image and video data, and reduce storage space, the OETF module used for quantization coding in the ISP exploits the fact that human vision responds non-linearly to light intensity, applying non-linear compression coding to the luminance of the captured image data. For example, the OETF function in the ITU-R BT.709 standard applies Gamma correction to the linear color values, converting them into nonlinear color values, and the three RGB primary color components are then quantized with the same bit width. Assuming quantization coding with n quantization bits and taking full range as an example, the coded values of the three color components R, G, B are: v_{R/G/B} = V_{R/G/B} × (2^n − 1), where V_{R/G/B} is the normalized nonlinear color value of the R, G, B linear color components and v_{R/G/B} is the corresponding quantized code word of the output color (i.e., the quantized nonlinear color value). The quantized nonlinear color values may then be further video encoded before output, or output directly.
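The uniform-bit-width scheme described above can be sketched in a few lines (a hedged illustration using NumPy; the BT.709 OETF constants are the standard ones, and full-range 10-bit quantization is assumed):

```python
import numpy as np

def bt709_oetf(linear):
    """ITU-R BT.709 OETF: map normalized linear light in [0, 1] to a
    nonlinear (gamma-corrected) signal value."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

def quantize_full_range(v, n_bits):
    """Full-range quantization: v_{R/G/B} = round(V_{R/G/B} * (2^n - 1))."""
    return np.round(v * (2 ** n_bits - 1)).astype(np.int64)

linear_rgb = np.array([0.0, 0.18, 1.0])      # example linear color values
nonlinear = bt709_oetf(linear_rgb)           # gamma correction
codes = quantize_full_range(nonlinear, 10)   # same bit width for R, G and B
```

Black maps to code 0 and full-scale white to 1023; mid-gray lands in between, which is exactly where a too-coarse quantization step produces visible banding on smooth gradients.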
In the above color coding, the quantization coding step of the OETF module maps many color values to a single code value, so information is lost. If the quantization is not chosen appropriately, information visible to the human eye is lost and quantization stripes (banding) appear on the image. For example, Fig. 1 shows a scene in which stripes visible to the human eye appear after gamma correction and quantization coding of the linear color values, seriously degrading image quality. Avoiding the stripes by simply increasing the quantization bit number n increases the data volume and wastes transmission bandwidth.
Disclosure of Invention
The embodiments of the present invention provide a color space coding method and device, which can effectively reduce the total number of quantization bits and save storage space and transmission bandwidth without introducing quantization stripes.
In a first aspect, an embodiment of the present invention provides a color space encoding method, where the method includes: acquiring linear colors of a plurality of color components of a first image in a first color space; converting the linear color of each color component into a nonlinear color according to a color mapping function; quantizing the nonlinear colors of the color components according to the quantization bit number corresponding to each color component, so as to obtain a quantized image; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
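The three steps of the first aspect can be sketched as follows (a minimal, hypothetical illustration: the plain gamma curve and the (10, 10, 9) bit split merely stand in for the patent's per-component mapping functions and bit allocation):

```python
import numpy as np

def gamma_map(linear, gamma=1 / 2.4):
    """Stand-in color mapping function (linear -> nonlinear)."""
    return np.power(np.clip(linear, 0.0, 1.0), gamma)

def encode_color_space(linear_rgb, bits_per_component):
    """Quantize each color component with its own bit depth.

    linear_rgb:         (..., 3) array of normalized linear R, G, B values
    bits_per_component: per-component bit depths, e.g. (10, 10, 9) --
                        at least one differs from the others
    """
    linear_rgb = np.asarray(linear_rgb, dtype=np.float64)
    codes = np.empty(linear_rgb.shape, dtype=np.int64)
    for c, n_bits in enumerate(bits_per_component):
        nonlinear = gamma_map(linear_rgb[..., c])                # step 2
        codes[..., c] = np.round(nonlinear * (2 ** n_bits - 1))  # step 3
    return codes

quantized = encode_color_space([[1.0, 0.5, 1.0]], (10, 10, 9))
```

With this split, full-scale R and G map to code 1023 while full-scale B maps only to 511, saving one bit on the component the eye is least sensitive to.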
It can be seen that, by implementing the embodiment of the present invention, different quantization bit numbers can be reasonably allocated to different color components without introducing quantization stripes in the color coding process; that is, a suitable quantization bit number can be designed for each color component. This avoids the waste of quantization bits caused by using a uniform bit number for every color component as in the prior art, effectively reduces the amount of transmitted data, and saves storage space and transmission bandwidth.
Based on the first aspect, in a possible implementation manner, the quantization bit number corresponding to each color component is determined according to the display characteristics of a display device, where the display characteristics include at least one of the maximum brightness, the minimum brightness, and the dispersion range of the display device. The occurrence of quantization stripes is related to the display characteristics of the display device; designing the quantization bit numbers of the different color components according to those characteristics helps prevent quantization stripes from appearing after each color component is quantized.
For example, for linear colors in CIE1931RGB color space, the quantization bit number of each color component can be designed according to the display characteristics of a certain display device, as shown in the following formula:
n_r = ceil(n + log2(1.000))
n_g = ceil(n + log2(0.9452))
n_b = ceil(n + log2(0.3116))

where n_r, n_g, n_b denote the quantization bit numbers used for the three primary color components R (red), G (green) and B (blue), respectively; ceil(·) denotes rounding up; and n denotes the maximum quantization bit number among the color components.
The parameters 1.000, 0.9452 and 0.3116 in the above formula are obtained according to the display characteristics of the display device. For example, with a maximum quantization bit number n of 10 bits, the formula yields 10 bits for the R component, 10 bits for the G component, and 9 bits for the B component. The quantization bit numbers designed in this way effectively reduce the amount of transmitted data, save storage space and transmission bandwidth, help avoid quantization stripes after quantization, and improve image quality.
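A bit-allocation rule of the form n_i = ceil(n + log2(w_i)) reproduces the numbers quoted above; the sketch below is an inference from the stated parameters (1.000, 0.9452, 0.3116) and stated results (10/10/9 bits), not a verbatim transcription of the patent's formula image:

```python
import math

def allocate_bits(n_max, weights):
    """Per-component bit depths: n_i = ceil(n_max + log2(w_i)).

    n_max   -- maximum quantization bit number (10 in the example)
    weights -- display-dependent parameters for R, G, B
    """
    return tuple(math.ceil(n_max + math.log2(w)) for w in weights)

bits = allocate_bits(10, (1.000, 0.9452, 0.3116))
# -> (10, 10, 9): R and G keep 10 bits, B drops to 9
```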
Based on the first aspect, in a possible implementation manner, the converting the linear color of each color component into a non-linear color according to a color mapping function includes: and converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to other color components in the plurality of color components.
For example, for linear colors in the CIE1931RGB color space, the color mapping function of each color component can be designed according to the display characteristics of a certain display device, as shown in the following formula:
[Equation image not reproduced in this text: the definitions of the color mapping functions f(L_r), f(L_g), f(L_b).]
where f(L_r), f(L_g) and f(L_b) denote the color mapping functions corresponding to the R, G and B components of RGB, respectively; L_r, L_g and L_b denote the linear color values of the R, G and B components; and parameters such as 0.09754, 0.09296 and 0.09648 may be determined according to the display characteristics of the display device.
That is, in the color coding process, besides designing different quantization bit numbers for different color components according to the display characteristics of the display device, corresponding color mapping functions can also be designed for the different color components of an image according to those characteristics, so that the linear color of each component is converted into a nonlinear color in a reasonable way. The two aspects are complementary: combining them effectively avoids quantization stripes, saves storage space and transmission bandwidth, and improves the technical effect.
Based on the first aspect, in a possible implementation manner, the color mapping function corresponding to any color component of the plurality of color components is further obtained from the value of the linear color of that component and a variation threshold of that value. The variation threshold of the linear color value of a component indicates the error caused by quantizing that value with its quantization bit number threshold, where the quantization bit number threshold is the minimum quantization bit number for the linear color value of that component such that the human eye does not perceive quantization stripes in the image displayed on a display device.
For example, in some application scenarios, the relationship between the variation threshold of any one color component and the color mapping function f(·) is as follows:
[Equation image not reproduced in this text: the relationship between the variation threshold ΔL_i and the color mapping function f(·).]
where ΔL_i denotes the variation threshold of any color component i.
It can be seen that, by applying the embodiment of the present invention, a more reasonable color mapping function of each color component can be obtained, the color mapping function of each color component is used to generate a nonlinear color value, and then quantization processing is performed according to the quantization level of each color component designed by the embodiment of the present invention, so that it can be ensured that no quantization stripe visible to human eyes appears in an image, a storage space and a transmission bandwidth are saved, and a technical effect is improved.
Based on the first aspect, in a possible implementation, the variation threshold ΔL_i may be obtained by the following procedure.
First, the color gamut range of the device to be displayed on, i.e., the color gamut range that the display device can represent (for example, its CIE1931XYZ gamut), can be measured with a color analyzer, colorimeter, or similar instrument. Then, the CIE1931XYZ gamut space of the display device is converted, using the linear conversion relationship between color spaces, into the first color space in which quantization is performed (for example, the CIE1931RGB color space). For any color component (such as the R, G or B component of RGB) of the linear colors of the first color space input to the display device, the other color components are kept unchanged and a plurality of different values L_i are selected on that component; for each L_i, a source image whose color component intensity varies linearly and smoothly, with average luminance L_i, is constructed (this source image may be referred to as a second image). A quantization bit number threshold for L_i is then determined from a plurality of quantization bit numbers: it is the minimum quantization bit number among them for which the human eye cannot perceive quantization stripes in the image displayed on the display device, where the displayed image is obtained by quantizing the second image with that quantization bit number. The second image is then quantized with the quantization bit number threshold to obtain a third image. Finally, in some application scenarios, the average amplitude of the sawtooth waves in the difference image between the third image and the second image may be used as the variation threshold; in other application scenarios, the average error between the third image and the second image may be used as the variation threshold.
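The measurement loop above — build a smooth ramp with mean L_i, quantize it at the threshold bit depth, and average the error — can be sketched like this (the ramp span and width are illustrative assumptions, not values from the patent):

```python
import numpy as np

def make_gradient(L_i, half_span=0.05, width=1024):
    """Second image: a smoothly, linearly varying ramp in one color
    component whose average value is L_i."""
    return np.linspace(L_i - half_span, L_i + half_span, width)

def variation_threshold(L_i, n_bits_threshold):
    """Quantize the second image with the quantization bit number threshold
    (giving the third image) and return the average error between them."""
    src = make_gradient(L_i)
    levels = 2 ** n_bits_threshold - 1
    third = np.round(src * levels) / levels      # quantize and de-quantize
    return float(np.mean(np.abs(third - src)))   # average error as delta L_i

dL = variation_threshold(L_i=0.5, n_bits_threshold=9)
```

The average error can never exceed half a quantization step, so the resulting ΔL_i shrinks as the threshold bit depth grows.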
Based on the first aspect, in a possible implementation, the quantization bit number threshold of L_i may be determined as follows. First, traverse a plurality of quantization bit numbers (for example, increasing the quantization bit number step by step from small to large), and quantize the second image with each candidate to obtain a fourth image. Then, determine the plurality of Fourier frequency components corresponding to the difference image between the fourth image and the second image; determine the degree of response of each Fourier frequency component to luminance and/or chrominance, such as the normalized contrast with respect to luminance and/or chrominance; and convert the response of each Fourier frequency component to luminance and/or chrominance into a detection probability. Finally, aggregate the detection probabilities of the Fourier frequency components into a probability statistic; when the probability statistic meets a preset probability threshold, the quantization bit number used in the current quantization is determined to be the quantization bit number threshold.
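A highly simplified sketch of this search follows (hedged: the Weibull-style psychometric curve, the sensitivity value, and the peak-probability pooling rule are stand-ins; the patent does not specify them in this passage):

```python
import numpy as np

def quant_error_spectrum(second, n_bits):
    """Fourier components of the difference between the quantized
    (fourth) image and the second image."""
    levels = 2 ** n_bits - 1
    fourth = np.round(second * levels) / levels
    return np.fft.rfft(fourth - second)

def detection_probabilities(spectrum, sensitivity):
    """Map each frequency component's response to a detection probability
    via a Weibull-style psychometric function (a stand-in model)."""
    contrast = np.abs(spectrum) * sensitivity
    return 1.0 - np.exp(-np.power(contrast, 3.5))

def bit_number_threshold(second, sensitivity, p_threshold=0.5,
                         candidates=range(6, 15)):
    """Smallest candidate bit depth whose pooled detection probability
    stays below the preset probability threshold."""
    for n in candidates:                  # traverse from small to large
        p = detection_probabilities(quant_error_spectrum(second, n),
                                    sensitivity)
        if p.max() < p_threshold:         # probability statistic: the peak
            return n
    return max(candidates)

ramp = np.linspace(0.0, 1.0, 256)
n_threshold = bit_number_threshold(ramp, sensitivity=50.0)
```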
It can be seen that, through the above process, the linear colors of the first color space can be tested against the display characteristics of the display device in a modeling manner, so as to obtain the quantization bit number threshold of each color component. Furthermore, the variation threshold ΔL_i corresponding to the linear color value of each color component can be obtained from that component's quantization bit number threshold, so that the color mapping function of each color component is obtained from the linear color values and the variation threshold ΔL_i. The embodiment of the invention can therefore design different color mapping functions for different color components of linear colors, ensure that quantization stripes visible to the human eye do not appear in the quantized image, save storage space and transmission bandwidth, and improve the technical effect.
The linear colors of the color components of the first image in the first color space according to the embodiments of the present invention may be derived in various ways.
For example, in a possible embodiment, the linear colors of the plurality of color components in the first color space may be obtained by means of camera capture and image signal processing ISP.
For another example, in a possible implementation, the linear colors of the plurality of color components in the first color space may be generated by a graphics card device.
For another example, in a possible embodiment, linear colors of a plurality of color components of the first image in a second color space may be first obtained, wherein the second color space is different from the first color space; then, the linear colors of the plurality of color components in the second color space are converted into the linear colors of the plurality of color components in the first color space by conversion of the color space.
Based on the first aspect, in a possible implementation manner, after the non-linear colors of the color components are quantized to obtain a quantized image, the quantized image may be further post-processed according to the needs of an application scenario, and the relevant post-processing manner may be, for example, one or a combination of the following manners:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear color of each color component in the quantized image to a display device.
In a second aspect, an embodiment of the present invention provides an apparatus for color space coding, including: the device comprises an image acquisition module, a color conversion module, a quantization module, a test module and a post-processing module. Wherein: the image acquisition module may be configured to acquire linear colors of a plurality of color components of the first image in a first color space; the color conversion module may be configured to convert the linear colors of the respective color components into non-linear colors according to a color mapping function; the quantization module may be configured to quantize the nonlinear color of each color component according to the quantization bit number corresponding to each color component, so as to obtain a quantized image; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
The functional modules in the device may be specifically adapted to implement the method described in the first aspect.
In a third aspect, an embodiment of the present invention provides another apparatus for color space coding, where the apparatus includes: a memory for storing program instructions and one or more processors coupled to the memory for invoking the program instructions, in particular for performing the method as described in the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-volatile computer-readable storage medium for storing program code implementing the method of the first aspect; when executed by a computing device, the program code performs the method of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product; the computer program product comprising program instructions which, when executed by a computing device, cause the controller to perform the method of the first aspect as set forth above. The computer program product may be a software installation package, which, in case it is required to use the method provided by any of the possible designs of the first aspect described above, may be downloaded and executed on the controller to implement the method of the first aspect.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. Therefore, by combining the two aspects, the occurrence of quantization stripes can be effectively avoided, different quantization bit numbers can be reasonably distributed for different color components, the waste of the quantization bit numbers is avoided, the transmission data volume is effectively reduced, and the storage space and the transmission bandwidth are saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present invention, the drawings required to be used in the embodiments or the background art of the present invention will be described below.
FIG. 1 is a diagram illustrating a linear color image and a scene in which quantization stripes appear after the image is subjected to quantization coding;
fig. 2 is a schematic structural diagram of a source device and an end device and a schematic diagram of a color coding system composed of the source device and the end device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus that can be used as either or both of a source device and an end device according to an embodiment of the present invention;
FIG. 4 is a flow chart of a color coding method according to an embodiment of the present invention;
FIG. 5 is a flow chart of another color coding method according to an embodiment of the present invention;
FIG. 6 is a flow chart of another color coding method according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of generating a quantized image from a source image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a sawtooth scene according to an embodiment of the present invention;
FIG. 9 is a graphical representation of comparative results of experimental data provided by an embodiment of the present invention;
FIG. 10 is a graphical representation of the results of an experiment provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of a gradient image for a chroma quantization experiment according to an embodiment of the present invention;
FIG. 12 is a graphical representation of comparative results of further experimental data provided by an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described below with reference to the drawings. The terminology used in the description of the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
For a better understanding of the embodiments of the present invention, the color space involved in the embodiments of the present invention will be described first.
A color space (or colour space, or color system) is usually represented by a three-dimensional model: three coordinates, corresponding to three parameters, describe the position of a specific color. In specific application scenarios, the color space is also called a color model. Different color spaces can be designed by choosing different coordinate parameters. In the tristimulus model, if a color and a mixture of the three primary colors in certain proportions appear identical to the human eye, those proportions of the three primaries are called the tristimulus values of the color. For questions such as how to select the three primary colors, how to quantify them, and how to determine the stimulus values, there is a universal standard: the Commission Internationale de l'Éclairage (CIE) standard colorimetric system. Many color spaces exist under this standard, for example CIE1931RGB, CIE1931XYZ, CIE1931xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, and so on. Other color spaces are also defined, such as LMS, CMYK, CIE YUV, HSL, HSB (HSV), YCbCr, etc. Color spaces can be expressed in many forms; different color spaces may have different characteristics but can be converted into one another.
For example, the CIE1931RGB color space uses monochromatic light of three wavelengths, 700 nm (R), 546.1 nm (G) and 435.8 nm (B), as the three primary colors. Each value in CIE1931RGB scales linearly with luminance (cd/m²); such values may be referred to as RGB linear color values. Correspondingly, the RGB values obtained by further applying nonlinear processing to the RGB linear color values may be referred to as RGB nonlinear color values.
The CIE1931XYZ color space can be derived from the CIE1931RGB standard by a linear transformation. Because the CIE1931RGB standard was formulated from experimental results, its visible-light gamut takes negative values in the coordinate system. For convenience of calculation and conversion, the CIE chose a triangular area covering the entire visible gamut and applied a linear transformation to it so that the visible-light gamut falls in the positive range. The CIE1931XYZ color space is thus obtained by introducing three imaginary primary colors X, Y and Z.
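The linear relationship between the two spaces is a fixed 3×3 matrix; the classic CIE 1931 RGB→XYZ transform (with the customary 1/0.17697 scaling, so the Y row reproduces the luminance weighting) can be applied as follows:

```python
import numpy as np

# Classic CIE 1931 RGB -> XYZ matrix; every row sums to 1 before the
# 1/0.17697 scaling, so equal-energy white (R = G = B) maps to X = Y = Z.
M_RGB_TO_XYZ = (1.0 / 0.17697) * np.array([
    [0.49000, 0.31000, 0.20000],
    [0.17697, 0.81240, 0.01063],
    [0.00000, 0.01000, 0.99000],
])

def rgb_to_xyz(rgb):
    """Convert a CIE1931RGB linear color value to CIE1931XYZ."""
    return M_RGB_TO_XYZ @ np.asarray(rgb, dtype=np.float64)

white = rgb_to_xyz([1.0, 1.0, 1.0])   # X = Y = Z for equal-energy white
```

The inverse matrix converts XYZ back to RGB, which is the direction used when mapping a display's measured XYZ gamut into the first color space where quantization is performed.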
The system architecture according to embodiments of the present invention is described below. Referring to fig. 2, fig. 2 is a block diagram of a color coding system according to an embodiment of the present invention, as shown in fig. 2, the color coding system includes a source device 10 and an end device 20, the source device 10 generates color-coded video data/image data, and the end device 20 can perform color decoding and display on the color-coded video data/image data. Various implementations of the source device 10 and the end device 20, or both in combination, may include one or more processors and memory coupled to the one or more processors. The memory can include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein.
The source device 10 and/or the end device 20 can include a variety of devices, including a desktop computer, a mobile computing device, a notebook (e.g., laptop) computer, a tablet computer, a set-top box, a cell phone, a television, a camera, a display device, a digital media player, a video game console, an in-vehicle computer, a display, a projector, a video surveillance appliance, a video conferencing appliance, a live-on-demand appliance, or the like.
End device 20 may receive encoded video data from source device 10 via link 30.
In one example, link 30 may comprise one or more media or devices capable of moving color-coded video/image data from source device 10 to end device 20, in which example the color-coded video/image data may be moved to end device 20 by way of copying/burning through a non-volatile storage medium, for example.
In one example, link 30 may comprise one or more communication media that enable source device 10 to transmit color-coded video/image data directly to end device 20. In this example, the source device 10 may modulate the color-coded video/image data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated data to the end device 20. The one or more communication media may include wireless communication media such as Radio Frequency (RF) spectrum, Wi-Fi, Bluetooth, mobile networks, and cellular data networks, and wired communication media such as a physical transmission line for DisplayPort (DP), High Definition Multimedia Interface (HDMI), coaxial cable, coarse coaxial cable, twisted pair, and optical fiber. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include a router, switch, base station, or other apparatus that facilitates communication from the source device 10 to the end device 20.
In another example, color-coded video/image data may be output from output interface 160 to storage device 40. Similarly, color-coded video/image data may be accessed from storage device 40 through input interface 260. Storage device 40 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, frame buffers, or any other suitable digital storage media for storing color-coded video/image data.
In another example, the storage device 40 may correspond to a file server or another intermediate storage device that may hold the color-coded video/image data generated by the source device 10. End device 20 may access the stored video data from storage device 40 via streaming or download. The file server may be any type of server capable of storing color-coded video/image data and transmitting such data to end device 20. Example file servers include web servers (e.g., for a website), FTP servers, Network Attached Storage (NAS) devices, or local disk drives. End device 20 may access the color-coded video/image data through any standard data connection, including an internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from storage device 40 may be a streaming transmission, a download transmission, or a combination of both.
In some implementations, the color space encoding techniques of embodiments of the present invention may be applied to support a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the internet), or applying color coding for video data/image data stored on a data storage medium, or applying color decoding for video data/image data stored on a data storage medium, or other applications. In some examples, color coding systems may be used to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
The color coding system illustrated in fig. 2 is merely an example, and the techniques of the embodiments of this disclosure may be applied to color coding settings (e.g., color coding or color decoding) that do not necessarily include any data communication between the source device 10 and the end device 20. In other examples, data is retrieved from local storage, streamed over a network, and so on. In many examples, encoding and decoding are performed by devices that do not communicate with each other, but simply encode data to memory and/or retrieve data from memory and decode it.
In the example of fig. 2, source device 10 specifically includes an image source 100, a color encoding module 120, and an output interface 160. In some examples, output interface 160 may include a modulator/demodulator (modem) and/or a transmitter. Image source 100 may comprise a combination of one or more of a video/image capture device (e.g., a camera and an ISP module), a video/image archive containing previously captured video/image data, a video feed interface to receive video/image data from a video/image content provider, a computer graphics system (e.g., a graphics card) for generating video/image data, and so forth. In particular, the image source 100 may be used, for example, to provide linear color image data.
The color encoding module 120 may color encode the video/image data from the image source 100 to obtain encoded video/image data, e.g., by converting linear color image data to quantized non-linear color image data. In some examples, source device 10 transmits the encoded video/image data directly to end device 20 via output interface 160, for example over a wireless, DP, or HDMI connection; in other examples, encoded video/image data may also be stored onto storage device 40 for later access by end device 20 for decoding and/or playback.
In a possible embodiment, the source device 10 may further include a source post-processing module 140, where the source post-processing module 140 may be configured to further process the data encoded by the color coding module, for example, perform color space conversion, perform compression coding on the image/video (for example, inter-frame prediction coding, intra-frame prediction coding, and the like), and perform format storage, so as to obtain processed video/image data; source device 10 then transmits the processed video/image data to end device 20 via output interface 160. For example, the processed data may be transmitted directly to end device 20 via a wireless, wired, or other connection; in other examples, the video/image data to be transmitted may also be stored onto storage device 40 for later access by end device 20 for decoding and/or playback.
In the example of fig. 2, end device 20 specifically includes an input interface 260, a color decoding module 220, and a display apparatus 200. In some examples, input interface 260 includes a receiver and/or a modem. Input interface 260 may receive encoded video/image data via link 30 and/or from storage device 40. For example, the end device 20 may receive, directly through the input interface 260, encoded video/image data transmitted by the source device 10 over a wireless, DP, or HDMI connection. The color decoding module 220 may be used to perform color decoding processing on the encoded video/image data to obtain decoded video/image data, e.g., to invert quantized non-linear color image data to linear color image data. The display apparatus 200 may be integrated with the end device 20 or may be external to the end device 20. The display apparatus 200 may be, for example, a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device. In general, display apparatus 200 is used to display decoded video/image data.
In a possible embodiment, when the source device 10 includes the source post-processing module 140, the end device 20 may further include an end post-processing module 240 accordingly. The end device 20 may receive, through the input interface 260, the processed video/image data transmitted by the source device 10 over a wireless, wired, or other connection, and the end post-processing module 240 may be configured to perform the corresponding inverse processing on the video/image data, for example, decoding and decompressing a compressed and encoded code stream, and further transmit the obtained non-linear color image data to the color decoding module 220. The color decoding module 220 further performs color decoding processing on the non-linear color image data transmitted from the end post-processing module 240 to obtain decoded video/image data, for example, converting the quantized non-linear color image data into linear color image data.
The embodiment of the present invention mainly concerns the design of the color coding module 120 in the source device 10 and the color decoding module 220 in the end device 20. Since the color decoding module 220 performs the inverse of the operations of the color coding module 120, to avoid repetition, the embodiment of the present invention mainly describes the related design and implementation of the color coding module.
Referring to fig. 3, fig. 3 is a simplified block diagram of an apparatus 300 that may be used as either or both of the source device 10 and the end device 20 of fig. 2 according to an example embodiment. The apparatus 300 may implement the techniques of this disclosure, and the apparatus 300 for implementing color space codecs may take the form of a computing system including multiple computing devices, or a single computing device such as a laptop, a tablet, a set-top box, a cell phone, a television, a camera, a display apparatus, a digital media player, a video game console, an in-vehicle computer, a display, a projector, a video surveillance device, etc. The apparatus 300 comprises a processor 301, a memory 302, and a communication interface 303, which are communicatively coupled via a bus 306.
The processor 301 in the apparatus 300 may be a central processing unit. Alternatively, the processor 301 may be any other type of device, or multiple devices, capable of manipulating or processing information, whether now existing or later developed. Although the disclosed embodiments may be practiced using a single processor such as the processor 301 as shown, parallel processing using more than one processor is also possible to increase computational speed and efficiency.
In one embodiment, the Memory 302 of the apparatus 300 may be a Read Only Memory (ROM) device or a Random Access Memory (RAM) device. Any other suitable type of storage device may be used as memory 302. The memory 302 may include program code and data that are accessed, read, and written to by the processor 301 over the bus 306.
In one embodiment, the communication interface 303 of the apparatus 300 can be used for sending data to the outside, and/or receiving data transmitted from the outside, and/or storing data in an external storage medium, and/or reading data from an external storage medium. The communication interface 303 may be a wireless communication media interface or a wired communication media interface. Wireless communication media interfaces include, for example, radio interfaces for the Radio Frequency (RF) spectrum, WIFI, Bluetooth, mobile networks, and cellular data networks; wired communication media interfaces include, for example, physical transmission line interfaces for display interface (DP), High Definition Multimedia Interface (HDMI), coaxial cable, coarse coaxial cable, twisted pair, and fiber optics.
In a possible implementation, the apparatus 300 may further include one or more output devices, such as a display 304. In one example, display 304 may be a touch sensitive display that combines a display and a touch sensitive element operable to sense touch inputs. A display 304 may be coupled to the processor 301 by a bus 306. Other output devices that permit a user to program the apparatus 300 or otherwise use the apparatus 300 may be provided in addition to the display 304 or other output devices may be provided as an alternative to the display 304. When the output device is or includes a display, the display may be implemented in different ways, including by a Liquid Crystal Display (LCD), a Cathode Ray Tube (CRT) display, a plasma display, or a Light Emitting Diode (LED) display, such as an Organic LED (OLED) display.
In a possible implementation, the apparatus 300 may further comprise or be in communication with an image sensing device 305, the image sensing device 305 being operable to obtain color image data, for example, the image sensing device 305 comprising a camera and an ISP for capturing/pre-processing images, and for example, the image sensing device 305 comprising a graphics card device for generating images. The image sensing device 305 may also be any device that is currently or later developed that can sense an image.
It should be noted that although the processor 301 and the memory 302 of the apparatus 300 are depicted in fig. 3 as being integrated in a single unit, other configurations may also be used. The operations of the processor 301 may be distributed among a number of directly coupleable machines (each having one or more processors), or distributed in a local area or other network. The memory 302 may be distributed across multiple machines, such as a network-based memory or memory in multiple machines running the apparatus 300. Although only a single bus is depicted here, the bus 306 of the apparatus 300 may be formed from multiple buses. Further, the memory 302 may be directly coupled to other components of the apparatus 300 or may be accessible over a network, and may comprise a single integrated unit, such as one memory card, or multiple units, such as multiple memory cards. Accordingly, the apparatus 300 may be implemented in a variety of configurations.
In an embodiment of the present invention, the processor 301 may be configured to call the program code in the memory to execute the method for color space encoding described in the embodiments of the methods of the present invention, which will not be described herein again.
Referring to fig. 4, fig. 4 is a flowchart illustrating a color space encoding method according to an embodiment of the present invention, which is comprehensively described from the perspective of a source device and an end device, and the method includes, but is not limited to, the following steps:
step 401: the source device acquires linear color values of a plurality of color components of the first image in a first color space.
In some embodiments, the source device may render a first image having RGB linear color values proportional to brightness through a content generation device (image source) such as a video card.
The RGB linear color values are linear color values of the first image in a CIE1931RGB color space (i.e., the first color space is the CIE1931RGB color space), and the linear color values are in a linear relationship with the light intensity values displayed by the display device. The RGB linear color values include a linear color value of an R component, a linear color value of a G component, and a linear color value of a B component.
It is to be understood that the RGB linear color values described above are merely an example. In a possible embodiment, the source device may also obtain linear color values of multiple color components in other color spaces, such as CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, and the like, and further such as LMS, CMYK, YUV, HSL, HSB (HSV), YCbCr, and the like. These are not described further here.
Step 402: and the source end device converts the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components.
Accordingly, the value of the non-linear color is in a non-linear relationship with the light intensity value captured by the camera or displayed by the display device.
The color mapping function f(·) is used to describe the mapping relationship between linear color values and non-linear color values. In a possible implementation, the mapping relationship between the linear color values and the non-linear color values may be preset according to the color display capability (display characteristics) of the display device; a specific implementation process will be described in detail later. A color mapping function f(·) formulated in this way helps avoid the occurrence of quantization stripes. For example, for a certain display device and the RGB color space, the color mapping function f(·) corresponding to an RGB image can be designed as shown in the following formula (1):
[Formula (1) is reproduced as an image in the original publication.]
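As a hedged illustration of such a per-component mapping, the sketch below uses a simple gamma-style curve in place of the device-specific function of formula (1); the exponents chosen for each component are hypothetical, not values from the patent.

```python
def make_mapping(gamma):
    """Return a mapping f that converts a normalized linear value in [0, 1]
    to a non-linear value in [0, 1] using a gamma-style curve."""
    def f(linear):
        return linear ** (1.0 / gamma)
    return f

# One mapping per color component; the exponents are hypothetical and stand
# in for the display-derived per-component functions of formula (1).
f_r, f_g, f_b = make_mapping(2.4), make_mapping(2.4), make_mapping(2.2)
```

The point of the construction is that each component may carry its own curve, tuned to the display characteristics, rather than one shared transfer function.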
step 403: and the source end device carries out quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component.
In the embodiment of the present invention, the quantization bit numbers corresponding to the color components are not uniform but are designed separately. Because human eyes have different sensitivities to changes in different color components, a different quantization bit number can be designed for each color component. In a specific embodiment, the number of quantization bits corresponding to each color component may be determined according to the display characteristics of the display device for images in the first color space, where the display characteristics of the display device include, for example, at least one of the maximum luminance, the minimum luminance, and the chromaticity diagram range of the display device. Therefore, according to the design of the embodiment of the present invention, the quantization bit number corresponding to at least one of the plurality of color components is different from that of the other color components.
For example, taking a certain type of display device as an example, for an RGB color space, the quantization bit number corresponding to each color component may be calculated by using the following formula (2):
[Formula (2) is reproduced as an image in the original publication.]
where n_r, n_g, and n_b respectively represent the numbers of quantization bits adopted for the three primary color components R (red), G (green), and B (blue); ceil(·) represents a round-up operation; and n represents the maximum number of quantization bits among the color components. For example, when the maximum quantization bit number n is 10 bits, it can be calculated from the above expression (2) that the R component is quantized with 10 bits, the G component with 10 bits, and the B component with 9 bits.
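Formula (2) itself appears only as an image, but the worked example (n = 10 giving 10/10/9 bits for R/G/B) makes the payoff of the non-uniform allocation concrete; the short sketch below tallies the per-pixel saving.

```python
# Bit depths from the worked example in the text (maximum n = 10 bits).
bits = {"R": 10, "G": 10, "B": 9}

def bits_per_pixel(bit_alloc):
    """Total storage cost of one pixel under a per-component bit allocation."""
    return sum(bit_alloc.values())

saving = 3 * 10 - bits_per_pixel(bits)  # 1 bit per pixel vs. uniform 10-bit
```

One bit per pixel is small, but over a 4K frame it amounts to roughly one megabyte saved per frame, which is the storage/bandwidth benefit the text describes.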
Thus, the color coding module of the source device may perform quantization coding on the non-linear color of each color component according to the quantization bit number corresponding to each color component, as shown in the following equation (3):
C_i = int( f(L_i) × (2^(n_i) − 1) )        (3)
where C_i represents the quantized codeword of the i-th color component; L_i represents the normalized linear color value of the i-th color component; f(·) represents the color mapping function; and n_i represents the number of quantization bits of the i-th color component.
Then, by substituting equation (2) into equation (3), the quantized non-linear codeword of each color component can be obtained, as shown in equation (4) below:
C_r = int( f(L_r) × (2^(n_r) − 1) )
C_g = int( f(L_g) × (2^(n_g) − 1) )
C_b = int( f(L_b) × (2^(n_b) − 1) )        (4)
where C_r, C_g, and C_b represent the quantized non-linear codewords of the R, G, and B color components, respectively; int represents a rounding operation; f(·) represents the color mapping function; L_r, L_g, and L_b represent the normalized linear color values of the R, G, and B components, respectively; and n_r, n_g, and n_b represent the numbers of quantization bits adopted for the R, G, and B components, respectively.
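Steps 402-403 can be sketched together as follows. Because equations (3) and (4) are reproduced only as images in the original, the codeword form int(f(L)·(2^n − 1)) used here is an assumption inferred from the surrounding symbol descriptions, and the gamma-style mapping is a hypothetical stand-in for f(·).

```python
def quantize(linear_value, f, n_bits):
    """Quantize one normalized linear component to an n_bits codeword,
    assuming the form C = int(f(L) * (2**n_bits - 1))."""
    return int(round(f(linear_value) * (2 ** n_bits - 1)))

# Hypothetical usage: one RGB pixel, per-component bit depths 10/10/9 as in
# the worked example, and an assumed gamma-style mapping for every component.
pixel = (0.5, 0.25, 0.75)
codewords = [quantize(v, lambda x: x ** (1 / 2.2), n)
             for v, n in zip(pixel, (10, 10, 9))]
```

Note that each component is scaled onto its own number of levels (2^10 − 1 for R and G, 2^9 − 1 for B), which is exactly where the non-uniform bit allocation enters the encoder.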
Step 404: and the source end device sends the non-linear color value component code words after each color component is quantized to the tail end device.
Specifically, the quantized non-linear color component codeword obtained by the source device is transmitted to the end device through a wireless connection or media such as HDMI and DP.
Step 405: and the terminal device respectively adopts the quantization bit number corresponding to each color component to perform inverse quantization processing on the quantized nonlinear color value. The implementation manner of the quantization bit number corresponding to each color component may also refer to the related description of step 403, and for brevity of the description, details are not repeated here.
Step 406: the end device converts the non-linear color values into linear color values through a color decoding module. It is understood that this process is the inverse operation process of the foregoing step 402, and for the brevity of the description, the detailed description is omitted here.
Step 407: optionally, the end device displays the first image corresponding to the linear color value through a display device.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. Therefore, by combining the two aspects, the occurrence of quantization stripes can be effectively avoided, different quantization bit numbers can be reasonably distributed for different color components, the waste of the quantization bit numbers is avoided, the transmission data volume is effectively reduced, and the storage space and the transmission bandwidth are saved.
Referring to fig. 5, fig. 5 is a flow chart of another color space encoding method provided by the embodiment of the present invention, which is comprehensively described from the perspective of a source device and an end device, and the method includes, but is not limited to, the following steps:
step 501: the source device obtains linear color values of a plurality of color components of the first image in a first color space.
In some embodiments, the source device may capture a first image (i.e., an arbitrary optical image) via the camera and generate RGB linear color values via processing by the ISP module (image source).
Similarly, the RGB linear color values are linear color values of the first image in the CIE1931RGB color space (i.e. the first color space is the CIE1931RGB color space), and the linear color values are in a linear relationship with the light intensity values captured by the camera. The RGB linear color values include a linear color value of an R component, a linear color value of a G component, and a linear color value of a B component.
It is to be understood that the RGB linear color values described above are merely an example. In a possible embodiment, the source device may also obtain linear color values of multiple color components in other color spaces, such as CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, and the like, and further such as LMS, CMYK, YUV, HSL, HSB (HSV), YCbCr, and the like. These are not described further here.
Step 502: and the source end device converts the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components. For a specific implementation process, reference may be made to the description of step 402, which is not described herein again.
Step 503: and the source end device carries out quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component. For a specific implementation process, reference may be made to the description of step 403, which is not described herein again.
Step 504: the source end device processes the image formed by the quantized nonlinear color values through a source end post-processing module.
The processing may be, for example, operations such as performing color space conversion, performing compression coding on the image/video, and performing format storage. For example, the source device may perform image compression encoding processing, such as inter-frame prediction encoding and intra-frame prediction encoding, on the image composed of the series of quantized and encoded non-linear color values obtained in step 503 through the source post-processing module, so as to obtain an encoded code stream. Alternatively, the source device may store an image composed of the non-linear color values, and so on. The specific compression encoding or storage process is known to those skilled in the art and is not described in detail.
Step 505: and the source end device sends the data processed by the source end post-processing module to the end device. For example, the source device sends the code stream that is compressed and encoded by the source post-processing module to the end device.
Step 506: the end device processes data through the end post-processing module. It is understood that this process is the inverse operation process of the foregoing step 504, for example, the end post-processing module may perform image decoding according to the code stream sent by the source device, so as to obtain an image composed of nonlinear color values. For the sake of brevity of the description, no further description is provided herein.
Step 507: and the terminal device respectively adopts the quantization bit number corresponding to each color component to perform inverse quantization processing on the quantized nonlinear color value. The implementation manner of the quantization bit number corresponding to each color component may also refer to the related description of step 403, and for brevity of the description, details are not repeated here.
Step 508: the end device converts the non-linear color values into linear color values through the color decoding module. It is understood that this process is the inverse operation process of the foregoing step 502, and for brevity of the description, the detailed description is omitted here.
Step 509: optionally, the end device displays the first image corresponding to the linear color value through a display device.
It can be seen that, in the color coding process, on one hand, corresponding color mapping functions can be designed for different color components of an image according to the display characteristics of display equipment, and the linear colors of the color components are converted into nonlinear colors; on the other hand, different quantization bit numbers can be designed for different color components according to the display characteristics of the display equipment by utilizing different sensitivity degrees of human eyes to different color component changes. After the quantization coding is finished, the quantized nonlinear color can be subjected to post-processing such as image compression coding and the like, and then is sent to a display side for decoding and displaying through a code stream. Therefore, by combining the two aspects, the technical scheme of the invention can effectively avoid the quantization stripes on the display side, can reasonably allocate different quantization bit numbers for different color components, avoids the waste of the quantization bit numbers, effectively reduces the transmission data volume, and saves the storage space and the transmission bandwidth.
Referring to fig. 6, fig. 6 is a schematic flowchart of another color space encoding method according to an embodiment of the present invention, including, but not limited to, the following steps:
step 601: the source device acquires linear color values of a plurality of color components of the first image in the second color space.
In some embodiments, the source device may capture an image (i.e., an arbitrary optical image) through the camera and generate RGB linear color values through ISP module (image source) processing.
In some embodiments, the source device may render an image having RGB linear color values proportional to brightness through a content generation device such as a video card.
The RGB linear color values are linear color values of the first image in the CIE1931RGB color space (i.e. the second color space is the CIE1931RGB color space).
It is to be understood that the RGB linear color values described above are merely an example. In a possible embodiment, the source device may also obtain linear color values of multiple color components in other color spaces, such as CIE 1931 XYZ, CIE 1931 xyY, CIE 1976 L*u*v*, CIE 1976 L*a*b*, and the like. These are not described further here.
Step 602: the source device converts the linear colors of the plurality of color components in the second color space to linear colors of the plurality of color components in the first color space.
For example, an image of the first image in CIE1931RGB color space (here, the RGB color space is the second color space) may be referred to as an RGB image, and an image of the first image in LMS color space (here, the LMS color space is the first color space) may be referred to as an LMS image. In a possible embodiment, the conversion of the color space can then be achieved in the following manner.
First, linear color values of an RGB image are converted into linear color values of CIE XYZ 1931 color space. For example, the transformation can be performed by the following formula (5):
[Formula (5) is reproduced as an image in the original publication.]
after obtaining the linear color values of the CIE XYZ 1931 color space by the above equation (5), the linear color values of the CIE XYZ 1931 color space may be further converted into linear color values of the LMS color space by the following equation (6):
[Formula (6) is reproduced as an image in the original publication.]
It is to be understood that the LMS image described above is merely an example; in a possible embodiment, the first image may also be an image in another color space, such as CMYK, YUV, HSL, HSB (HSV), YCbCr, and the like. These are not described one by one here.
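A sketch of the two-stage conversion of step 602. Since the matrices of equations (5) and (6) appear only as images in the original, the BT.709/sRGB RGB→XYZ matrix and the Hunt-Pointer-Estevez XYZ→LMS matrix are used here as plausible stand-ins; they may differ from the patent's actual coefficients.

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Stand-in matrices (assumptions, not the patent's): BT.709/sRGB primaries
# with D65 white for RGB -> XYZ, Hunt-Pointer-Estevez (D65) for XYZ -> LMS.
RGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]]

XYZ_TO_LMS = [[ 0.4002, 0.7076, -0.0808],
              [-0.2263, 1.1653,  0.0457],
              [ 0.0,    0.0,     0.9182]]

def rgb_to_lms(rgb):
    """Convert a linear RGB triple to linear LMS via XYZ, as in step 602."""
    return mat_vec(XYZ_TO_LMS, mat_vec(RGB_TO_XYZ, rgb))
```

Feeding in reference white (1, 1, 1) should land near (1, 1, 1) in LMS under these D65-normalized matrices, a quick sanity check on the chained conversion.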
Step 603: the source device converts the linear color of each color component into a nonlinear color according to a color mapping function corresponding to each color component in the plurality of color components of the first image.
Similarly, in a possible implementation, the mapping relationship between the linear color values and the non-linear color values may be predetermined according to the color display capability (display characteristics) of the display device, so as to obtain the color mapping function f(·) corresponding to each color component of the first image (e.g., the LMS image). For the determination process of the color mapping function f(·), reference may similarly be made to the related description of the foregoing step 402; this color mapping function f(·) is likewise advantageous for avoiding the occurrence of quantization stripes. In this way, the LMS linear color values may be mapped to non-linear color values (which may be represented as L'M'S') according to the color mapping function f(·) adopted for the LMS image.
Step 604: and the source end device carries out quantization coding on the nonlinear color of each color component according to the quantization bit number corresponding to each color component.
Specifically, the quantization bit number of each color component of the non-linear color values L'M'S' may be designed according to the display characteristics of the display device for images in the LMS color space, where the quantization bit number corresponding to at least one of the L'M'S' color components is different from that of the other components. The color coding module of the source device can then perform quantization coding on the non-linear color of each color component according to its corresponding quantization bit number. For the specific implementation process, reference may be made to the related description of step 403; for brevity, details are not repeated here.
Step 605: and carrying out color space conversion and image coding compression on the image consisting of the quantized nonlinear color values.
In some specific applications, the quantized non-linear color values obtained in step 604 may be color space transformed to facilitate a subsequent image compression encoding process. For example, the quantized non-linear color values L'M'S' obtained in step 604 may be transformed into the ICtCp color space to obtain ICtCp non-linear color values, as shown in the following equation (7):
[Formula (7) is reproduced as an image in the original publication.]
then, the source device may perform image compression coding processing, such as inter-frame prediction coding, intra-frame prediction coding, and the like, on the obtained ICtCp nonlinear color value, thereby obtaining a coded code stream. Alternatively, the source device may store an image of the ICtCp non-linear color values, and so on. The specific compression encoding process or storage process is known to those skilled in the art and will not be described in detail.
Step 606: optionally, the source device may send the compression-coded bitstream to the destination device. Accordingly, the destination device may parse the bitstream and display the image through the reverse operation process, which is not described in detail herein.
It can be seen that, in this color coding process, on the one hand, corresponding color mapping functions can be designed for the different color components of an image according to the display characteristics of the display device, converting the linear color of each component into a nonlinear color; on the other hand, different quantization bit numbers can be designed for different color components according to those display characteristics, exploiting the different sensitivities of the human eye to changes in different color components. Before color coding, a color space conversion can be performed to obtain the image to be color-coded; after quantization coding, the quantized nonlinear colors can undergo post-processing such as color space conversion and compression coding, and be sent as a bitstream to the display side for decoding and display. By combining these two aspects, the technical solution of the present invention can effectively avoid quantization banding on the display side, reasonably allocate different quantization bit numbers to different color components, avoid wasting quantization bits, effectively reduce the amount of transmitted data, and save storage space and transmission bandwidth.
In some embodiments of the present invention, the color mapping function f(·) is generated for each color component of a linear color; a method of doing so is described below.
In this embodiment of the present invention, the color mapping function corresponding to any color component is obtained from the linear color value of that component and the variation threshold of that component; for example, the relationship between the variation threshold of any color component and the color mapping function f(·) is shown in the following equation (8):
[Equation (8) appears as an image in the original: it expresses the color mapping function f(·) in terms of the sampled linear values L_i and their variation thresholds ΔL_i.]
wherein ΔL_i represents the variation threshold of any color component i. Specifically, ΔL_i indicates the error caused by quantizing the linear color value of that component with its quantization bit number threshold while the other color components of the linear color in the first color space are kept unchanged; the quantization bit number threshold of the linear color value of any color component is the minimum quantization bit number for which the human eye does not perceive quantization banding in the image displayed on the display device. The color gamut range of the display device encloses the first color space.
Then, the above equation (8) can be solved by using a numerical solving method to obtain f (-) corresponding to N linear color values, and the nonlinear color values corresponding to other linear color values can be obtained by using an interpolation method according to f (-).
In a specific application scenario, before executing any of the method embodiments of fig. 4-6, the color mapping function f(·) corresponding to each color component of the linear color may be predetermined. Specifically, the ΔL_i corresponding to each color component may be determined first, and then the f(·) of each component determined according to equation (8). Several ways of determining ΔL_i are set forth below.
A first way of measuring ΔL_i provided by an embodiment of the present invention is given below.
In this way, the color gamut of the device to be displayed on, i.e. the color gamut the display device can represent (for example, in CIE1931 XYZ), can be measured by a color analyzer, a colorimeter, or the like. Then, the CIE1931 XYZ gamut of the display device is converted, using the linear conversion relationship between color spaces, into the first color space in which quantization is performed (for example, the CIE1931 RGB color space). Then, each color component of the linear color of the first color space (e.g., the R, G, B components of RGB) is input to the display device; in each case two of the linear color components are held constant, and N different values L_i are selected on the remaining component. For each L_i, the quantization bit number at which the human eye no longer perceives quantization banding on the display device is determined, and the quantization error ΔL_i corresponding to each L_i is recorded.
For example, for one L i Constructing an average luminance equal to L i In particular, all pixels of the top row of the image have a brightness of L i The luminance of the lower left pixel of the image is 0 (optionally set to the lowest luminance of the display device), and the luminance of the lower right pixel of the image is 2L i The image increases linearly and smoothly from top to bottom in brightness, keeping the contrast from 0 to 1. Using existing photoelectric convertersAn electro-Optical Transfer Function (OETF) (optionally, PQ or sRGB) quantizes a linear image with n bits, and then converts the image into a linear domain through an electro-Optical Transfer Function (EOTF), and human eyes observe the quantized image. The quantization bit number n is gradually increased from small to large until the existence of quantization stripes in the image is not seen by human eyes. Assuming that the human eye cannot observe the existence of quantization stripes in the image, the number of quantization bits is N i The average value of quantization errors between the source image and the observed image (i.e., the image after the quantization process of the source image) is taken as L i Corresponding variation threshold Δ L i
A second way of measuring ΔL_i provided by an embodiment of the present invention is given below.
In this way, the color gamut of the device to be displayed on, i.e. the color gamut the display device can represent (for example, in CIE1931 XYZ), can likewise first be measured with a color analyzer, a colorimeter, or the like. Then, the CIE1931 XYZ gamut of the display device is converted into the first color space in which quantization is performed (for example, the CIE1931 RGB color space) using the linear conversion relationship between color spaces. Then, each color component of the linear color of the first color space (e.g., the R, G, B components of RGB) is input to the display device, two of the linear color components are kept unchanged, and N different values L_i are selected on the remaining component. For each L_i, an image (the source image) with average luminance L_i whose color component intensity varies linearly and smoothly is constructed. For each L_i it is then determined whether quantization banding is visible at different quantization levels, i.e. in quantized images quantized with different numbers of quantization bits.
In particular, to minimize the visual effect of quantization streaks due to quantization, we construct a model that can predict whether contour artifacts (contour artifacts) are visible at a given quantization level.
The implementation of the model may include the following steps. First, while varying the quantization bit number from small to large (or from large to small), for any specific bit number, the set of spatial frequencies of the difference image between the quantized image (i.e., the source image quantized with that bit number) and the source image (e.g., a quantization-error sawtooth image) is determined. We then use these frequencies and the contrast sensitivity function CSF(·) to determine the sensitivity to each spatial frequency component. Next, we convert this sensitivity into a detection probability using a psychometric function. Finally, the detection probabilities of all spatial frequency components are combined by probability summation into a probability statistic, which is compared with a preset probability threshold; if the statistic is equal (or approximately equal) to the threshold, the specific quantization bit number is determined to be the quantization bit number threshold. The quantization bit number threshold is the minimum quantization bit number for which the human eye does not perceive quantization banding in the image displayed on the display device. In this case, the average amplitude of the quantization-error sawtooth image between the quantized image and the source image may be used as the variation threshold ΔL_i; alternatively, the average quantization error between the quantized image and the source image may be used as ΔL_i. The above process is described in detail as follows:
First, referring to fig. 7, for each L_i an image with average luminance L_i and linearly, smoothly varying color component intensity is constructed (the left image in fig. 7), consisting of rows of gradients whose contrast varies from 0 (top) to 1 (bottom). Given the average luminance level L_i, the luminance of all pixels in the top row of the source image equals L_i, and the luminance of the pixels in the bottom row varies on a linear scale from 0 to 2L_i (0 at the lower-left corner and 2L_i at the lower-right corner, as shown in the figure). In the flow shown in fig. 7, after smooth gradient images with different contrasts are generated in linear space, they are transferred into an arbitrary color space using a transfer function before quantization. Quantization is then performed at a specific quantization level. After quantization, the inverse of the transfer function is applied to return to linear space. Before delivery to the display device, an inverse display model (gain-offset-gamma, GOG) is used to compensate for the display characteristics of the device. In this way, the desired quantized image for the display device (e.g., the right image in fig. 7) is obtained.
Then, to determine the spatial frequency of the contours, an analysis error signal, e.g., a quantization error sawtooth image between the quantized image and the source image, can be obtained at a particular quantization level, and we analyze the fourier transform of the error signal — the difference between the smooth gradient and the contour gradient. Contour artifacts on smooth gradients appear as jagged shapes (e.g., as shown in fig. 8). An analytical formula for Fourier transform of a sawtooth waveform with a period w and an amplitude h is given by the following equation (9):
s(x) = (h/π) · Σ_{k=1..∞} (1/k) · sin(2πk·x/w)    (9)
Then, for natural numbers k, the Fourier coefficients a_k of the sawtooth are given by the following equation (10):
a_k = h / (kπ)    (10)
where h is the amplitude of the sawtooth. The spatial frequency ρ_k of the k-th Fourier component is given by the following equation (11):
ρ_k = k·p / w    (11)
where p is the angular resolution of the device in pixels per degree, and w is the sawtooth period in pixels. We have found that for k > 16 the Fourier components are not significant and do not improve the accuracy of the model.
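Assuming the standard sawtooth Fourier series, equations (10) and (11) reduce to a_k = h/(kπ) and ρ_k = k·p/w; a small helper enumerating the components (with the k ≤ 16 cutoff noted above) might look like:

```python
import math

def sawtooth_components(h, w, p, k_max=16):
    """Return (amplitude a_k, frequency rho_k in cycles/degree) pairs
    for a sawtooth of height h and period w pixels, viewed at
    p pixels per degree. Components beyond k_max are neglected."""
    return [(h / (k * math.pi), k * p / w) for k in range(1, k_max + 1)]
```

Each returned pair feeds one evaluation of the contrast sensitivity function in the subsequent steps.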
To calculate the probability of detecting each Fourier component of the contour, we determine the sensitivity S from the contrast sensitivity function CSF(·), as shown in the following equation (12):
S = CSF(ρ_k, L_b) = L_b / ΔL_det    (12)
where ρ_k is the spatial frequency, L_b is the background luminance, and ΔL_det is the detectable amplitude of the frequency component. The contrast of the contour pattern (amplitude divided by background luminance) is then normalized by multiplying by the sensitivity, so that the normalized value equals 1 when the k-th frequency component is exactly detectable. The normalized contrast is given by the following equation (13):
C_k = (a_k / L_b) · CSF(ρ_k, L_b)    (13)
Next, we convert the contrast into a probability P_k using a form of psychometric function, as shown in the following equation (14):
P_k = 1 - exp(ln(0.5) · C_k^β)    (14)
where the exponent β is the slope of the psychometric function.
Finally, the probabilities over all fourier components are assembled using a probability summation to obtain a probability statistic P, as shown in equation (15) below:
P = 1 - ∏_k (1 - P_k)    (15)
Then the probability statistic is compared with a preset probability threshold; if it is equal or approximately equal to the threshold (e.g., a probability threshold of 0.5), the specific quantization bit number is determined to be the quantization bit number threshold, i.e., the minimum quantization bit number that causes no contour artifacts. If the probability statistic is not (approximately) equal to the threshold, the quantization level may be changed until a suitable quantization bit number is found as the threshold. Alternatively, to determine the quantization bit number threshold, a binary search may be performed so that the result P equals the probability threshold (e.g., a probability threshold of 0.5).
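The pipeline of equations (12) through (15), plus the binary search over bit depth, can be sketched end to end. The CSF below is a crude stand-in for the fitted Barten model of equation (23), and the sawtooth terms follow a_k = h/(kπ) and ρ_k = k·p/w; all names and constants here are illustrative assumptions.

```python
import math

def toy_csf(freq, background):
    """Placeholder band-pass-like sensitivity; replace with the fitted CSF."""
    return 200.0 * freq * math.exp(-0.2 * freq) * background ** 0.1

def banding_probability(l_b, h, w, p=60.0, beta=3.5, k_max=16):
    """Probability of detecting a sawtooth of height h, period w pixels,
    on background luminance l_b, at p pixels per degree (eqs (12)-(15))."""
    prob_no_detect = 1.0
    for k in range(1, k_max + 1):
        a_k = h / (k * math.pi)                            # eq (10)
        rho_k = k * p / w                                  # eq (11)
        c_k = (a_k / l_b) * toy_csf(rho_k, l_b)            # eqs (12)-(13)
        p_k = 1.0 - math.exp(math.log(0.5) * c_k ** beta)  # eq (14)
        prob_no_detect *= (1.0 - p_k)
    return 1.0 - prob_no_detect                            # eq (15)

def bit_depth_threshold(l_b, w, lo=4, hi=16):
    """Smallest bit depth in [lo, hi] whose banding probability drops
    below 0.5 (binary search; probability decreases with bit depth)."""
    while lo < hi:
        mid = (lo + hi) // 2
        h = 2.0 * l_b / ((1 << mid) - 1)   # quantization step on a 0..2*l_b ramp
        if banding_probability(l_b, h, w) < 0.5:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The binary search is valid because the detection probability decreases monotonically as the step height h shrinks with increasing bit depth.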
A third way of measuring ΔL_i provided by an embodiment of the present invention is given below.
This approach differs from the second in that the model above considers only contour banding due to luminance variation. Luminance changes contribute most to the appearance of banding, but chromaticity changes also contribute. Here we extend the model with chromatic contrast sensitivity functions and probability summation across the visual channels, to account for the effect of chromaticity variation on the formation of banding; the extended model can therefore be called a color difference model.
For example, our color difference model takes as input two colors specified in the CIE1931 XYZ color space and predicts the probability that the difference between them can be observed. When we consider the banding effect, the difference between the two colors is reflected in the height of the sawtooth pattern that the banding introduces. Such a model can easily be extended with a binary search: taking an initial color and a color direction vector as input, it yields the color along that direction for which the detection probability equals 0.5. The color difference model is used to draw MacAdam-ellipse plots of the detection threshold.
First, we convert the two colors from XYZ to LMS space (alternatively, the conversion can be implemented using the CIE1931 color matching functions). Each channel of the three stimulation spaces is proportional to the response of the long, medium and short pyramidal cells in the retina. It should be noted that there is currently no standard way to measure the absolute response of each cone-shaped structure, and that the response values are relative only. Converting CIE XYZ tristimulus values to LMS response values, we use the following linear transformation equation (16):
[Equation (16) appears as an image in the original: a fixed 3×3 matrix converting (X, Y, Z) to (L, M, S).]
The cone responses are further converted into the contrast responses of the color vision mechanisms: one achromatic (black to white) and two chromatic (red to green, and yellow-green to violet). The precise color directions to which these mechanisms are tuned are not settled, but we can proceed without that knowledge. We use the simplest formula with general applicability to calculate the color contrast, as shown in the following equation (17):
[Equation (17) appears as an image in the original: it converts the L, M, S cone responses into the achromatic response A, the red-green response R, and the yellow-green-violet response B.]
where A represents the achromatic (luminance) response, R the red-green response, and B the yellow-green-violet response.
Given two colors to be distinguished, we need to calculate the contrast between them. Since there is no single way to express color contrast, we experimented with many expressions. We found that MacAdam ellipses are predicted better if the color contrast is calculated using the following equation (18):
[Equation (18) appears as an image in the original: it defines the contrast components C_A, C_R and C_Y from the A, R and B responses of the two colors, with a mixing parameter k.]
The expression for C_A reduces to the luminance contrast. C_R and C_Y are normalized using a mixture of the luminance and color mechanism responses. We found that a good fit is obtained with a k value of 2/3.
Given the color contrast components C_A, C_R and C_Y, we follow the same steps as for luminance banding prediction: we multiply each color contrast by the corresponding contrast sensitivity function and the Fourier coefficient a_k of the sawtooth pattern, and convert the normalized contrast into a detection probability. The detection probabilities of the k-th Fourier component corresponding to C_A, C_R and C_Y are given by the following equations (19), (20) and (21), respectively:
P_A,k = 1 - exp(ln(0.5) · (C_A · a_k · CSF_A(ρ_k, L_b))^β)    (19)
P_R,k = 1 - exp(ln(0.5) · (C_R · a_k · CSF_R(ρ_k, L_b))^β)    (20)
P_Y,k = 1 - exp(ln(0.5) · (C_Y · a_k · CSF_Y(ρ_k, L_b))^β)    (21)
We observed that with the typical psychometric-function exponent β = 3.5, the color ellipses on the chromaticity diagram appear square. Such square "ellipses" are not found in the literature. Reducing the exponent to β = 2 eliminates this irregular shape.
Then, we will integrate all fourier coefficients and the response across the three color channels using a form of probability summation to compute the final statistical probability P as shown in equation (22) below:
P = 1 - ∏_k(1 - P_A,k) · ∏_k(1 - P_R,k) · ∏_k(1 - P_Y,k)    (22)
Similarly, the probability statistic may be compared with a preset probability threshold; if it is equal or approximately equal to the threshold (e.g., a probability threshold of 0.5), the specific quantization bit number is determined to be the quantization bit number threshold, i.e., the minimum quantization bit number that causes no contour artifacts. If the probability statistic is not (approximately) equal to the threshold, the quantization level may be changed until a suitable quantization bit number is found. Alternatively, a binary search may be performed so that the result P equals the probability threshold (e.g., 0.5).
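The per-channel detection and probability summation of equations (19) to (22) can be sketched as follows; the per-channel contrasts C_A, C_R, C_Y and the three CSFs are taken as inputs here, since their exact expressions (equations (17), (18)) are only available as images, and the function names are assumptions.

```python
import math

def channel_probability(contrast, a_k, csf_value, beta=2.0):
    """Detection probability of one Fourier component in one color channel
    (the common form of eqs (19)-(21), with beta = 2 as chosen in the text)."""
    return 1.0 - math.exp(math.log(0.5) * (contrast * a_k * csf_value) ** beta)

def combined_probability(per_channel):
    """per_channel: dict mapping channel name -> list of P_k values.
    Probability summation over all components and channels (eq (22))."""
    prob_no_detect = 1.0
    for probs in per_channel.values():
        for p_k in probs:
            prob_no_detect *= (1.0 - p_k)
    return 1.0 - prob_no_detect
```

When the product of contrast, a_k and sensitivity equals 1, the component is exactly at threshold and the probability evaluates to 0.5, matching the psychometric function's definition.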
To verify the second way of measuring the variation threshold ΔL_i (and the quantization bit number threshold) provided by embodiments of the present invention, and to confirm that the variation threshold ΔL_i (and quantization bit number threshold) it produces is just sufficient to make quantization banding invisible to the human eye, a monochrome quantization experiment associated with an embodiment of the present invention is given below.
The exact shape of CSF_A depends on many free variables in Barten's model, including the luminance of the adaptation field, the angular size of the object, the background luminance and the viewing angle, among other factors. We performed psychophysical experiments to ensure that our contrast sensitivity function (CSF) fits the viewing conditions of our application scenario. We targeted novel display technologies, using a Huawei Mate Pro 9 and a DayDream VR headset (peak luminance 44 cd/m²). We measured the display characteristics with a spectroradiometer and then fitted a gain-offset-gamma (GOG) model. The experiments were performed in a dark room to minimize the effect of external light sources.
We use a number of monochrome smooth gradient images such as the one shown on the left of fig. 7, described above. Each image consists of rows of gradients with contrast varying from 0 (top) to 1 (bottom). Given an average luminance level L, the luminance of all pixels in the top row equals L, and the luminance of pixels in the bottom row varies from 0 to 2L on a linear scale. We measured the effect of luminance quantization at 7 chromaticities: the white point, points close to the three primaries (red, green and blue), and their opponent colors (cyan, magenta and yellow); the exact color coordinates of these colors are shown in table 1 below.
Table 1: chromaticity coordinates of the monochrome smooth gradient images in CIE L'u'v'.
[The coordinate values of table 1 appear as an image in the original and are not reproduced here.]
The stimuli are created according to the process shown in fig. 7. First, each linear gradient image is converted into the 0-1 range using one of two transfer functions: PQ or sRGB. The values are then quantized to a sample bit depth and converted back to linear space. Quantization levels beyond the maximum bit depth of the display are achieved by spatio-temporal dithering. Three average luminance levels are sampled for each chromaticity across the available dynamic range. Each gradient square subtends a visual angle of 20 degrees. The background wall in the virtual environment has the same color as the average of the gradient stimuli.
The design and procedure of the experiment are as follows. Four smooth gradient images are determined, and only one of the four is subjected to quantization (e.g., via the flow shown in fig. 7). The experiment recruited 9 observers aged 19-45 years, with normal or corrected-to-normal color vision. The four smooth gradient images were presented to each observer; in each trial, the position of the quantized gradient among the four images was random. The observer's task was to indicate, using a remote control, which image contained the quantized gradient. The QUEST procedure was used over 3030 trials to select successive quantization bit depths and compute the final threshold. To minimize the effect of dark adaptation, the luminance levels were presented from darkest to brightest. Observers adapted to the ambient conditions for 2 minutes before the experiment. Each observer was allowed unlimited time to respond, and could move their head freely in the virtual-reality environment.
After the experimental results were obtained through the above procedure, model fitting was performed: we formulated CSF_A as a simplified parametric Barten model with five free variables, including a relative scaling factor, as shown in the following equation (23):
[Equation (23) appears as an image in the original: the simplified parametric Barten model CSF_A(u, L) with free parameters p_1 through p_5.]
where u is the spatial frequency, L is the average luminance, and p_1, ..., p_5 are the five free parameters. We optimized p_1, ..., p_5 to minimize the error (a weighted sum of mean squared deviations) between the model predictions and the observed experimental measurements. We found that the above formula gives the best fit at p = (39.9565, 0.1722, 0.4864, 120.3724, 0.8699).
Specifically, the comparison between the model fit and the subjective standard-observer measurements is shown in fig. 9, which compares the achromatic contrast sensitivity function obtained by model fitting with the standard-observer data. The overlap of the curves in fig. 9 indicates that the model closely approximates the observed data, with the predictions always remaining within the error bars. The experimental results demonstrate that the model constructed in the second way of measuring the variation threshold ΔL_i (and the quantization bit number threshold) provided by embodiments of the present invention can accurately predict whether contour artifacts are visible at a given quantization level. It should be noted that, to ensure that luminance gradients of different chromaticities are well predicted by the luminance detection model, one could try processing the same data set with a larger exponential order than our model, so as to optimize PSNR.
To verify the third way of measuring the variation threshold ΔL_i (and the quantization bit number threshold) provided by embodiments of the present invention, and to confirm that the variation threshold ΔL_i (and quantization bit number threshold) it produces is just sufficient to make quantization banding invisible to the human eye, the chroma quantization experiment associated with an embodiment of the present invention is given below.
This chroma quantization experiment measures the maximum amount of chroma quantization that causes no detectable difference. We investigated the effect of chroma quantization using two color spaces, YCbCr and ICtCp, both of which aim to separate luminance from chrominance. The apparatus and procedure for this experiment are the same as for the monochrome quantization experiment above. The stimuli consist of equiluminant smooth image gradients at three fixed luminance levels in the CIE L'u'v' color space. Two line segments are selected in the u'v' plane; as shown in fig. 10, a visualization of the color lines of the chroma quantization experiment (u': horizontal, v': vertical), both pass through the white point of the display and run along the chromaticity dimensions of CIE L'u'v'. The line segments are parallel to the chromaticity axes, bounded by the gamut of the device, and intersect at the white point. For the two line segments, smooth gradient stimuli are generated in a manner similar to the achromatic gradients used in the first experiment. The color saturation at the top of the image is zero, increases linearly down the image, and is maximal at the bottom, as shown in fig. 11, the gradient images for the chroma quantization experiment: in the left image of fig. 11 the v' component is held at the white point of the device gamut and only the u' component varies along the gradient; in the right image only the v' component varies, with the u' component held at the white point; in both, the saturation is 0 at the top, increases gradually downward, and is maximal at the bottom.
After the experimental results are obtained through the above procedure, model fitting is performed. The only free parameter of the colorimetric model is the variable a that determines the exact cone-contrast formula. We found that the best fit is obtained when a is approximately 23. The experimental results also show that the colorimetric model closely approximates the observed experimental data, with the predictions always lying within the error bars.
A method of making model predictions based on the colorimetric model is briefly described below. The colorimetric model can be simply extended to take a starting color and a color direction vector as input (instead of a smooth image gradient). The color along the direction vector can then be found by binary search such that the probability of detecting banding equals a probability threshold (e.g., 0.5). We use this extended model to establish detection thresholds and draw color-discrimination ellipse plots similar to MacAdam ellipses, as shown in fig. 12. Fig. 12 compares our model predictions with the CIE DeltaE 2000 difference plot and the original MacAdam ellipses; each plot in fig. 12 corresponds to a different luminance level, where the MacAdam ellipses were measured only at a background luminance of 48 cd/m². It should be noted that our colorimetric model aims to provide better predictions than the traditional color differences, so the shapes of its predictions are comparable only to a certain extent.
Based on the same inventive concept, an apparatus related to embodiments of the present invention is provided in the following.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an apparatus 70 for color space coding according to an embodiment of the present invention, where the apparatus 70 includes: an image acquisition module 701, a color conversion module 702 and a quantization module 703, wherein:
an image obtaining module 701, configured to obtain linear colors of a plurality of color components of a first image in a first color space;
a color conversion module 702, configured to convert the linear color of each color component into a non-linear color according to a color mapping function;
a quantization module 703, configured to quantize the nonlinear color of each color component according to the quantization bit number corresponding to that component, so as to obtain a quantized image; among the plurality of color components, the quantization bit number corresponding to at least one color component differs from the quantization bit numbers corresponding to the other color components.
In some possible embodiments, the color conversion module 702 is specifically configured to: and converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to other color components in the plurality of color components.
In some possible embodiments, the color mapping function corresponding to any one of the plurality of color components is obtained from the linear color value of that component and the variation threshold of that value. The variation threshold of the linear color value of any color component indicates the error caused by quantizing that value with its quantization bit number threshold, and the quantization bit number threshold of the linear color value of any color component is the minimum quantization bit number for which the human eye does not perceive quantization banding in the image displayed on the display device.
In some possible embodiments, the apparatus 70 further comprises a testing module (not shown) configured to: keep the color components other than a given color component unchanged and select different values L_i on that component; for each L_i, construct a second image with average luminance L_i whose color component intensity varies linearly and smoothly; determine the quantization bit number threshold of L_i from a plurality of quantization bit numbers, the threshold being the minimum quantization bit number among them for which the human eye does not perceive quantization banding in the image displayed on the display device, that image being obtained by quantizing the second image with the given quantization bit number; and take the average amplitude of the sawtooth wave in the difference image between a third image and the second image as the variation threshold, or take the average error between the third image and the second image as the variation threshold, the third image being obtained by quantizing the second image with the quantization bit number threshold.
In some possible embodiments, the testing module is specifically configured to: quantize the second image according to any one of the quantization bit numbers to obtain a fourth image; determine a plurality of Fourier frequency components of the difference image between the fourth image and the second image; determine the degree of response of each of the plurality of Fourier frequency components to luminance and/or chrominance; convert the degree of response of each Fourier frequency component into a detection probability; and aggregate the detection probabilities of the plurality of Fourier frequency components into a probability statistic, and, when the probability statistic meets a preset probability threshold, determine the quantization bit number used in the current quantization as the quantization bit number threshold.
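The frequency-domain test described above might be sketched as follows. The flat sensitivity weighting, the psychometric function, and the probability-summation pooling rule are common choices borrowed from the visible-difference-prediction literature and are assumptions for illustration; the patent does not fix particular formulas.

```python
import numpy as np

def detection_probability(second, fourth, sensitivity, beta=3.5):
    """Pool per-frequency detection probabilities of the quantization error."""
    diff = fourth - second
    spectrum = np.abs(np.fft.rfft(diff)) / diff.size    # Fourier components of the error
    response = spectrum * sensitivity                   # response to luminance (weighted)
    # psychometric function (assumed): p = 0.5 at unit response
    p = 1.0 - np.exp(-np.log(2.0) * response ** beta)
    return 1.0 - np.prod(1.0 - p)                       # probability-summation pooling

second = np.linspace(0.45, 0.55, 512)                   # smooth ramp (second image)
levels = 2 ** 10 - 1
fourth = np.round(second * levels) / levels             # quantized (fourth image)
sens = np.full(257, 50.0)                               # flat illustrative sensitivity
p = detection_probability(second, fourth, sens)
assert 0.0 <= p <= 1.0
```

Sweeping the bit number downward and accepting the last value whose pooled probability stays below the preset probability threshold reproduces the search for the quantization bit number threshold.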
In some possible embodiments, the image acquisition module 701 is specifically configured to acquire the linear colors of the plurality of color components in the first color space by at least one of:
obtaining the linear colors of the plurality of color components in the first color space by means of camera capture and image signal processing (ISP); or generating, by a graphics card device, the linear colors of the plurality of color components in the first color space.
In some possible embodiments, the image acquisition module 701 is further configured to: acquiring linear colors of a plurality of color components of the first image in a second color space; the second color space is different from the first color space; converting the linear color of the plurality of color components in the second color space to the linear color of the plurality of color components in the first color space.
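Conversion between linear colors in two RGB color spaces is a single 3x3 linear operation. The sketch below converts linear BT.709 RGB (an assumed second color space) to linear BT.2020 RGB (an assumed first color space) using the standard published coefficients; the choice of this particular pair is an illustration, not something the patent specifies.

```python
import numpy as np

# linear BT.709 RGB -> linear BT.2020 RGB (standard coefficients)
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def convert_linear(rgb, matrix=M_709_TO_2020):
    """Apply a 3x3 color-space conversion to an (..., 3) array of linear colors."""
    return rgb @ matrix.T

white = np.array([1.0, 1.0, 1.0])
out = convert_linear(white)
# each matrix row sums to ~1, so reference white maps to reference white
assert np.allclose(out, 1.0, atol=0.01)
```

The conversion must be applied to linear colors, before any color mapping function, which is why the module converts between color spaces at this stage of the pipeline.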
In some possible embodiments, the apparatus further comprises a post-processing module (not shown); the post-processing module is used for executing at least one of the following operations:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear colors of the color components in the quantized image to a display device.
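Putting the pieces together, the encoding path of device 70 (acquire linear colors, apply a per-component color mapping, quantize with per-component bit numbers) might look like the following sketch. The gamma-style mapping functions and the particular bit allocation (more bits for green) are illustrative assumptions; the patent only requires that at least one component differ from the others.

```python
import numpy as np

# per-component mapping functions and quantization bit numbers (assumed)
MAPPINGS = {"R": lambda x: x ** (1 / 2.2),
            "G": lambda x: x ** (1 / 2.4),   # a different mapping for G
            "B": lambda x: x ** (1 / 2.2)}
BITS = {"R": 9, "G": 11, "B": 8}             # at least one bit number differs

def encode(linear_rgb):
    """linear_rgb: (..., 3) linear colors in the first color space."""
    out = np.empty(linear_rgb.shape, dtype=np.uint16)
    for idx, comp in enumerate(("R", "G", "B")):
        nonlinear = MAPPINGS[comp](linear_rgb[..., idx])   # color mapping
        levels = 2 ** BITS[comp] - 1
        out[..., idx] = np.round(nonlinear * levels).astype(np.uint16)
    return out

img = np.random.default_rng(0).random((4, 4, 3))  # stand-in first image
q = encode(img)
assert q[..., 1].max() <= 2 ** 11 - 1 and q[..., 2].max() <= 2 ** 8 - 1
```

The quantized image `q` is what the post-processing module would then encode, convert, store, or transmit to a display device.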
It should be noted that the relevant functional units of the above-mentioned device 70 can be implemented by hardware, software or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks described in FIG. 13 may be combined or separated into sub-blocks to implement the present scheme. The functional implementation of these functional units may refer to the related description of the source device or the end device in the embodiments of fig. 4-6 above. For the sake of brevity of the description, no further description is provided herein.
In the above embodiments, all or part may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer program instructions which, when loaded and executed on a computer, cause a process or function according to an embodiment of the invention to be performed, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one network site, computer, server, or data center to another network site, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, and can also be a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes, etc.), optical media (e.g., DVDs, etc.), or semiconductor media (e.g., solid state drives), among others.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

Claims (16)

1. A color space encoding method, characterized in that the method comprises:
acquiring linear colors of a plurality of color components of a first image in a first color space;
converting the linear color of each color component into a nonlinear color according to a color mapping function;
quantizing the nonlinear colors of the color components according to the quantization bit number corresponding to each color component, so as to obtain a quantized image; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
2. The method of claim 1, wherein converting the linear colors of the respective color components into non-linear colors according to a color mapping function comprises:
and converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to other color components in the plurality of color components.
3. The method of claim 2, wherein:
the color mapping function corresponding to any color component in the plurality of color components is obtained according to the value of the linear color of the any color component and the change threshold value of the linear color of the any color component;
wherein the change threshold of the linear color value of the any color component is used to indicate an error caused by quantization using the quantization bit number threshold of that linear color value, the quantization bit number threshold being the minimum quantization bit number at which human eyes cannot perceive quantization stripes in the image displayed on a display device.
4. The method of claim 3, wherein, before the acquiring of the linear colors of the plurality of color components of the first image in the first color space, the method further comprises:
keeping the color components other than the any color component unchanged, and selecting a plurality of different values L_i on the any color component; for each L_i, constructing a second image in which the intensity of the color component varies linearly and smoothly and whose average luminance is L_i;
determining, from among a plurality of quantization bit numbers, the quantization bit number threshold for L_i; the quantization bit number threshold being the minimum quantization bit number among the plurality of quantization bit numbers at which human eyes cannot perceive quantization stripes in the image displayed on the display device; the image displayed on the display device being obtained by quantizing the second image according to the quantization bit number;
taking an average value of amplitudes of sawtooth waves in a difference image between a third image and the second image as the change threshold, or taking an average value of errors between the third image and the second image as the change threshold; wherein the third image is an image obtained by quantizing the second image according to the quantization bit number threshold.
5. The method of claim 4, wherein determining, from among the plurality of quantization bit numbers, the quantization bit number threshold for L_i comprises:
quantizing the second image according to any one of the quantization bit numbers to obtain a fourth image;
determining a plurality of Fourier frequency components corresponding to a difference image between the fourth image and the second image;
determining a degree of response of each of the plurality of Fourier frequency components to luminance and/or chrominance;
converting the response degree of each Fourier frequency component to brightness and/or chroma into detection probability;
and aggregating the detection probabilities of the plurality of Fourier frequency components to obtain a probability statistic, and, when the probability statistic meets a preset probability threshold, determining the quantization bit number used in the current quantization as the quantization bit number threshold.
6. The method according to any of claims 1-5, wherein the linear colors of the plurality of color components of the first image in the first color space are obtained by at least one of:
obtaining linear colors of the plurality of color components in the first color space by means of camera capture and image signal processing (ISP); or
generating, by the graphics card device, the linear colors of the plurality of color components in the first color space.
7. The method of any of claims 1-5, wherein obtaining linear colors of a plurality of color components of the first image in the first color space comprises:
acquiring linear colors of a plurality of color components of the first image in a second color space; the second color space is different from the first color space;
converting the linear color of the plurality of color components in the second color space to the linear color of the plurality of color components in the first color space.
8. The method according to any of claims 1-5, wherein after the non-linear colors of the respective color components are quantized to obtain a quantized image,
the method further comprises at least one of:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear colors of the color components in the quantized image to a display device.
9. An apparatus for color space encoding, the apparatus comprising:
an image acquisition module for acquiring linear colors of a plurality of color components of a first image in a first color space;
the color conversion module is used for converting the linear color of each color component into a nonlinear color according to the color mapping function;
the quantization module is used for quantizing the nonlinear colors of the color components according to the quantization bit numbers corresponding to the color components so as to obtain quantized images; the number of quantization bits corresponding to at least one color component is different from the number of quantization bits corresponding to the other color components among the plurality of color components.
10. The device of claim 9, wherein the color conversion module is specifically configured to:
and converting the linear color of each color component into a nonlinear color according to the color mapping function corresponding to each color component in the plurality of color components, wherein the color mapping function corresponding to at least one color component is different from the color mapping functions corresponding to other color components in the plurality of color components.
11. The apparatus of claim 10, wherein:
the color mapping function corresponding to any color component in the plurality of color components is obtained according to the value of the linear color of the any color component and the change threshold value of the linear color of the any color component;
wherein the change threshold of the linear color value of the any color component is used to indicate an error caused by quantization using the quantization bit number threshold of that linear color value, the quantization bit number threshold being the minimum quantization bit number at which human eyes cannot perceive quantization stripes in the image displayed on a display device.
12. The apparatus of claim 11, further comprising a testing module to:
keep the color components other than the any color component unchanged, and select a plurality of different values L_i on the any color component; for each L_i, construct a second image in which the intensity of the color component varies linearly and smoothly and whose average luminance is L_i;
determine, from among a plurality of quantization bit numbers, the quantization bit number threshold for L_i; the quantization bit number threshold being the minimum quantization bit number among the plurality of quantization bit numbers at which human eyes cannot perceive quantization stripes in the image displayed on the display device; the image displayed on the display device being obtained by quantizing the second image according to the quantization bit number;
and setting an average value of amplitudes of sawtooth waves in a difference image between a third image and the second image as the variation threshold, or setting an average value of errors between the third image and the second image as the variation threshold, wherein the third image is an image obtained by quantizing the second image according to the quantization bit number threshold.
13. The device according to claim 12, characterized in that the test module is specifically configured to:
quantizing the second image according to any one of the quantization bit numbers to obtain a fourth image;
determining a plurality of Fourier frequency components corresponding to a difference image between the fourth image and the second image;
determining a degree of response of each of the plurality of Fourier frequency components to luminance and/or chrominance;
converting the response degree of each Fourier frequency component to brightness and/or chroma into detection probability;
and aggregating the detection probabilities of the plurality of Fourier frequency components to obtain a probability statistic, and, when the probability statistic meets a preset probability threshold, determining the quantization bit number used in the current quantization as the quantization bit number threshold.
14. The apparatus according to any of the claims 9-13, wherein the image acquisition module is specifically configured to acquire the linear colors of the plurality of color components in the first color space by at least one of:
obtaining linear colors of the plurality of color components in the first color space by means of camera capture and image signal processing (ISP); or
generating, by the graphics card device, the linear colors of the plurality of color components in the first color space.
15. The apparatus of any of claims 9-13, wherein the image acquisition module is further configured to:
acquiring linear colors of a plurality of color components of the first image in a second color space; the second color space is different from the first color space;
converting the linear color of the plurality of color components in the second color space to the linear color of the plurality of color components in the first color space.
16. The apparatus of any of claims 9-13, further comprising a post-processing module; the post-processing module is configured to perform at least one of:
carrying out image coding on the quantized nonlinear color of each color component in the quantized image;
performing color space conversion on the quantized nonlinear color of each color component in the quantized image;
storing the quantized nonlinear color of each color component in the quantized image;
and transmitting the quantized nonlinear colors of the color components in the quantized image to a display device.
CN201910038935.1A 2019-01-15 2019-01-15 Color space encoding method and apparatus Active CN111435990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910038935.1A CN111435990B (en) 2019-01-15 2019-01-15 Color space encoding method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910038935.1A CN111435990B (en) 2019-01-15 2019-01-15 Color space encoding method and apparatus

Publications (2)

Publication Number Publication Date
CN111435990A CN111435990A (en) 2020-07-21
CN111435990B true CN111435990B (en) 2022-09-09

Family

ID=71580913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910038935.1A Active CN111435990B (en) 2019-01-15 2019-01-15 Color space encoding method and apparatus

Country Status (1)

Country Link
CN (1) CN111435990B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023088186A1 (en) * 2021-11-18 2023-05-25 北京与光科技有限公司 Image processing method and apparatus based on spectral imaging, and electronic device
CN116797675A (en) * 2022-03-15 2023-09-22 华为技术有限公司 Method and device for processing and encoding probe data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4410989B2 (en) * 2002-12-12 2010-02-10 キヤノン株式会社 Image processing apparatus and image decoding processing apparatus
EP3051818A1 (en) * 2015-01-30 2016-08-03 Thomson Licensing Method and device for decoding a color picture
US10575001B2 (en) * 2015-05-21 2020-02-25 Telefonaktiebolaget Lm Ericsson (Publ) Pixel pre-processing and encoding

Also Published As

Publication number Publication date
CN111435990A (en) 2020-07-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant