WO2018072270A1 - Method and device for enhancing image display - Google Patents

Method and device for enhancing image display

Info

Publication number
WO2018072270A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
value
image
displayed
pixel point
Prior art date
Application number
PCT/CN2016/108729
Other languages
French (fr)
Chinese (zh)
Inventor
王世通
王妙锋
刘苑文
李俊霖
李丽娴
黄帅
钟顺才
刘海啸
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201680080613.8A (CN108701351B)
Publication of WO2018072270A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image display enhancement method and apparatus.
  • The smartphone has become an increasingly important electronic product in people's lives. It not only provides the call functions of a traditional mobile phone, but also serves as an entertainment terminal in the Internet era. Browsing images is one of the most basic functions of a smartphone.
  • when an image is browsed with an application, the application sends the image file information to the framework layer of the smartphone, so that the framework layer parses the image file information and calls a decoder to decode it; the decoded data is then sent to the hardware layer of the smartphone for rendering and related processing, and is finally displayed on a liquid crystal display (English: Liquid Crystal Display, LCD for short).
  • the embodiment of the present application provides an image display enhancement method, device, and terminal, which are used to solve the problem of the poor image display effect in the prior art.
  • an embodiment of the present application provides an image display enhancement method, where the method includes:
  • acquiring scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; performing at least one of the following kinds of processing on the image to be displayed based on the scene information: gamut mapping, color management, contrast enhancement, sharpening, or scaling; and displaying the processed image to be displayed.
  • Scene information may include, but is not limited to, a blue sky scene, a green plant scene, a backlight scene, a night scene, and the like.
  • the scene information may be written to the image to be displayed when being photographed by the camera, so that the scene information is obtained by analyzing the image information included in the image to be displayed.
  • before the image is displayed, gamut mapping, color management, contrast enhancement, sharpening and scaling are performed on the image based on the scene information of the image to be displayed, which resolves the color cast, aliasing, blurriness, low contrast and other poor display effects that occur when an image is browsed with an application on the native Android system.
  • in the prior art, the processing of the image data stream depends on hardware devices such as a chip or a screen driving circuit to enhance the display effect of the image, and is therefore limited by the image enhancement algorithms that the hardware can execute, so the display effect is poor. The embodiment of the present application is not limited by hardware devices, thereby solving the prior-art problem that the hardware-based display effect enhancement method is limited in algorithm flexibility and yields a poor display effect, and improving the display effect of the image.
  • the scene information is included in the metadata Metadata of the image to be displayed, or in the exchangeable image file EXIF data area, or in the vendor comment makernotes field.
  • the scene information may be written into the MakerNote field of the exchangeable image file (English: Exchangeable Image File, EXIF) data area of the image.
  • the scene information may be included in the metadata Metadata of the image to be displayed, or in the exchangeable image file EXIF data area, or in the vendor comment makernotes field.
  • the scene information includes at least one of the following: a sensitivity ISO value, an aperture value, a shutter time, or an exposure EV value.
  • the scene information may be included in a vendor note makernotes field of the exchangeable image file EXIF data area of the image to be displayed, or may be included in other fields of the exchangeable image file EXIF data area.
  • the scene information includes at least one of the following: sensitivity ISO value, aperture value, shutter time, or exposure EV value.
  • the performing gamut mapping on the image to be displayed based on the scene information includes:
  • the first three-dimensional lookup table is configured to keep the color to be protected in the image to be displayed the same after gamut mapping as before gamut mapping
  • the second three-dimensional lookup table is configured to adjust a color that is not protected in the image to be displayed
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  • the color to be protected includes at least one of the following: skin color, gray color, and memory color.
  • a grayscale color is a color based on black as the reference color, with blacks of different saturation used to display the image. Each grayscale object has a brightness value ranging from 0% (white) to 100% (black).
  • Through long-term practice, people form deep and consistent impressions of certain colors, and these impressions become ingrained habits; such colors are called memory colors.
  • the gamut mapping algorithm of the present application distinguishes between adjusted colors and protected colors based on a dual three-dimensional lookup table, and can perform gamut mapping and chromaticity correction while protecting skin colors, grayscale colors and memory colors, ensuring a smooth transition from the adjusted colors to the protected colors without false contours.
  • the method further includes:
  • if the first three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
  • if the second three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
  • the three-dimensional interpolation algorithm may be a trilinear interpolation method, a tetrahedral interpolation method, a pyramid method or a prism interpolation method, or the like.
  • the fusion weight is determined by the following formula:
  • W blending represents the fusion weight
  • T blending represents a threshold for controlling smooth transition from the adjusted color to the protective color
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain the color value of the ith pixel point after the gamut mapping, including:
  • the color value of the ith pixel point after the gamut mapping is determined by:
  • A_out = A_1 * W_blending + A_2 * (1 − W_blending);
  • A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
  • the performing color management on the image to be displayed based on the scene information includes:
  • the brightness saturation hue YSH values of the jth pixel points in the image to be displayed are respectively adjusted as follows:
  • Different scene information corresponds to different adjustment strategies: for some scenes the saturation may be increased to optimize the display effect, while for scenes containing skin color, for example, the saturation may be reduced to make the skin look smoother.
  • the adjustment policy corresponding to different scenario information may be pre-configured in the electronic device.
  • the color management method provided in the embodiment of the present application, which is based on the YSH color space, can solve the problem of apparent brightness change caused by saturation adjustment, and the method is simple.
  • the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
  • Y_out represents the output luminance value of the j-th pixel point
  • Y_in represents the input luminance value of the j-th pixel point
  • ΔY represents the luminance compensation value
  • a1, a2, a3, a4, a5, a6 represent the coefficients of the quadratic function
  • S_in represents the saturation value of the j-th pixel point before adjustment
  • S_out represents the saturation value of the j-th pixel point after adjustment.
  • the quadratic function coefficients are determined as follows:
  • a1, a2, a3, a4, a5, a6 are obtained in advance by a least squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • a1, a2, a3, a4, a5, a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values closest to the H value of the j-th pixel point.
  • the a1, a2, a3, a4, a5, a6 are obtained by the following formula:
  • a_k represents the k-th coefficient among a1, a2, a3, a4, a5, a6; H represents the hue value of the j-th pixel point; H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point, with H_p1 ≤ H ≤ H_p2; a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1; and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
  • the performing zooming on the image to be displayed based on the scene information includes:
  • the image to be displayed is scaled according to the number of taps and the determined cubic function.
  • the number of taps is the number of sampled pixel points.
  • the scaling method provided by the embodiment of the present application can perform adaptive anti-aliasing based on image scene information and the highest frequency of the image, thereby improving the image optimization effect.
  • the cubic function corresponding to the bicubic filter satisfies the conditions shown in the following formula:
  • x represents the position of the sampled pixel selected based on the number of taps
  • k(x) represents the weight value corresponding to the sampled pixel point
  • dB and dC represent the coefficients of the cubic function, respectively; B and C are constants.
  • an embodiment of the present application provides an image display enhancement device, where the device includes:
  • a display enhancement module configured to acquire scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; and performing, according to the scene information, the image to be displayed At least one type of processing: gamut mapping, color management, contrast enhancement, sharpening, or scaling;
  • a display module configured to display the image to be displayed after being processed by the display enhancement module.
  • the scene information is included in the metadata Metadata of the image to be displayed, or in the exchangeable image file EXIF data area, or in the vendor comment makernotes field.
  • the scene information includes at least one of the following: sensitivity ISO value, aperture value, shutter time, or exposure EV value.
  • the display enhancement module performs gamut mapping on the image to be displayed based on the scene information, specifically for:
  • the first three-dimensional lookup table is configured to keep the color to be protected in the image to be displayed the same after gamut mapping as before gamut mapping
  • the second three-dimensional lookup table is configured to adjust a color that is not protected in the image to be displayed
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  • the display enhancement module is further configured to:
  • if the first three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
  • if the second three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
  • the display enhancement module is further configured to determine the fusion weight by the following formula:
  • W blending represents the fusion weight
  • T blending represents a threshold for controlling smooth transition from the adjusted color to the protective color
  • when fusing the first color value and the second color value based on the predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, the display enhancement module is specifically configured to:
  • the color value of the ith pixel point after the gamut mapping is determined by:
  • A_out = A_1 * W_blending + A_2 * (1 − W_blending);
  • A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
  • when color management is performed on the image to be displayed based on the scene information, the display enhancement module is specifically configured to:
  • the brightness saturation hue YSH values of the jth pixel points in the image to be displayed are respectively adjusted as follows:
  • the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
  • Y_out represents the output luminance value of the j-th pixel point
  • Y_in represents the input luminance value of the j-th pixel point
  • ΔY represents the luminance compensation value
  • a1, a2, a3, a4, a5, a6 represent the coefficients of the quadratic function
  • S_in represents the saturation value of the j-th pixel point before adjustment
  • S_out represents the saturation value of the j-th pixel point after adjustment.
  • the display enhancement module is further configured to determine the quadratic function coefficients as follows:
  • a1, a2, a3, a4, a5, a6 are obtained in advance by a least squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • a1, a2, a3, a4, a5, a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values closest to the H value of the j-th pixel point.
  • the display enhancement module is specifically configured to determine a1, a2, a3, a4, a5, a6 by the following formula:
  • a_k represents the k-th coefficient among a1, a2, a3, a4, a5, a6; H represents the hue value of the j-th pixel point; H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point, with H_p1 ≤ H ≤ H_p2; a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1; and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
  • the display enhancement module is specifically configured to: when performing scaling on the image to be displayed based on the scene information,
  • the image to be displayed is scaled according to the number of taps and the determined cubic function.
  • the cubic function corresponding to the bicubic filter satisfies the conditions shown in the following formula:
  • x represents the position of the sampled pixel selected based on the number of taps
  • k(x) represents the weight value corresponding to the sampled pixel point
  • dB and dC represent the coefficients of the cubic function, respectively; B and C are constants.
  • the embodiment of the present application further provides a terminal, including:
  • a processor configured to acquire scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; and performing at least one of the following on the image to be displayed based on the scene information Processing: gamut mapping, color management, contrast enhancement, sharpening or scaling;
  • the scene information is included in the metadata Metadata of the image to be displayed, or in the exchangeable image file EXIF data area, or in the vendor comment makernotes field.
  • the scene information includes at least one of the following: sensitivity ISO value, aperture value, shutter time, or exposure EV value.
  • the processor performs gamut mapping on the image to be displayed based on the scenario information, specifically for:
  • the first three-dimensional lookup table is configured to keep the color to be protected in the image to be displayed the same after gamut mapping as before gamut mapping
  • the second three-dimensional lookup table is configured to adjust a color that is not protected in the image to be displayed
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  • the processor is further configured to:
  • if the first three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
  • if the second three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
  • the fusion weight is determined by the following formula:
  • W blending represents the fusion weight
  • T blending represents a threshold for controlling smooth transition from the adjusted color to the protective color
  • when fusing the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, the processor is specifically configured to:
  • the color value of the ith pixel point after the gamut mapping is determined by:
  • A_out = A_1 * W_blending + A_2 * (1 − W_blending);
  • A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
  • the processor performs color management on the image to be displayed based on the scene information, specifically for:
  • the brightness saturation hue YSH values of the jth pixel points in the image to be displayed are respectively adjusted as follows:
  • the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
  • Y_out represents the output luminance value of the j-th pixel point
  • Y_in represents the input luminance value of the j-th pixel point
  • ΔY represents the luminance compensation value
  • a1, a2, a3, a4, a5, a6 represent the coefficients of the quadratic function
  • S_in represents the saturation value of the j-th pixel point before adjustment
  • S_out represents the saturation value of the j-th pixel point after adjustment.
  • the processor is further configured to determine the quadratic function coefficients by:
  • a1, a2, a3, a4, a5, a6 are obtained in advance by a least squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • a1, a2, a3, a4, a5, a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values closest to the H value of the j-th pixel point.
  • the processor is further configured to determine a1, a2, a3, a4, a5, a6 by the following formula:
  • a_k represents the k-th coefficient among a1, a2, a3, a4, a5, a6; H represents the hue value of the j-th pixel point; H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point, with H_p1 ≤ H ≤ H_p2; a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1; and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
  • the processor further performs scaling on the image to be displayed based on the scene information, specifically for:
  • the image to be displayed is scaled according to the number of taps and the determined cubic function.
  • the cubic function corresponding to the bicubic filter satisfies the conditions shown in the following formula:
  • x represents the position of the sampled pixel selected based on the number of taps
  • k(x) represents the weight value corresponding to the sampled pixel point
  • dB and dC represent the coefficients of the cubic function, respectively; B and C are constants.
  • the embodiment of the present application further provides an image gamut mapping method, where the method includes:
  • the first three-dimensional lookup table is configured to keep the color to be protected in the image to be processed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is configured to adjust a color that does not need to be protected in the image to be processed;
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  • the gamut mapping method of the present application distinguishes between adjusted colors and protected colors based on a dual three-dimensional lookup table, and can perform gamut mapping and chromaticity correction while protecting skin colors, grayscale colors and memory colors, ensuring a smooth transition from the adjusted colors to the protected colors without false contours.
  • the method further includes:
  • if the first three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
  • if the second three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
  • the fusion weight is determined by the following formula:
  • W blending represents the fusion weight
  • T blending represents a threshold for controlling smooth transition from the adjusted color to the protective color
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain the color value of the ith pixel point after the gamut mapping, including:
  • the color value of the ith pixel point after the gamut mapping is determined by:
  • A_out = A_1 * W_blending + A_2 * (1 − W_blending);
  • A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
  • the embodiment of the present application further provides a color management method, including:
  • the brightness saturation hue YSH values of the jth pixel in the image to be processed are respectively adjusted as follows:
  • the luminance Y of the j-th pixel point is input into the predetermined quadratic function corresponding to the H value of the j-th pixel point to obtain the compensated luminance Y value of the j-th pixel point; j takes any positive integer not greater than N.
  • the color management method provided in the embodiment of the present application, which is based on the YSH color space, can solve the problem of apparent brightness change caused by saturation adjustment, and the method is simple.
  • the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
  • Y_out represents the output luminance value of the j-th pixel point
  • Y_in represents the input luminance value of the j-th pixel point
  • ΔY represents the luminance compensation value
  • a1, a2, a3, a4, a5, a6 represent the coefficients of the quadratic function
  • S_in represents the saturation value of the j-th pixel point before adjustment
  • S_out represents the saturation value of the j-th pixel point after adjustment.
  • the quadratic function coefficients are determined as follows:
  • a1, a2, a3, a4, a5, a6 are obtained in advance by a least squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • a1, a2, a3, a4, a5, a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values closest to the H value of the j-th pixel point.
  • the a1, a2, a3, a4, a5, a6 are obtained by the following formula:
  • a_k represents the k-th coefficient among a1, a2, a3, a4, a5, a6; H represents the hue value of the j-th pixel point; H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point, with H_p1 ≤ H ≤ H_p2; a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1; and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
  • the embodiment of the present application further provides an image scaling method, including:
  • the image to be processed is scaled according to the number of taps and the determined cubic function.
  • the scaling method provided by the embodiment of the present application can perform adaptive anti-aliasing based on image scene information and the highest frequency of the image, thereby improving the image optimization effect.
  • the cubic function corresponding to the bicubic filter satisfies the conditions shown in the following formula:
  • x represents the position of the sampled pixel selected based on the number of taps
  • k(x) represents the weight value corresponding to the sampled pixel point
  • dB and dC represent the coefficients of the cubic function, respectively; B and C are constants.
  • FIG. 1 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an image display enhancement method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of an image gamut mapping method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a correspondence relationship between a fusion weight W blending and a ⁇ d according to an embodiment of the present application
  • FIG. 5 is a flowchart of an image color management method according to an embodiment of the present application.
  • FIG. 6 is a flowchart of an image scaling method according to an embodiment of the present application.
  • FIG. 7A is a schematic diagram of an image display enhancement apparatus according to an embodiment of the present application.
  • FIG. 7B is a schematic diagram of an image display enhancement method according to an embodiment of the present application.
  • the embodiment of the present invention provides an image display enhancement method and device, which are used to solve the problem of the poor image display effect in the prior art.
  • the method and the device are based on the same inventive concept. Since the principles of the method and the device for solving the problem are similar, the implementation of the device and the method can be referred to each other, and the repeated description is not repeated.
  • the image display enhancement scheme of the embodiments of the present application may be implemented using an electronic device that can be used for display, including but not limited to a personal computer, a server computer, a handheld or laptop device, a mobile device (such as a mobile phone, a tablet computer, a personal digital assistant, a media player, etc.), a consumer electronic device, a minicomputer, a mainframe computer, and the like.
  • the electronic device is preferably an intelligent mobile terminal.
  • the solution provided by the embodiment of the present application is specifically described below by taking an intelligent mobile terminal as an example.
  • the terminal 100 includes a display device 110, a processor 120, and a memory 130.
  • the memory 130 can be used to store software programs and data, and the processor 120 runs the software programs and data stored in the memory 130, thereby performing the various functional applications and data processing of the terminal 100.
  • the memory 130 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as an image display enhancement function, etc.), and the like; the storage data area may store data created according to the use of the terminal 100, and the like.
  • the memory 130 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • the processor 120 is the control center of the terminal 100; it connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing the software programs and/or data stored in the memory 130, thereby monitoring the terminal as a whole.
  • the processor 120 may include one or more general-purpose processors, and may also include one or more digital signal processors (English: Digital Signal Processor, DSP for short) for performing related operations to implement the embodiments of the present application. Technical solution.
  • the terminal 100 may further include an input device 140 for receiving input digital information, character information or contact touch/contactless gestures, and generating signal inputs related to user settings and function control of the terminal 100, and the like.
  • the input device 140 may include a touch panel 141.
  • the touch panel 141, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 141 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 141 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 120, and it can also receive commands from the processor 120 and execute them. For example, the user clicks an image thumbnail on the touch panel 141 with a finger; the touch detection device detects the signal produced by the click and transmits it to the touch controller, and the touch controller converts the signal into coordinates and sends the coordinates to the processor 120.
  • the processor 120 determines the operation to be performed on the image (such as enlarging the image or displaying it in full screen) according to the coordinates and the type of the signal (click or double-click), and then determines the memory space required to execute the operation; if the required memory space is smaller than the free memory, the enlarged image is displayed in full screen on the display panel 111 included in the display device, thereby realizing image display.
  • the touch panel 141 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input device 140 may further include other input devices 142, which may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like. One or more of them.
  • the display device 110 includes a display panel 111 for displaying information input by the user or information provided to the user, and various menu interfaces of the terminal device 100, etc., which are mainly used for displaying images in the terminal 100 in the embodiment of the present application.
  • the display panel can be configured by using a liquid crystal display (English: Liquid Crystal Display, LCD for short) or OLED (Organic Light-Emitting Diode).
  • the touch panel 141 can cover the display panel 111 to form a touch display screen.
  • the terminal 100 may further include a power source 150 for powering other modules and a camera 160 for taking photos or videos.
  • Terminal 100 may also include one or more sensors 170, such as acceleration sensors, light sensors, and the like.
  • the terminal 100 may further include a radio frequency (RF) circuit 180 for performing network communication with the wireless network device, and may further include a WiFi module 190 for performing WiFi communication with other devices.
  • the embodiment of the present application is applied to an Android system architecture.
  • the following is a brief description of the Android (android) system architecture.
  • the Android system architecture includes: an application layer (APP), a framework layer, a core library (lib/HAL) layer, a driver layer, and a hardware layer.
  • the driver layer includes a CPU driver, a GPU driver, a display controller driver, and the like.
  • the core library layer is the core part of the Android system, including input/output services, core services, graphics device interfaces, and graphics engine (Graphics Engine) for CPU or GPU graphics processing.
  • the graphics engine may include a 2D engine (such as Skia), a 3D engine, a compositor (Composition), a frame buffer (Frame Buffer), an EGL (Embedded-System Graphics Library), etc., where EGL is an interface between the rendering API and the underlying native platform window system, and API refers to the Application Programming Interface.
  • the application layer may include a gallery, a media player (Media Player), a browser, and the like.
  • the hardware layer may include a central processing unit (CPU) and a graphics processing unit (GPU) (corresponding to a specific implementation of the processor 120 in FIG. 1), and may also include a memory (corresponding to the memory 130 in FIG. 1, including internal memory and external storage), and may further include an input device (corresponding to the input device 140 in FIG. 1) and a display device (corresponding to the display device 110 in FIG. 1).
  • the hardware layer 450 may also include the power source, camera, RF circuit, and WiFi module shown in FIG. 1, and may also include other hardware modules not shown in FIG. 1, such as a memory controller, a display controller, and so on.
  • the image display enhancement method provided by the embodiment of the present application may be implemented in the storage software program shown in FIG. 1 , and may be specifically executed by the processor 120 .
  • the image display enhancement method provided by the embodiment of the present application includes:
  • S210 Obtain scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed.
  • the scene information can be obtained by performing scene analysis on the image to be displayed by using an existing scene recognition algorithm.
  • Scene information may include, but is not limited to, a blue sky scene, a green plant scene, a backlight scene, a night scene, and the like.
  • the scene information may be written to the image to be displayed when being photographed by the camera, so that the scene information is obtained by analyzing the image information included in the image to be displayed.
  • the scene information may be written into the EXIF data area of the exchangeable image file (English: Exchangeable Image File, EXIF).
  • the scene information may be included in the metadata Metadata of the image to be displayed, or in the exchangeable image file EXIF data area, or in the vendor comment makernotes field.
  • the scene information includes at least one of the following: a sensitivity ISO value, an aperture value, a shutter time, or an exposure EV value.
  • the scene information may be included in a vendor note makernotes field of the exchangeable image file EXIF data area of the image to be displayed, or may be included in other fields of the exchangeable image file EXIF data area.
  • S220 Perform at least one of the following processes on the image to be displayed based on the scenario information: gamut mapping (English: Gamut Mapping, GMP for short), color management (English: Adaptive Color Management, ACM), contrast enhancement (English: Adaptive Contrast Enhancement, referred to as: ACE), sharpening, scaling.
  • before the image is displayed, gamut mapping, color management, contrast enhancement, sharpening and scaling are performed on the image based on the scene information of the image to be displayed, which resolves the color cast, aliasing, blurriness, low contrast and other poor display effects that occur when an image is browsed with an application on the native Android system.
  • in the prior art, the processing of the image data stream depends on hardware devices such as a chip or a screen driving circuit to enhance the display effect of the image, and is therefore limited by the image enhancement algorithms that the hardware can execute, so the display effect is poor. The embodiment of the present application is not limited by hardware devices, thereby solving the prior-art problem that the hardware-based display effect enhancement method is limited in algorithm flexibility and yields a poor display effect, and improving the display effect of the image.
  • when gamut mapping is performed on the image to be displayed based on the scene information, the method may be implemented as follows:
  • S310: determining a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is configured to keep the color to be protected in the image to be displayed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is used to adjust the colors that do not need to be protected in the image to be displayed.
  • the color to be protected includes at least one of the following: skin color, gray color, and memory color.
  • the gray color is the color corresponding to equal red (R), green (G) and blue (B) values, reflecting different brightness levels from black to white.
  • the first color value and the second color value are fused according to the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping.
  • the i takes a positive integer not greater than N.
  • the gamut mapping algorithm of the present application distinguishes between adjusted colors and protected colors based on a dual three-dimensional lookup table, and can perform gamut mapping and chromaticity correction while protecting skin colors, grayscale colors and memory colors, ensuring a smooth transition from the adjusted colors to the protected colors without false contours.
  • the first three-dimensional lookup table may include color values corresponding to any input color.
  • the first three-dimensional lookup table (3dlut1) may also be calculated after partitioning the color space into M × M × M nodes, that is, each of the three dimensions is divided into M nodes; the partitioning may be uniform or non-uniform, which is not specifically limited in this embodiment.
  • the color space can be the red green blue (RGB) space, the hue saturation brightness (HSV) space, the luminance blue-chroma red-chroma (YCbCr) space, or the Lab space, where L represents luminosity, a represents the range from magenta to green, and b represents the range from yellow to blue.
  • the first three-dimensional lookup table may include only the input/output correspondence of the colors at the nodes.
  • the input-output correspondence of colors at non-nodes can be generated using a three-dimensional interpolation algorithm based on the color of the nodes.
  • the three-dimensional interpolation algorithm may be a trilinear interpolation method, a tetrahedral interpolation method, a pyramid method or a prism interpolation method, or the like.
  • similarly, the second three-dimensional lookup table may include color values corresponding to any input color.
  • the second three-dimensional lookup table (3dlut2) may also be calculated after partitioning the color space into N × N × N nodes, that is, each dimension is divided into N nodes; the partitioning may be uniform or non-uniform, which is not specifically limited in this embodiment.
  • the color space may be an RGB space, or an HSV space, or a YCbCr space, or a Lab space.
  • the second three-dimensional lookup table may include only the input/output correspondence of the colors at the nodes.
  • the input-output correspondence of colors at non-nodes can be generated using a three-dimensional interpolation algorithm based on the color of the nodes.
  • the three-dimensional interpolation algorithm may be a trilinear interpolation method, a tetrahedral interpolation method, a pyramid method or a prism interpolation method, or the like.
  • if the second three-dimensional lookup table does not include the color of the i-th pixel point, the color value corresponding to the color of the i-th pixel point is obtained by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
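  • As an illustration of this lookup-and-interpolate step, the following minimal sketch (Python with NumPy) performs trilinear interpolation into a uniformly partitioned RGB lookup table stored as an array of shape (M, M, M, 3). The function name and the uniform-partition assumption are illustrative only; the embodiment equally allows non-uniform partitioning and other schemes such as tetrahedral, pyramid or prism interpolation.

```python
import numpy as np

def trilinear_lookup(lut: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Trilinear interpolation into a uniformly partitioned 3D LUT.

    lut: array of shape (M, M, M, 3) mapping node colors to output colors.
    rgb: input color with channels in [0, 1].
    """
    m = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (m - 1)   # position on the node grid
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, m - 1)
    frac = pos - lo                           # fractional offset inside the cell

    out = np.zeros(3)
    # Accumulate the 8 surrounding nodes, each weighted by its trilinear weight.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                w = ((frac[0] if dr else 1 - frac[0]) *
                     (frac[1] if dg else 1 - frac[1]) *
                     (frac[2] if db else 1 - frac[2]))
                out += w * lut[idx]
    return out
```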
  • the fusion weight may be determined by the following formula:
  • W blending represents the fusion weight
  • T blending represents a threshold for controlling smooth transition from the adjusted color to the protective color
  • the fusion weights W blending and ⁇ d satisfy the relationship shown in FIG. 4 .
  • the color value of the i-th pixel point after gamut mapping is determined as follows:
  • A_out = A_1 * W_blending + A_2 * (1 − W_blending);
  • A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
  • Rin, Gin, and Bin represent the color of the i-th pixel.
  • R1, G1, and B1 are the output results of Rin, Gin, and Bin through the first three-dimensional lookup table 3dlut1
  • R2, G2, and B2 are the output results of Rin, Gin, and Bin through the second three-dimensional lookup table 3dlut2
  • ⁇ d is Rin, Gin
  • the Bin node distance is the shortest distance from the convex hull composed of multiple nodes included in 3lut1, which can be calculated by the spatial geometric relationship.
  • the three-channel values Rout, Gout, and Bout output after the gamut mapping are determined by the fusion weight W blending and R1, G1, B1, R2, G2, and B2:
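  • Putting the two lookups and the fusion together, the following hedged sketch shows per-pixel dual-LUT gamut mapping. It reuses trilinear_lookup from the earlier sketch; because the exact W_blending formula (FIG. 4) is not reproduced in this text, a clamped ramp that keeps the 3dlut1 result at Δd = 0 and fades to the 3dlut2 result beyond T_blending is used purely as a placeholder assumption, and the computation of Δd itself is omitted.

```python
def gamut_map_pixel(rgb, lut1, lut2, delta_d, t_blending):
    """Dual-LUT gamut mapping for one pixel (illustrative sketch).

    lut1 keeps protected colors (skin, gray, memory colors) unchanged after
    mapping; lut2 carries the actual gamut adjustment for unprotected colors.
    delta_d is the shortest distance from (Rin, Gin, Bin) to the convex hull
    of the nodes of lut1; its computation is not shown here.
    """
    a1 = trilinear_lookup(lut1, rgb)   # first color value (protected branch)
    a2 = trilinear_lookup(lut2, rgb)   # second color value (adjusted branch)
    # Placeholder for the W_blending(delta_d) relationship of FIG. 4:
    # full weight on the lut1 result at delta_d = 0, decaying beyond the threshold.
    w_blending = max(0.0, 1.0 - delta_d / t_blending)
    # A_out = A_1 * W_blending + A_2 * (1 - W_blending)
    return a1 * w_blending + a2 * (1.0 - w_blending)
```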
  • when color management is performed on the image to be displayed based on the scene information, the method may be implemented as follows:
  • Different scene information corresponds to different adjustment strategies: for some scenes the saturation may be increased to optimize the display effect, while for scenes containing skin color, for example, the saturation may be reduced to make the skin look smoother.
  • the adjustment strategies corresponding to different scene information may be pre-configured in the electronic device.
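  • Such a pre-configured mapping from scene information to an adjustment strategy could look like the following minimal sketch; the scene names and the saturation_gain values are hypothetical and are not taken from the patent.

```python
# Hypothetical pre-configured adjustment strategies keyed by scene type.
ADJUSTMENT_STRATEGIES = {
    "blue_sky":    {"saturation_gain": 1.15},  # boost saturation for vivid skies
    "green_plant": {"saturation_gain": 1.10},
    "portrait":    {"saturation_gain": 0.90},  # soften skin tones
    "night":       {"saturation_gain": 1.00},
}

def strategy_for_scene(scene: str) -> dict:
    """Return the pre-configured strategy for a scene, defaulting to no change."""
    return ADJUSTMENT_STRATEGIES.get(scene, {"saturation_gain": 1.00})
```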
  • the brightness saturation hue YSH value of the j-th pixel in the image to be displayed is adjusted in the manner described in S520 to S530, respectively, and the j is taken over a positive integer not greater than N.
  • the other pixel point YSH values among the N pixel points in the image to be displayed are adjusted according to the adjustment method of the jth pixel point.
  • the color management method provided in the embodiment of the present application, which is based on the YSH color space, can solve the problem of apparent brightness change caused by saturation adjustment, and the method is simple.
  • the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
  • Y_out represents the output luminance value of the j-th pixel point
  • Y_in represents the input luminance value of the j-th pixel point
  • ΔY represents the luminance compensation value
  • a1, a2, a3, a4, a5, a6 represent the coefficients of the quadratic function
  • S_in represents the saturation value of the j-th pixel point before adjustment
  • S_out represents the saturation value of the j-th pixel point after adjustment.
  • each color tone in the YSH space may be configured to correspond to a quadratic function.
  • the YSH color space may also be divided to obtain p primary hues, each primary hue corresponding to a set of equation parameters that constitute a quadratic function; the quadratic-function parameters of hues between the primary hues are generated by interpolating the parameters of the quadratic functions corresponding to the two nearest primary hues.
  • a1, a2, a3, a4, a5, a6 are obtained in advance by a least squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • a1, a2, a3, a4, a5, a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values closest to the H value of the j-th pixel point.
  • the a1, a2, a3, a4, a5, and a6 are obtained by the following formula:
  • a_k represents the k-th coefficient among a1, a2, a3, a4, a5, a6; H represents the hue value of the j-th pixel point; H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point, with H_p1 ≤ H ≤ H_p2; a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1; and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
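  • The following hedged sketch illustrates the two steps just described: interpolating the six coefficients between the two primary hues nearest to H, and using them to compute a luminance compensation ΔY from the saturation values before and after adjustment. Neither the interpolation formula nor the quadratic function itself is reproduced in this extract, so plain linear interpolation in hue and a generic quadratic form in (S_in, S_out) are stand-in assumptions.

```python
import numpy as np

def interpolate_coefficients(h, primary_hues, primary_coeffs):
    """Linearly interpolate the six quadratic-function coefficients between the
    two primary hues nearest to h (stand-in for the patent's own formula)."""
    hues = np.asarray(primary_hues)
    idx = np.searchsorted(hues, h)
    p1, p2 = max(idx - 1, 0), min(idx, len(hues) - 1)
    if p1 == p2:
        return np.asarray(primary_coeffs[p1])
    t = (h - hues[p1]) / (hues[p2] - hues[p1])   # H_p1 <= h <= H_p2
    return (1 - t) * np.asarray(primary_coeffs[p1]) + t * np.asarray(primary_coeffs[p2])

def luminance_compensation(s_in, s_out, coeffs):
    """Generic quadratic form in (S_in, S_out); the actual equation is not
    reproduced in this extract, so this layout is an assumption."""
    a1, a2, a3, a4, a5, a6 = coeffs
    return (a1 * s_in**2 + a2 * s_out**2 + a3 * s_in * s_out
            + a4 * s_in + a5 * s_out + a6)
```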
  • performing scaling on the image to be displayed based on the scenario information may be implemented in the following manner, as shown in FIG. 6 :
  • S610 Determine a zoom factor of the image to be displayed, and determine a highest frequency of the periodic pattern in the image to be displayed according to the scene information. The highest frequency of the periodic pattern in the image corresponding to different scene information is different.
  • the periodic pattern in the image to be displayed refers to a pattern having a preset regular arrangement in the image, such as an arrangement pattern of railings in daily life, an arrangement pattern of tiles on the roof, and the like.
  • the highest frequency of the periodic pattern refers to the highest frequency corresponding to the periodic-pattern content after the image to be displayed containing the periodic pattern is transformed into the frequency domain.
  • determining, according to the scene information, the highest frequency of the periodic pattern in the image to be displayed may include: searching a preset lookup table for the highest frequency corresponding to the scene information.
  • the preset lookup table can be generated as follows:
  • the correspondence between the scaling factor and the number of taps of the first filter is determined, which can be specifically expressed by the following formula:
  • tap1 represents the number of first filter taps
  • srcLen represents the resolution of the original image
  • dstLen represents the resolution of the target image.
  • SizeFactor represents the preset zoom factor, which can be an empirical value.
  • the first filter tap number tap1 is finely adjusted based on the highest frequency f max of the image to obtain the second filter tap number tap2.
  • tap2 = tap1 / (FreqFactor * f_max)
  • FreqFactor represents a preset frequency coefficient.
  • the number of taps is the number of sampled pixel points. Specifically, for each pixel point of the target image dst, the corresponding pixel position on the source image src is found, tap2 sampled pixel points (reference points) are adaptively selected around that corresponding position, the weight value of each sampled pixel point is calculated according to the bicubic interpolation function, the pixel values of the tap2 sampled pixel points are weighted and summed with their corresponding weight values to obtain the pixel value of the scaled image, and finally the scaled target image is obtained.
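  • A minimal one-dimensional sketch of this adaptive resampling is shown below (a 2D scaler would apply it separably to rows and columns). The patent's formula for tap1 is not reproduced in this extract, so tap1 is assumed here to grow with the downscale ratio via SizeFactor; tap2 = tap1 / (FreqFactor * f_max) follows the text, and `kernel` stands for the bicubic weight function k(x).

```python
import numpy as np

def resize_1d(src: np.ndarray, dst_len: int, kernel, f_max: float,
              size_factor: float = 4.0, freq_factor: float = 1.0) -> np.ndarray:
    """1-D resampling sketch with an adaptive tap count (illustrative only)."""
    src_len = len(src)
    scale = src_len / dst_len
    tap1 = max(4, int(round(size_factor * max(scale, 1.0))))  # assumed form of tap1
    tap2 = max(2, int(round(tap1 / (freq_factor * f_max))))   # tap2 = tap1/(FreqFactor*f_max)

    out = np.zeros(dst_len)
    support = tap2 / 2.0
    for i in range(dst_len):
        center = (i + 0.5) * scale - 0.5          # position of the dst pixel on src
        lo = int(np.floor(center - support)) + 1
        idx = np.clip(np.arange(lo, lo + tap2), 0, src_len - 1)
        # Stretch distances when minifying so the kernel acts as a low-pass filter.
        x = (np.arange(lo, lo + tap2) - center) / max(scale, 1.0)
        w = np.array([kernel(abs(v)) for v in x])
        if w.sum() == 0:
            w = np.ones_like(w)
        out[i] = np.dot(w / w.sum(), src[idx])    # weighted sum of the tap2 samples
    return out
```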
  • the cutoff frequency of the filter and the highest frequency of the image content do not necessarily match, and the image content corresponding to the highest frequency may be aliased or blurred. Therefore, according to the scaling method provided by the embodiment of the present application, adaptive anti-aliasing can be performed based on the image scene information and the highest frequency of the image, thereby improving the image optimization effect.
  • the cubic function corresponding to the bicubic filter satisfies the following formula:
  • x is an input variable representing the distance between the reference point and the center
  • k(x) represents the weight value corresponding to the reference point
  • dB and dC respectively represent the coefficients of the cubic equation function
  • B and C are constants.
  • the center described here is the corresponding pixel point position of the target image dst pixel on the source image src.
  • determining the coefficients of the cubic function corresponding to the bicubic filter according to the highest frequency of the periodic pattern in the image to be displayed may be specifically implemented by the following formula:
  • g is a constant and f max represents the highest frequency of the image.
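  • The formula relating B, C, g and f_max is not reproduced in this extract. As a representative example of a cubic k(x) parameterized by constants B and C, the widely used Mitchell–Netravali (BC) cubic kernel is sketched below; it can be passed to the resize_1d sketch above as the `kernel` argument, with B and C chosen from f_max by whatever rule the embodiment prescribes.

```python
def bc_cubic_kernel(x: float, B: float = 1/3, C: float = 1/3) -> float:
    """Mitchell-Netravali (BC) cubic kernel: a standard cubic k(x) parameterized
    by constants B and C. Shown as a representative form only; the patent's own
    coefficient expressions are not reproduced in this extract."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x + (8*B + 24*C)) / 6
    return 0.0
```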
  • the embodiment of the present application further provides an image display enhancement device, which is applied to an electronic device, and is specifically applied to the terminal 100, and is implemented by the processor 120 in the terminal 100.
  • the device includes:
  • a display enhancement module 710 configured to acquire scene information of an image to be displayed, where scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; and based on the scene information Performing at least one of the following processes on the image to be displayed: gamut mapping, color management, contrast enhancement, sharpening, or scaling.
  • the display module 720 is configured to display the image to be displayed processed by the display enhancement module 710.
  • the display enhancement module 710 may further include a gamut mapping module 711, a color management module 712, and a scaling module 713.
  • when gamut mapping is performed on the image to be displayed based on the scene information, it may be specifically executed by the gamut mapping module 711 included in the display enhancement module 710; when color management is performed on the image to be displayed based on the scene information, it may be specifically executed by the color management module 712; and when scaling is performed on the image to be displayed based on the scene information, it may be specifically executed by the scaling module 713.
  • the Android system is used as an example.
  • as shown in FIG. 7B, the apparatus further includes an application module 730.
  • when the image to be displayed is sent to the application module 730 for browsing, the application module 730 first sends the image data stream of the image to be displayed to the scene information parsing module 740; the scene information parsing module 740 analyzes the image data stream to obtain the scene information of the image to be displayed.
  • the scene information included in the image to be displayed may be written in the image to be displayed when the image is captured by the camera.
  • when the scene information is written into the captured image, the scene information may be written into the MakerNote field of the exchangeable image file (English: Exchangeable Image File, EXIF) data area of the image.
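  • For illustration, the EXIF fields named in this description can be read back on the display side roughly as in the sketch below, which uses the piexif library; the MakerNote payload is vendor-specific, so its decoding into a scene label is deliberately left as a raw-bytes placeholder.

```python
import piexif

def read_scene_hints(jpeg_path: str) -> dict:
    """Read EXIF fields that this description names as possible carriers of
    scene information. Decoding the vendor-specific MakerNote bytes into a
    scene label is not shown here."""
    exif = piexif.load(jpeg_path)["Exif"]
    return {
        "iso": exif.get(piexif.ExifIFD.ISOSpeedRatings),
        "aperture": exif.get(piexif.ExifIFD.FNumber),           # stored as a rational
        "shutter_time": exif.get(piexif.ExifIFD.ExposureTime),  # stored as a rational
        "exposure_ev": exif.get(piexif.ExifIFD.ExposureBiasValue),
        "maker_note": exif.get(piexif.ExifIFD.MakerNote),       # raw vendor bytes
    }
```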
  • the scene information parsing module 740 may extract the scene information of the image based on the scene classification algorithm, and the scene classification algorithm may adopt an algorithm disclosed in the prior art, which is not specifically limited in the embodiment of the present application.
  • after obtaining the scene information from the image to be displayed, the scene information parsing module 740 sends the scene information to the display enhancement module 710.
  • alternatively, the scene information parsing module 740 may also send the scene information to the application module 730; after receiving the scene information, the application module 730 sends the image data stream to the display enhancement module 710, so that the display enhancement module 710 performs display enhancement processing on the image data stream based on the scene information received from the scene information parsing module 740, including but not limited to gamut mapping, color management, contrast enhancement, sharpening, scaling, etc.; the display enhancement module 710 then sends the processed image data stream to the display module 720 for display.
  • Alternatively, after the scene information parsing module 740 obtains the scene information from the image to be displayed, it sends both the scene information and the image data stream of the image to be displayed to the display enhancement module 710, so that the display enhancement module 710 performs display enhancement on the image data stream based on the scene information.
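As a reading aid only, the hand-offs between the modules described above can be summarised in the sketch below; every function name is a stand-in for the corresponding module, and the bodies are trivial placeholders rather than the actual Android implementation.

```python
def parse_scene_info(image_data: bytes) -> str:
    # Stand-in for the scene information parsing module 740: either read a tag
    # written at capture time or run a scene classification algorithm.
    return "blue_sky"

def enhance_for_display(decoded_image, scene: str):
    # Stand-in for the display enhancement module 710: dispatch to gamut mapping,
    # color management, contrast enhancement, sharpening or scaling by scene.
    return decoded_image

def browse(image_data: bytes, decoded_image):
    scene = parse_scene_info(image_data)                   # module 730 -> module 740
    enhanced = enhance_for_display(decoded_image, scene)   # module 740/730 -> module 710
    return enhanced                                        # module 710 -> display module 720
```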
  • the application module 730 can be, but is not limited to, an image browsing application, a social application, an online shopping application, and the like.
  • the apparatus may further include: an image data decoding module.
  • When the application module 730 sends the image data stream of the image to be displayed to the scene information parsing module 740, the image data stream may first be sent to the image data decoding module for decoding, and the decoded image data is then sent to the scene information parsing module 740.
  • The image data decoding module may specifically call the Skia library to decode the image data.
  • The Skia library is a 2D (two-dimensional) vector graphics processing library; the fonts, coordinate transformations, and bitmaps contained in an image can all be expressed efficiently and concisely by calling the Skia library.
  • When performing gamut mapping on the image to be displayed based on the scene information, the display enhancement module 710 is specifically configured to:
  • determine a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is configured to keep the colors to be protected in the image to be displayed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is configured to adjust the colors in the image to be displayed that do not need to be protected;
  • determine a first color value to which the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds in the first three-dimensional lookup table, and determine a second color value to which the color of the i-th pixel point corresponds in the second three-dimensional lookup table; and
  • fuse the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, where i takes every positive integer not greater than N.
  • The display enhancement module 710 is further configured to:
  • when the color of the i-th pixel point is not included in the first three-dimensional lookup table, obtain the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
  • when the color of the i-th pixel point is not included in the second three-dimensional lookup table, obtain the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the second three-dimensional lookup table with the three-dimensional interpolation algorithm.
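The embodiments leave the choice of three-dimensional interpolation open (trilinear, tetrahedral, pyramid, or prism methods). As one concrete illustration only, a trilinear lookup over a table sampled on a regular RGB grid could look like the following sketch; the grid layout and the [0, 1] value range are assumptions.

```python
import numpy as np

def lut_lookup_trilinear(lut, rgb):
    """lut: (S, S, S, 3) table sampled on a regular RGB grid over [0, 1].
    rgb: input colour in [0, 1]. Returns the trilinearly interpolated output colour."""
    s = lut.shape[0] - 1
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * s
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, s)
    f = pos - i0                          # fractional position inside the grid cell
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                idx = (i1[0] if dr else i0[0],
                       i1[1] if dg else i0[1],
                       i1[2] if db else i0[2])
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[idx]
    return out
```

With two such tables, the first color value and the second color value of a pixel are simply the results of this lookup in the protection table and in the adjustment table, respectively.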
  • The display enhancement module 710 is further configured to determine the fusion weight as a function of Δd and a threshold T_blending, where W_blending represents the fusion weight, T_blending represents a threshold used to control how smoothly the transition from the adjusted colors to the protected colors is made, and Δd represents the shortest distance between the color node of the i-th pixel point in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node of the i-th pixel point lies inside the convex hull, Δd = 0 indicates that it lies on the convex hull, and Δd > 0 indicates that it lies outside the convex hull.
  • When fusing the first color value and the second color value based on the predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, the display enhancement module 710 is specifically configured to determine the color value of the i-th pixel point after gamut mapping as:
  • A_out = A_1 * W_blending + A_2 * (1 - W_blending);
  • where A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
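The blending equation A_out = A_1 * W_blending + A_2 * (1 - W_blending) is stated explicitly above, but the exact piecewise expression for W_blending appears only as an image in the filing. The sketch below therefore uses a clamped linear ramp in Δd and T_blending that matches the described behaviour (fully protected inside the convex hull, fully adjusted beyond the transition band); the actual formula may differ.

```python
import numpy as np

def fusion_weight(delta_d, t_blending):
    """Assumed form: 1 when delta_d <= 0 (inside the protected hull), 0 when
    delta_d >= t_blending, and a linear transition in between."""
    return float(np.clip(1.0 - delta_d / t_blending, 0.0, 1.0))

def fuse_colors(a1, a2, w_blending):
    """A_out = A_1 * W_blending + A_2 * (1 - W_blending), as given in the text."""
    return (np.asarray(a1, dtype=float) * w_blending
            + np.asarray(a2, dtype=float) * (1.0 - w_blending))
```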
  • When performing color management on the image to be displayed based on the scene information, the display enhancement module 710 is specifically configured to: acquire an adjustment policy, corresponding to the scene information, for adjusting the pixel saturation of the image to be displayed; and, for the luminance-saturation-hue (YSH) values of the j-th pixel point in the image to be displayed, keep the hue H value of the j-th pixel point unchanged, adjust the saturation S value of the j-th pixel point based on the adjustment policy, and input the S value of the j-th pixel point before adjustment, the adjusted S value, and the luminance Y of the j-th pixel point into the predetermined quadratic function corresponding to the H value of the j-th pixel point to obtain the compensated luminance Y value of the j-th pixel point, where j takes every positive integer not greater than N.
  • In the quadratic function corresponding to the H value of the j-th pixel point, Y_out represents the output luminance value of the j-th pixel point, Y_in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4, a5, and a6 respectively represent the coefficients of the quadratic function, S_in represents the saturation value of the j-th pixel point before adjustment, and S_out represents the saturation value of the j-th pixel point after adjustment.
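The quadratic function itself is reproduced in the filing only as an image. Given that it has six coefficients and takes S_in, S_out, and Y_in as inputs, one plausible reading is a full bivariate quadratic in S_in and S_out yielding a compensation ΔY that is added to Y_in; the term ordering, the additive relation, and the saturation_gain and coeffs_for_hue parameters below are all assumptions used for illustration.

```python
def compensate_luminance(y_in, s_in, s_out, coeffs):
    """Assumed model: dY = a1*S_in^2 + a2*S_out^2 + a3*S_in*S_out
                          + a4*S_in + a5*S_out + a6,  Y_out = Y_in + dY.
    The exact term ordering in the filing may differ."""
    a1, a2, a3, a4, a5, a6 = coeffs
    d_y = (a1 * s_in**2 + a2 * s_out**2 + a3 * s_in * s_out
           + a4 * s_in + a5 * s_out + a6)
    return y_in + d_y

def color_manage_pixel(y, s, h, saturation_gain, coeffs_for_hue):
    """Keep H, scale S according to the scene's adjustment policy (here a simple
    gain, an assumption), then compensate Y with the hue-dependent quadratic."""
    s_out = min(max(s * saturation_gain, 0.0), 1.0)
    y_out = compensate_luminance(y, s, s_out, coeffs_for_hue(h))
    return y_out, s_out, h
```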
  • The display enhancement module 710 is further configured to determine the coefficients of the quadratic function as follows:
  • when a quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are solved in advance based on a least-squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • when no quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values nearest to the H value of the j-th pixel point.
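Under the same assumed quadratic model, the least-squares fit mentioned above could be carried out per hue anchor from calibration triples (S_in, S_out, ΔY); the design-matrix layout below is an assumption tied to that model.

```python
import numpy as np

def fit_quadratic_coeffs(s_in, s_out, delta_y):
    """Fit (a1..a6) of the assumed model
    dY = a1*S_in^2 + a2*S_out^2 + a3*S_in*S_out + a4*S_in + a5*S_out + a6
    to calibration samples by ordinary least squares."""
    s_in, s_out, delta_y = (np.asarray(v, dtype=float) for v in (s_in, s_out, delta_y))
    design = np.column_stack([s_in**2, s_out**2, s_in * s_out,
                              s_in, s_out, np.ones_like(s_in)])
    coeffs, *_ = np.linalg.lstsq(design, delta_y, rcond=None)
    return coeffs
```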
  • When no quadratic function corresponding to the H value of the j-th pixel point exists, the display enhancement module 710 is specifically configured to determine a1, a2, a3, a4, a5, and a6 by interpolation, where a_k denotes the k-th coefficient among a1, a2, a3, a4, a5, and a6, H denotes the hue H value of the j-th pixel point, H_p1 and H_p2 denote the two hue values nearest to the H value of the j-th pixel point with H_p1 < H < H_p2, a_(p1,k) denotes the k-th coefficient of the quadratic function corresponding to H_p1, and a_(p2,k) denotes the k-th coefficient of the quadratic function corresponding to H_p2.
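The interpolation formula for a_k is likewise given only as an image in the filing; a simple linear blend between the coefficients of the two nearest hue anchors is the natural reading and is sketched below as an assumption.

```python
def interpolate_coeffs(h, h_p1, h_p2, coeffs_p1, coeffs_p2):
    """Linearly interpolate each coefficient a_k between the quadratic functions
    of the two nearest hue anchors H_p1 < H < H_p2 (assumed form)."""
    t = (h - h_p1) / (h_p2 - h_p1)
    return [c1 + t * (c2 - c1) for c1, c2 in zip(coeffs_p1, coeffs_p2)]
```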
  • When performing scaling on the image to be displayed based on the scene information, the display enhancement module 710 is specifically configured to: determine the zoom factor of the image to be displayed, and determine, according to the scene information, the highest frequency of the periodic pattern in the image to be displayed; determine, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and the highest frequency of the periodic pattern in the image to be displayed, and determine, according to the highest frequency of the periodic pattern in the image to be displayed, the coefficients of the cubic function corresponding to the bicubic filter; and scale the image to be displayed according to the number of taps and the determined cubic function.
  • In the cubic function, x represents the position of a sampled pixel point selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, dB and dC respectively represent the coefficients of the cubic function, and B and C are constants.
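The piecewise cubic k(x) is given only as an image in the filing; its description in terms of two constants B and C matches the well-known Mitchell-Netravali family of bicubic kernels, which is used here purely as a stand-in. How the coefficients dB and dC are derived from the highest frequency f_max, and the tap-count lookup table, are not reproduced; the fixed taps=4 and the (B, C) defaults below are assumptions.

```python
import numpy as np

def mitchell_kernel(x, b=1/3, c=1/3):
    """Mitchell-Netravali (B, C) cubic kernel, used as a stand-in for k(x)."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3 + (-18 + 12*b + 6*c) * x**2 + (6 - 2*b)) / 6.0
    if x < 2:
        return ((-b - 6*c) * x**3 + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x + (8*b + 24*c)) / 6.0
    return 0.0

def resample_row(src, scale, taps=4, b=1/3, c=1/3):
    """Resize a 1-D signal by `scale`, weighting `taps` neighbouring source samples
    per output sample; a 2-D image is handled by applying this to rows then columns."""
    src = np.asarray(src, dtype=float)
    n_out = max(1, int(round(len(src) * scale)))
    out = np.empty(n_out)
    for i in range(n_out):
        x_src = (i + 0.5) / scale - 0.5                       # dst pixel mapped into src
        start = int(np.floor(x_src)) - taps // 2 + 1
        positions = np.arange(start, start + taps)
        weights = np.array([mitchell_kernel(x_src - p, b, c) for p in positions])
        samples = src[np.clip(positions, 0, len(src) - 1)]    # clamp at the borders
        out[i] = weights @ samples / weights.sum()
    return out
```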
  • the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • If the integrated module is implemented in the form of a software functional module and sold or used as a standalone product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a terminal device, or the like) or a processor (such as the processor 120 shown in FIG. 1) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • An embodiment of the present application further provides an image gamut mapping method, where the method includes:
  • obtaining an image to be processed;
  • determining a first color value to which the color of the i-th pixel point among the N pixel points included in the image to be processed corresponds in a first three-dimensional lookup table, and determining a second color value to which the color of the i-th pixel point corresponds in a second three-dimensional lookup table, where the first three-dimensional lookup table is configured to keep the colors to be protected in the image to be processed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is configured to adjust the colors in the image to be processed that do not need to be protected; and
  • fusing the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, where i takes every positive integer not greater than N.
  • The gamut mapping method of the present application uses dual three-dimensional lookup tables to treat adjusted colors and protected colors separately; it can perform gamut mapping and color-cast correction while protecting skin colors, gray colors, and memory colors, and ensures that no false contours appear in the transition from the adjusted colors to the protected colors.
  • the method further includes:
  • when the color of the i-th pixel point is not included in the first three-dimensional lookup table, obtaining the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
  • when the color of the i-th pixel point is not included in the second three-dimensional lookup table, obtaining the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the second three-dimensional lookup table with the three-dimensional interpolation algorithm.
  • The fusion weight is determined as a function of Δd and a threshold T_blending, where W_blending represents the fusion weight, T_blending represents a threshold used to control how smoothly the transition from the adjusted colors to the protected colors is made, and Δd represents the shortest distance between the color node of the i-th pixel point in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node lies inside the convex hull, Δd = 0 indicates that it lies on the convex hull, and Δd > 0 indicates that it lies outside the convex hull.
  • the first color value and the second color value are fused based on the predetermined fusion weight to obtain the color value of the ith pixel point after the gamut mapping, including:
  • the color value of the i-th pixel point after gamut mapping is determined by:
  • A_out = A_1 * W_blending + A_2 * (1 - W_blending);
  • where A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
  • the embodiment of the present application further provides a color management method, which may be implemented by an electronic device.
  • the method includes:
  • for the luminance-saturation-hue (YSH) values of the j-th pixel point in the image to be processed, performing the following adjustments: keeping the hue H value of the j-th pixel point unchanged, adjusting the saturation S value of the j-th pixel point based on an adjustment policy, and inputting the S value of the j-th pixel point before adjustment, the adjusted S value, and the luminance Y of the j-th pixel point into a predetermined quadratic function corresponding to the H value of the j-th pixel point to obtain the compensated luminance Y value of the j-th pixel point, where j takes every positive integer not greater than N.
  • The color management method provided in this embodiment of the present application can solve the problem of luminance change when saturation is adjusted based on the YSH color space, as well as the display-effect degradation caused by that luminance change, and the method is simple.
  • In the quadratic function corresponding to the H value of the j-th pixel point, Y_out represents the output luminance value of the j-th pixel point, Y_in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4, a5, and a6 respectively represent the coefficients of the quadratic function, S_in represents the saturation value of the j-th pixel point before adjustment, and S_out represents the saturation value of the j-th pixel point after adjustment.
  • the quadratic function coefficients are determined as follows:
  • when a quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are solved in advance based on a least-squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
  • when no quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values nearest to the H value of the j-th pixel point.
  • In the latter case, a1, a2, a3, a4, a5, and a6 are obtained by interpolation, where a_k represents the k-th coefficient among a1, a2, a3, a4, a5, and a6, H represents the hue H value of the j-th pixel point, H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point with H_p1 < H < H_p2, a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1, and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
  • the embodiment of the present application further provides an image scaling method, which may be implemented by an electronic device, and the method includes:
  • determining the zoom factor of the image to be processed, and determining, according to the scene information of the image, the highest frequency of the periodic pattern in the image to be processed; determining, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and the highest frequency, and determining, according to the highest frequency, the coefficients of the cubic function corresponding to the bicubic filter; and scaling the image to be processed according to the number of taps and the determined cubic function.
  • the scaling method provided by the embodiment of the present application can perform adaptive anti-aliasing based on image scene information and the highest frequency of the image, thereby improving the image optimization effect.
  • In the cubic function corresponding to the bicubic filter, x represents the position of a sampled pixel point selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, dB and dC respectively represent the coefficients of the cubic function, and B and C are constants.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowchart and/or in one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or in one or more blocks of the block diagram.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Provided are a method and device for enhancing image display and a terminal capable of solving a problem in the prior art in which image display performance is poor. The method comprises: acquiring scene information of an image to be displayed, the scene information of the image to be displayed being obtained by performing scene analysis on the image to be displayed; executing, on the basis of the scene information, at least one of the following processing operations on the image to be displayed: color gamut mapping, color management, contrast enhancement, sharpening, or zooming; and displaying the processed image.

Description

Image display enhancement method and device
This application claims priority to Chinese Patent Application No. 201610903016.2, filed with the Chinese Patent Office on October 17, 2016 and entitled "A Method and Terminal for Enhancing Picture Display Effect", the entire contents of which are incorporated by reference in this application.
Technical field

The present application relates to the field of image processing technologies, and in particular, to an image display enhancement method and apparatus.

Background

Smartphones have become an increasingly important electronic product in people's lives. A smartphone not only provides traditional call and communication functions but also serves as an entertainment terminal in the Internet era. Browsing images is one of the most basic features of a smartphone. When a user browses images using an application (APP) such as a gallery or WeChat, the application sends the image file information to the frameworks layer of the smartphone; the frameworks layer parses the image file information and calls a decoder to decode it, and the decoded data is then sent to the hardware layer of the smartphone for rendering and other processing before being displayed on a liquid crystal display (LCD).

In the prior art, an image is not processed before it is displayed, resulting in a poor display effect.
Summary of the invention

Embodiments of the present application provide an image display enhancement method, device, and terminal, to solve the problem of poor image display effect in the prior art.

In a first aspect, an embodiment of the present application provides an image display enhancement method, where the method includes:

acquiring scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; performing, based on the scene information, at least one of the following processing on the image to be displayed: gamut mapping, color management, contrast enhancement, sharpening, or scaling; and displaying the processed image to be displayed.

The scene information may include, but is not limited to, a blue sky scene, a green plant scene, a backlight scene, a night scene, and the like. In addition, the scene information may be written into the image to be displayed when the image is captured by the camera, so that the scene information can be obtained by analyzing the image information included in the image to be displayed.
In the embodiments of the present application, before an image is displayed, gamut mapping, color management, contrast enhancement, sharpening, or scaling is performed on the image based on the scene information of the image to be displayed, which solves display problems such as color cast, aliasing, blurriness, and low contrast that exist when an application on a native Android system is used to browse images. In addition, in the prior art, processing of the image data stream relies on hardware such as a chip or a screen driving circuit to improve the display effect, and is therefore limited by the image enhancement algorithms that the hardware can execute, so the display effect is poor; the embodiments of the present application are not limited to hardware devices, which solves the problem in the prior art that hardware-based display enhancement is constrained in algorithm flexibility and yields a poor display effect, thereby improving the display effect of the image.
In one possible design, the scene information is included in the metadata (Metadata) of the image to be displayed, in the exchangeable image file (EXIF) data area, or in the vendor note (MakerNote) field.

When the scene information is written into the captured image, it may be written into the MakerNote field of the exchangeable image file (EXIF) data area of the image. The scene information may be included in the metadata of the image to be displayed, in the EXIF data area, or in the MakerNote field, and may include at least one of the following: a sensitivity ISO value, an aperture value, a shutter time, or an exposure EV value. Specifically, the scene information may be included in the MakerNote field of the EXIF data area of the image to be displayed, or in other fields of the EXIF data area.

In one possible design, the scene information includes at least one of the following: a sensitivity ISO value, an aperture value, a shutter time, or an exposure EV value.
In a possible design, performing gamut mapping on the image to be displayed based on the scene information includes:

determining a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is configured to keep the colors to be protected in the image to be displayed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is configured to adjust the colors in the image to be displayed that do not need to be protected;

determining a first color value to which the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds in the first three-dimensional lookup table, and determining a second color value to which the color of the i-th pixel point corresponds in the second three-dimensional lookup table; and

fusing the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, where i takes every positive integer not greater than N.

The colors to be protected include at least one of the following: skin colors, gray colors, or memory colors.

A grayscale color uses black as the reference color and displays the image with black of different saturations; each gray object has a brightness value from 0% (white) to 100% (black) on the gray scale. Through long-term experience, people form deep and consistent impressions of certain colors and fixed habits in recognizing them; such colors are called memory colors.

The gamut mapping algorithm of the present application uses dual three-dimensional lookup tables to treat adjusted colors and protected colors separately; it can perform gamut mapping and color-cast correction while protecting skin colors, gray colors, and memory colors, and ensures that no false contours appear in the transition from the adjusted colors to the protected colors.
In one possible design, the method further includes:

when the color of the i-th pixel point is not included in the first three-dimensional lookup table, obtaining the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or

when the color of the i-th pixel point is not included in the second three-dimensional lookup table, obtaining the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the second three-dimensional lookup table with the three-dimensional interpolation algorithm.

The three-dimensional interpolation algorithm may be a trilinear interpolation method, a tetrahedral interpolation method, a pyramid interpolation method, a prism interpolation method, or the like.
In one possible design, the fusion weight is determined as a function of Δd and a threshold T_blending, where W_blending represents the fusion weight, T_blending represents a threshold used to control how smoothly the transition from the adjusted colors to the protected colors is made, and Δd represents the shortest distance between the color node of the i-th pixel point in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node of the i-th pixel point is located inside the convex hull, Δd = 0 indicates that it is located on the convex hull, and Δd > 0 indicates that it is located outside the convex hull.
In a possible design, fusing the first color value and the second color value based on the predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping includes determining the color value of the i-th pixel point after gamut mapping as:

A_out = A_1 * W_blending + A_2 * (1 - W_blending);

where A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
In a possible design, performing color management on the image to be displayed based on the scene information includes:

acquiring an adjustment policy, corresponding to the scene information, for adjusting the pixel saturation of the image to be displayed; and

adjusting the luminance-saturation-hue (YSH) values of the j-th pixel point in the image to be displayed as follows: keeping the hue H value of the j-th pixel point unchanged, and adjusting the saturation S value of the j-th pixel point based on the adjustment policy; and inputting the S value of the j-th pixel point before adjustment, the adjusted S value, and the luminance Y of the j-th pixel point into the predetermined quadratic function corresponding to the H value of the j-th pixel point, to obtain the compensated luminance Y value of the j-th pixel point, where j takes every positive integer not greater than N.

Different scene information corresponds to different adjustment policies. For example, for scenes containing blue sky or natural plants, the saturation may be increased to optimize the display effect; for scenes containing skin colors, the saturation may be reduced to make the skin look smoother. The adjustment policies corresponding to different scene information may be pre-configured in the electronic device.

The above color management manner provided in the embodiments of the present application can solve the problem of luminance change when saturation is adjusted based on the YSH color space, as well as the display-effect degradation caused by that luminance change, and the manner is simple.
In a possible design, the quadratic function corresponding to the H value of the j-th pixel point relates the following quantities: Y_out represents the output luminance value of the j-th pixel point, Y_in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4, a5, and a6 respectively represent the coefficients of the quadratic function, S_in represents the saturation value of the j-th pixel point before adjustment, and S_out represents the saturation value of the j-th pixel point after adjustment.
In one possible design, the coefficients of the quadratic function are determined as follows:

when a quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are solved in advance based on a least-squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or

when no quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values nearest to the H value of the j-th pixel point.
In one possible design, when no quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are obtained by interpolation, where a_k represents the k-th coefficient among a1, a2, a3, a4, a5, and a6, H represents the hue H value of the j-th pixel point, H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point with H_p1 < H < H_p2, a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1, and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
In a possible design, performing scaling on the image to be displayed based on the scene information includes:

determining the zoom factor of the image to be displayed, and determining, according to the scene information, the highest frequency of the periodic pattern in the image to be displayed;

determining, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and the highest frequency of the periodic pattern in the image to be displayed, and determining, according to the highest frequency of the periodic pattern in the image to be displayed, the coefficients of the cubic function corresponding to the bicubic filter; and

scaling the image to be displayed according to the number of taps and the determined cubic function.

The number of taps is the number of sampled pixel points.

Specifically, for each pixel point of the target image dst, the corresponding pixel position in the source image src is found, the determined number of sampling pixel points (reference points) around the corresponding position are adaptively selected, the weight value corresponding to each sampling pixel point is calculated from the bicubic interpolation function, and the pixel value of the scaled image is obtained by a weighted sum of the pixel values of the sampling pixel points using their weight values, finally yielding the scaled target image.

The scaling method provided in the embodiments of the present application can perform adaptive anti-aliasing based on the image scene information and the highest frequency of the image, thereby improving the image optimization effect.
In one possible design, in the cubic function corresponding to the bicubic filter, x represents the position of a sampled pixel point selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, dB and dC respectively represent the coefficients of the cubic function, and B and C are constants.
In a second aspect, an embodiment of the present application provides an image display enhancement device, where the device includes:

a display enhancement module, configured to acquire scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed, and to perform, based on the scene information, at least one of the following processing on the image to be displayed: gamut mapping, color management, contrast enhancement, sharpening, or scaling; and

a display module, configured to display the image to be displayed after it has been processed by the display enhancement module.

In one possible design, the scene information is included in the metadata of the image to be displayed, in the exchangeable image file EXIF data area, or in the vendor note MakerNote field.

In one possible design, the scene information includes at least one of the following: a sensitivity ISO value, an aperture value, a shutter time, or an exposure EV value.

In a possible design, when performing gamut mapping on the image to be displayed based on the scene information, the display enhancement module is specifically configured to:

determine a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is configured to keep the colors to be protected in the image to be displayed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is configured to adjust the colors in the image to be displayed that do not need to be protected;

determine a first color value to which the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds in the first three-dimensional lookup table, and determine a second color value to which the color of the i-th pixel point corresponds in the second three-dimensional lookup table; and

fuse the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, where i takes every positive integer not greater than N.
In one possible design, the display enhancement module is further configured to:

when the color of the i-th pixel point is not included in the first three-dimensional lookup table, obtain the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or

when the color of the i-th pixel point is not included in the second three-dimensional lookup table, obtain the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the second three-dimensional lookup table with the three-dimensional interpolation algorithm.

In one possible design, the display enhancement module is further configured to determine the fusion weight as a function of Δd and a threshold T_blending, where W_blending represents the fusion weight, T_blending represents a threshold used to control how smoothly the transition from the adjusted colors to the protected colors is made, and Δd represents the shortest distance between the color node of the i-th pixel point in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node of the i-th pixel point is located inside the convex hull, Δd = 0 indicates that it is located on the convex hull, and Δd > 0 indicates that it is located outside the convex hull.

In a possible design, when fusing the first color value and the second color value based on the predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, the display enhancement module is specifically configured to determine the color value of the i-th pixel point after gamut mapping as:

A_out = A_1 * W_blending + A_2 * (1 - W_blending);

where A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
In a possible design, when performing color management on the image to be displayed based on the scene information, the display enhancement module is specifically configured to:

acquire an adjustment policy, corresponding to the scene information, for adjusting the pixel saturation of the image to be displayed; and

adjust the luminance-saturation-hue (YSH) values of the j-th pixel point in the image to be displayed as follows: keep the hue H value of the j-th pixel point unchanged, and adjust the saturation S value of the j-th pixel point based on the adjustment policy; and input the S value of the j-th pixel point before adjustment, the adjusted S value, and the luminance Y of the j-th pixel point into the predetermined quadratic function corresponding to the H value of the j-th pixel point, to obtain the compensated luminance Y value of the j-th pixel point, where j takes every positive integer not greater than N.

In a possible design, in the quadratic function corresponding to the H value of the j-th pixel point, Y_out represents the output luminance value of the j-th pixel point, Y_in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4, a5, and a6 respectively represent the coefficients of the quadratic function, S_in represents the saturation value of the j-th pixel point before adjustment, and S_out represents the saturation value of the j-th pixel point after adjustment.

In one possible design, the display enhancement module is further configured to determine the coefficients of the quadratic function as follows:

when a quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are solved in advance based on a least-squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or

when no quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values nearest to the H value of the j-th pixel point.

In a possible design, when no quadratic function corresponding to the H value of the j-th pixel point exists, the display enhancement module is specifically configured to determine a1, a2, a3, a4, a5, and a6 by interpolation, where a_k represents the k-th coefficient among a1, a2, a3, a4, a5, and a6, H represents the hue H value of the j-th pixel point, H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point with H_p1 < H < H_p2, a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1, and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.
In a possible design, when performing scaling on the image to be displayed based on the scene information, the display enhancement module is specifically configured to:

determine the zoom factor of the image to be displayed, and determine, according to the scene information, the highest frequency of the periodic pattern in the image to be displayed;

determine, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and the highest frequency of the periodic pattern in the image to be displayed, and determine, according to the highest frequency of the periodic pattern in the image to be displayed, the coefficients of the cubic function corresponding to the bicubic filter; and

scale the image to be displayed according to the number of taps and the determined cubic function.

In one possible design, in the cubic function corresponding to the bicubic filter, x represents the position of a sampled pixel point selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, dB and dC respectively represent the coefficients of the cubic function, and B and C are constants.
In a third aspect, an embodiment of the present application further provides a terminal, including:

a processor, configured to acquire scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed, and to perform, based on the scene information, at least one of the following processing on the image to be displayed: gamut mapping, color management, contrast enhancement, sharpening, or scaling; and

a display, configured to display the image to be displayed after the processing.

In one possible design, the scene information is included in the metadata of the image to be displayed, in the exchangeable image file EXIF data area, or in the vendor note MakerNote field.

In one possible design, the scene information includes at least one of the following: a sensitivity ISO value, an aperture value, a shutter time, or an exposure EV value.

In a possible design, when performing gamut mapping on the image to be displayed based on the scene information, the processor is specifically configured to:

determine a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is configured to keep the colors to be protected in the image to be displayed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is configured to adjust the colors in the image to be displayed that do not need to be protected;

determine a first color value to which the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds in the first three-dimensional lookup table, and determine a second color value to which the color of the i-th pixel point corresponds in the second three-dimensional lookup table; and

fuse the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, where i takes every positive integer not greater than N.

In a possible design, the processor is further configured to:

when the color of the i-th pixel point is not included in the first three-dimensional lookup table, obtain the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or

when the color of the i-th pixel point is not included in the second three-dimensional lookup table, obtain the color value corresponding to the color of the i-th pixel point by interpolating the colors included in the second three-dimensional lookup table with the three-dimensional interpolation algorithm.

In one possible design, the fusion weight is determined as a function of Δd and a threshold T_blending, where W_blending represents the fusion weight, T_blending represents a threshold used to control how smoothly the transition from the adjusted colors to the protected colors is made, and Δd represents the shortest distance between the color node of the i-th pixel point in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node of the i-th pixel point is located inside the convex hull, Δd = 0 indicates that it is located on the convex hull, and Δd > 0 indicates that it is located outside the convex hull.

In a possible design, when fusing the first color value and the second color value based on the predetermined fusion weight to obtain the color value of the i-th pixel point after gamut mapping, the processor is specifically configured to determine the color value of the i-th pixel point after gamut mapping as:

A_out = A_1 * W_blending + A_2 * (1 - W_blending);

where A_out represents the color value of the i-th pixel point after gamut mapping, A_1 represents the first color value, A_2 represents the second color value, and W_blending represents the fusion weight.
In a possible design, when performing color management on the image to be displayed based on the scene information, the processor is specifically configured to:

acquire an adjustment policy, corresponding to the scene information, for adjusting the pixel saturation of the image to be displayed; and

adjust the luminance-saturation-hue (YSH) values of the j-th pixel point in the image to be displayed as follows: keep the hue H value of the j-th pixel point unchanged, and adjust the saturation S value of the j-th pixel point based on the adjustment policy; and input the S value of the j-th pixel point before adjustment, the adjusted S value, and the luminance Y of the j-th pixel point into the predetermined quadratic function corresponding to the H value of the j-th pixel point, to obtain the compensated luminance Y value of the j-th pixel point, where j takes every positive integer not greater than N.

In a possible design, in the quadratic function corresponding to the H value of the j-th pixel point, Y_out represents the output luminance value of the j-th pixel point, Y_in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4, a5, and a6 respectively represent the coefficients of the quadratic function, S_in represents the saturation value of the j-th pixel point before adjustment, and S_out represents the saturation value of the j-th pixel point after adjustment.

In one possible design, the processor is further configured to determine the coefficients of the quadratic function as follows:

when a quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are solved in advance based on a least-squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or

when no quadratic function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5, and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values nearest to the H value of the j-th pixel point.

In a possible design, when no quadratic function corresponding to the H value of the j-th pixel point exists, the processor is further configured to determine a1, a2, a3, a4, a5, and a6 by interpolation, where a_k represents the k-th coefficient among a1, a2, a3, a4, a5, and a6, H represents the hue H value of the j-th pixel point, H_p1 and H_p2 represent the two hue values nearest to the H value of the j-th pixel point with H_p1 < H < H_p2, a_(p1,k) represents the k-th coefficient of the quadratic function corresponding to H_p1, and a_(p2,k) represents the k-th coefficient of the quadratic function corresponding to H_p2.

In a possible design, when performing scaling on the image to be displayed based on the scene information, the processor is specifically configured to:

determine the zoom factor of the image to be displayed, and determine, according to the scene information, the highest frequency of the periodic pattern in the image to be displayed;

determine, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and the highest frequency of the periodic pattern in the image to be displayed, and determine, according to the highest frequency of the periodic pattern in the image to be displayed, the coefficients of the cubic function corresponding to the bicubic filter; and

scale the image to be displayed according to the number of taps and the determined cubic function.

In one possible design, in the cubic function corresponding to the bicubic filter, x represents the position of a sampled pixel point selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, dB and dC respectively represent the coefficients of the cubic function, and B and C are constants.
第四方面,本申请实施例还提供了一种图像色域映射方法,该方法包括:In a fourth aspect, the embodiment of the present application further provides an image gamut mapping method, where the method includes:
获取待处理图像;Get the image to be processed;
determining that the color of the i-th pixel among the N pixels included in the image to be processed corresponds to a first color value in the first three-dimensional lookup table, and determining that the color of the i-th pixel corresponds to a second color value in the second three-dimensional lookup table, where the first three-dimensional lookup table is used to keep the colors that need to be protected in the image to be processed unchanged after gamut mapping, and the second three-dimensional lookup table is used to adjust the colors in the image to be processed that do not need to be protected;
基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,所述i取遍不大于N的正整数。The first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
The gamut mapping method of the present application uses dual three-dimensional lookup tables to treat adjusted colors and protected colors separately, so that gamut mapping and color cast correction can be performed while protecting skin tones, grayscale colors and memory colors, and no false contours appear in the transition from adjusted colors to protected colors.
在一种可能的设计中,所述方法还包括:In one possible design, the method further includes:
when the first three-dimensional lookup table does not contain a color value corresponding to the color of the i-th pixel, obtaining the color value corresponding to the color of the i-th pixel by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or
when the second three-dimensional lookup table does not contain a color value corresponding to the color of the i-th pixel, obtaining the color value corresponding to the color of the i-th pixel by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
在一种可能的设计中,所述融合权重通过如下公式确定:In one possible design, the fusion weight is determined by the following formula:
Figure PCTCN2016108729-appb-000016
where Wblending denotes the fusion weight, Tblending denotes a threshold used to control how smoothly the transition from adjusted colors to protected colors proceeds, and Δd denotes the shortest distance between the color node corresponding to the color of the i-th pixel in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node of the i-th pixel lies inside the convex hull; Δd = 0 indicates that it lies on the convex hull; Δd > 0 indicates that it lies outside the convex hull.
在一种可能的设计中,基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,包括:In a possible design, the first color value and the second color value are fused based on the predetermined fusion weight to obtain the color value of the ith pixel point after the gamut mapping, including:
通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:The color value of the ith pixel point after the gamut mapping is determined by:
Aout=A1*Wblending+A2*(1-Wblending);A out =A 1 *W blending +A 2 *(1-W blending );
其中,Aout表示经过色域映射后的所述第i个像素点的颜色值,A1表示所述第一颜色值,A2表示所述第二颜色值,Wblending表示所述融合权重。Wherein A out represents a color value of the i-th pixel point after gamut mapping, A 1 represents the first color value, A 2 represents the second color value, and W blending represents the fusion weight.
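To make the fusion step concrete, the following is a minimal sketch (not part of the original disclosure) of the per-pixel blending described by the formula above; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def fuse_colors(first_value, second_value, w_blending):
    """Blend the output of the protection table (first_value) with the output
    of the adjustment table (second_value):
    A_out = A1 * W_blending + A2 * (1 - W_blending)."""
    first_value = np.asarray(first_value, dtype=np.float64)
    second_value = np.asarray(second_value, dtype=np.float64)
    return first_value * w_blending + second_value * (1.0 - w_blending)

# Example: protection-table output (120, 80, 60), adjustment-table output
# (140, 70, 50), fusion weight 0.25.
print(fuse_colors([120, 80, 60], [140, 70, 50], 0.25))
```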
第五方面,本申请实施例还提供了一种颜色管理方法,包括:In a fifth aspect, the embodiment of the present application further provides a color management method, including:
obtaining an image to be processed, and determining an adjustment strategy for adjusting the pixel saturation of the image to be processed;
针对所述待处理图像中的第j个像素点的亮度饱和度色调YSH值分别进行如下调整:The brightness saturation hue YSH values of the jth pixel in the image to be processed are respectively adjusted as follows:
保持第j个像素点色调H值不变,基于所述调整策略调整第j个像素点的饱和度S值;Keeping the jth pixel dot hue H value unchanged, adjusting the saturation S value of the jth pixel point based on the adjustment strategy;
inputting the S value of the j-th pixel before adjustment, the adjusted S value and the luminance Y of the j-th pixel into the pre-determined quadratic function corresponding to the H value of the j-th pixel to obtain the compensated luminance Y value of the j-th pixel, where j takes each positive integer not greater than N.
The color management approach provided in the embodiments of the present application solves, in a simple way, the luminance change that occurs when saturation is adjusted in the YSH color space and the display-quality problems caused by that luminance change.
在一种可能的设计中,所述第j个像素点的H值对应的二次方程函数满足如下公式所述的条件:In a possible design, the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
Figure PCTCN2016108729-appb-000017
where Yout denotes the output luminance value of the j-th pixel, Yin denotes the input luminance value of the j-th pixel, ΔY denotes the luminance compensation value, a1, a2, a3, a4, a5 and a6 denote the coefficients of the quadratic function, Sin denotes the saturation value of the j-th pixel before adjustment, and Sout denotes the saturation value of the j-th pixel after adjustment.
在一种可能的设计中,所述二次方程函数系数通过如下方式确定:In one possible design, the quadratic function coefficients are determined as follows:
In the case where a quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained in advance by solving a least-squares fit using the saturation value of the j-th pixel before adjustment, the saturation value of the j-th pixel after adjustment, and the luminance compensation value; or,
in the case where no quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values that are the nearest neighbors of the H value of the j-th pixel.
在一种可能的设计中,在不存在所述第j个像素点的H值对应的二次方程函数情况下,所述a1、a2、a3、a4、a5、a6通过如下公式得到:In one possible design, in the case where there is no quadratic function corresponding to the H value of the jth pixel, the a1, a2, a3, a4, a5, a6 are obtained by the following formula:
ak = a(p1,k) + ((H - Hp1) / (Hp2 - Hp1)) * (a(p2,k) - a(p1,k)), for k = 1, 2, ..., 6
where ak denotes the k-th coefficient among a1, a2, a3, a4, a5, a6; H denotes the hue value of the j-th pixel; Hp1 and Hp2 denote the two hue values that are the nearest neighbors of the H value of the j-th pixel, with Hp1 < H < Hp2; a(p1,k) denotes the k-th coefficient of the quadratic function corresponding to Hp1, and a(p2,k) denotes the k-th coefficient of the quadratic function corresponding to Hp2.
第六方面,本申请实施例还提供了一种图像缩放方法,包括:In a sixth aspect, the embodiment of the present application further provides an image scaling method, including:
确定所述待处理图像的缩放倍数以及所述待处理图像的最高频率;Determining a zoom factor of the image to be processed and a highest frequency of the image to be processed;
determining, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and to the highest frequency of the image to be processed; and determining, according to the highest frequency of the image to be processed, the coefficients of the cubic function corresponding to the bicubic filter;
根据所述抽头数量以及确定的三次方程函数对所述待处理图像进行缩放。The image to be processed is scaled according to the number of taps and the determined cubic function.
本申请实施例提供的缩放方法,能够基于图像场景信息及图像最高频率进行自适应抗混叠,从而提高了图像的优化效果。The scaling method provided by the embodiment of the present application can perform adaptive anti-aliasing based on image scene information and the highest frequency of the image, thereby improving the image optimization effect.
在一种可能的设计中,所述双立方滤波器对应的三次方程函数满足如下公式所示的条件:In one possible design, the cubic function corresponding to the bicubic filter satisfies the conditions shown in the following formula:
Figure PCTCN2016108729-appb-000019
Figure PCTCN2016108729-appb-000020
其中,x表示基于抽头数量所采样选择的采样像素点的位置,k(x)表示采样像素点对应的权重值,dB、dC分别表示三次方程函数的系数;B、C均为常数。Where x represents the position of the sampled pixel selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, and dB and dC represent the coefficients of the cubic function, respectively; B and C are constants.
附图说明DRAWINGS
图1为本申请实施例提供的终端设备示意图;FIG. 1 is a schematic diagram of a terminal device according to an embodiment of the present application;
图2为本申请实施例提供的图像显示增强方法流程图; 2 is a flowchart of an image display enhancement method according to an embodiment of the present application;
图3为本申请实施例提供的图像色域映射方法流程图;FIG. 3 is a flowchart of an image gamut mapping method according to an embodiment of the present application;
图4为本申请实施例提供的融合权重Wblending与Δd的对应关系示意图;4 is a schematic diagram of a correspondence relationship between a fusion weight W blending and a Δd according to an embodiment of the present application;
图5为本申请实施例提供的图像颜色管理方法流程图;FIG. 5 is a flowchart of an image color management method according to an embodiment of the present application;
图6为本申请实施例提供的图像缩放方法流程图;FIG. 6 is a flowchart of an image scaling method according to an embodiment of the present application;
图7A为本申请实施例提供的图像显示增强装置示意图;FIG. 7A is a schematic diagram of an image display enhancement apparatus according to an embodiment of the present application;
图7B为本申请实施例提供的图像显示增强方法示意图。FIG. 7B is a schematic diagram of an image display enhancement method according to an embodiment of the present application.
具体实施方式detailed description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
本申请实施例提供一种图像显示增强方法及装置,用以解决现有技术中存在的图像显示效果较差的问题。其中,方法和装置是基于同一发明构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。The embodiment of the present invention provides an image display enhancement method and device, which are used to solve the problem that the image display effect existing in the prior art is poor. The method and the device are based on the same inventive concept. Since the principles of the method and the device for solving the problem are similar, the implementation of the device and the method can be referred to each other, and the repeated description is not repeated.
The image display enhancement scheme of the embodiments of the present application may be implemented on any electronic device capable of display, including but not limited to personal computers, server computers, handheld or laptop devices, mobile devices (such as mobile phones, tablet computers, personal digital assistants and media players), consumer electronics, minicomputers, mainframe computers, and so on. The electronic device is preferably an intelligent mobile terminal, and the solution provided by the embodiments of the present application is described in detail below by taking an intelligent mobile terminal as an example.
参考图1所示,为本申请实施例应用的终端的硬件结构示意图。如图1所示,终端100包括显示设备110、处理器120以及存储器130。存储器130可用于存储软件程序以及数据,处理器120通过运行存储在存储器130的软 件程序以及数据,从而执行终端100的各种功能应用以及数据处理。存储器130可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如图像显示增强功能等)等;存储数据区可存储根据终端100的使用所创建的数据(比如音频数据、电话本、三维查找表等)等。此外,存储器130可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。处理器120是终端100的控制中心,利用各种接口和线路连接整个终端的各个部分,通过运行或执行存储在存储器130内的软件程序和/或数据,执行终端100的各种功能和处理数据,从而对终端进行整体监控。处理器120可以包括一个或多个通用处理器,还可包括一个或多个数字信号处理器(英文:Digital Signal Processor,简称:DSP),用于执行相关操作,以实现本申请实施例所提供的技术方案。Referring to FIG. 1 , it is a schematic diagram of a hardware structure of a terminal applied to an embodiment of the present application. As shown in FIG. 1, the terminal 100 includes a display device 110, a processor 120, and a memory 130. The memory 130 can be used to store software programs and data, and the processor 120 runs the software stored in the memory 130. Programs and data, thereby performing various functional applications and data processing of the terminal 100. The memory 130 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as an image display enhancement function, etc.), and the like; the storage data area may be stored according to the terminal 100. Use the created data (such as audio data, phone book, 3D lookup table, etc.). Moreover, memory 130 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The processor 120 is a control center of the terminal 100, and connects various parts of the entire terminal by various interfaces and lines, and executes various functions and processing data of the terminal 100 by running or executing software programs and/or data stored in the memory 130. Therefore, the terminal is monitored as a whole. The processor 120 may include one or more general-purpose processors, and may also include one or more digital signal processors (English: Digital Signal Processor, DSP for short) for performing related operations to implement the embodiments of the present application. Technical solution.
终端100还可以包括输入设备140,用于接收输入的数字信息、字符信息或接触式触摸操作/非接触式手势,以及产生与终端100的用户设置以及功能控制有关的信号输入等。具体地,本申请实施例中,该输入设备140可以包括触控面板141。触控面板141,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板141上或在触控面板141的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板141可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器120,并能接收处理器120发来的命令并加以执行。例如,用户在触控面板141上用手指单击一张图像缩略图,触摸检测装置检测到此次单击带来的这个信号,然后将该信号传送给触摸控制器,触摸控制器再将这个信号转换成坐标发送给处理器120,处理器120根据该坐标和该信号的类型(单击或双击)确定对该图像所执行的操作(如图像放大、图像全屏显示),然后,确定执行该操作所需要占用的内存空间,若需要占用的内存 空间小于空闲内存,则将该放大后的图像全屏显示在显示设备包括的显示面板111上,从而实现图像显示。The terminal 100 may further include an input device 140 for receiving input digital information, character information or contact touch/contactless gestures, and generating signal inputs related to user settings and function control of the terminal 100, and the like. Specifically, in the embodiment of the present application, the input device 140 may include a touch panel 141. The touch panel 141, also referred to as a touch screen, can collect touch operations on or near the user (such as the user's operation on the touch panel 141 or on the touch panel 141 using any suitable object or accessory such as a finger, a stylus, or the like. ), and drive the corresponding connection device according to a preset program. Optionally, the touch panel 141 may include two parts: a touch detection device and a touch controller. Wherein, the touch detection device detects the touch orientation of the user, and detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into contact coordinates, and sends the touch information. The processor 120 is provided and can receive commands from the processor 120 and execute them. For example, the user clicks an image thumbnail on the touch panel 141 with a finger, and the touch detection device detects the signal brought by the click, and then transmits the signal to the touch controller, and the touch controller then applies the signal. The signal is converted into coordinates and sent to the processor 120. The processor 120 determines an operation performed on the image (such as image enlargement, full-screen display of the image) according to the coordinates and the type of the signal (click or double-click), and then determines to execute the The memory space required for the operation, if it needs to occupy the memory If the space is smaller than the free memory, the enlarged image is displayed on the display panel 111 included in the display device in full screen, thereby realizing image display.
触控面板141可以采用电阻式、电容式、红外线以及表面声波等多种类型实现。除了触控面板141,输入设备140还可以包括其他输入设备142,其他输入设备142可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。The touch panel 141 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch panel 141, the input device 140 may further include other input devices 142, which may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like. One or more of them.
显示设备110,包括的显示面板111,用于显示由用户输入的信息或提供给用户的信息以及终端设备100的各种菜单界面等,在本申请实施例中主要用于显示终端100中图像。可选的,显示面板可以采用液晶显示器(英文:Liquid Crystal Display,简称:LCD)或OLED(英文:Organic Light-Emitting Diode,简称:有机发光二极管)等形式来配置显示面板111。在其他一些实施例中,触控面板141可覆盖显示面板111上,形成触摸显示屏。The display device 110 includes a display panel 111 for displaying information input by the user or information provided to the user, and various menu interfaces of the terminal device 100, etc., which are mainly used for displaying images in the terminal 100 in the embodiment of the present application. Optionally, the display panel can be configured by using a liquid crystal display (English: Liquid Crystal Display, LCD for short) or OLED (Organic Light-Emitting Diode). In some other embodiments, the touch panel 141 can cover the display panel 111 to form a touch display screen.
除以上之外,终端100还可以包括用于给其他模块供电的电源150以及用于拍摄照片或视频的摄像头160。终端100还可以包括一个或多个传感器170,例如加速度传感器、光传感器等。终端100还可以包括无线射频(Radio Frequency,RF)电路180,用于与无线网络设备进行网络通信,还可以包括WiFi模块190,用于与其他设备进行WiFi通信。In addition to the above, the terminal 100 may further include a power source 150 for powering other modules and a camera 160 for taking photos or videos. Terminal 100 may also include one or more sensors 170, such as acceleration sensors, light sensors, and the like. The terminal 100 may further include a radio frequency (RF) circuit 180 for performing network communication with the wireless network device, and may further include a WiFi module 190 for performing WiFi communication with other devices.
本申请实施例应用于安卓(android)系统架构下。下面针对安卓(android)系统架构进行简要说明。The embodiment of the present application is applied to an Android system architecture. The following is a brief description of the Android (android) system architecture.
安卓系统架构中包括:应用层(APP)、框架(Framework)层、核心库(lib/HAL)层、驱动层以及硬件层。驱动层包括CPU驱动、GPU驱动以及显示控制器驱动等。核心库层是安卓系统的核心部分,包括输入/输出服务、核心服务、图形设备接口以及实现CPU或GPU图形处理的图形引擎(Graphics Engine)等。图形引擎可包括2D引擎(比如Skia)、3D引擎、合成器(Composition)、帧缓冲区(Frame Buffer)、EGL(Embedded-System Graphics Library)等,其中EGL是一种渲染API与底层原始平台窗口系统之间的接口,API指的是应用程序编程接口(Application Programming Interface)。框架层可 包括图形服务(Graphic Service)、系统服务(System service)、渲染服务(SurfaceFlinger)等;图形服务中,可包括如微件(Widget)、画布(Canvas)以及视图(Views)等。应用层可包括图库、媒体播放器(Media Player)以及浏览器(Browser)等。硬件层可以包括中央处理器(Central Processing Unit,CPU)和图形处理器(Graphic Processing Unit,GPU)(相当于图1中的处理器150的一种具体实现),还可以包括存储器(相当于图1中的存储器130),包括内存和外存,还可以包括输入设备(相当于图1中的输入设备140)、显示设备(相当于图1中的显示设备110),还可以包括一个或多个传感器456(相当于图1中的传感器170)。当然除此之外,硬件层450还可以包括图1中示出的电源、摄像头、RF电路和WiFi模块,还可以包括图1中也没有示出的其他硬件模块,例如内存控制器和显示控制器等。The Android system architecture includes: an application layer (APP), a framework layer, a core library (lib/HAL) layer, a driver layer, and a hardware layer. The driver layer includes a CPU driver, a GPU driver, a display controller driver, and the like. The core library layer is the core part of the Android system, including input/output services, core services, graphics device interfaces, and graphics engine (Graphics Engine) for CPU or GPU graphics processing. The graphics engine may include a 2D engine (such as Skia), a 3D engine, a composition, a Frame Buffer, an EGL (Embedded-System Graphics Library), etc., where EGL is a rendering API and an underlying native platform window. The interface between the systems, the API refers to the Application Programming Interface. Frame layer Including graphics services (Graphic Service), system services (System service), rendering services (SurfaceFlinger), etc.; graphics services, such as widgets, canvases (Canvas) and views (Views). The application layer may include a gallery, a media player (Media Player), a browser, and the like. The hardware layer may include a central processing unit (CPU) and a graphics processing unit (GPU) (corresponding to a specific implementation of the processor 150 in FIG. 1), and may also include a memory (equivalent to a figure). The memory 130 in 1 includes memory and external memory, and may further include an input device (corresponding to the input device 140 in FIG. 1), a display device (corresponding to the display device 110 in FIG. 1), and may also include one or more Sensor 456 (corresponding to sensor 170 in Figure 1). Of course, in addition, the hardware layer 450 may also include the power source, camera, RF circuit, and WiFi module shown in FIG. 1, and may also include other hardware modules not shown in FIG. 1, such as a memory controller and display control. And so on.
现有技术中,用户在使用应用(图库或者微信等等)浏览图像时,文件流会送入到框架层,框架层对图像信息进行解析后调用核心库层进行图像数据解码创建位图文件(bitmap)对象,并返回给应用,从而应用将bitmap对象发送到硬件层进行渲染后显示。从上所知,由硬件层对图像显示之前,并不对图像进行处理,从而导致显示的图像效果较差。In the prior art, when a user browses an image using an application (gallery or WeChat, etc.), the file stream is sent to the frame layer, and the frame layer parses the image information and then calls the core library layer to decode the image data to create a bitmap file ( The bitmap object is returned to the application, so that the application sends the bitmap object to the hardware layer for rendering. As can be seen from the above, before the image is displayed by the hardware layer, the image is not processed, resulting in poor display of the image.
基于此,本申请实施例提供的图像显示增强方法可以实现在图1所示的存储软件程序中,具体可以由处理器120来执行。具体的,如图2所示,本申请实施例提供的图像显示增强方法,包括:Based on this, the image display enhancement method provided by the embodiment of the present application may be implemented in the storage software program shown in FIG. 1 , and may be specifically executed by the processor 120 . Specifically, as shown in FIG. 2, the image display enhancement method provided by the embodiment of the present application includes:
S210,获取待显示图像的场景信息,所述待显示图像的场景信息是对所述待显示图像进行场景分析后得到的。S210: Obtain scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed.
The scene information can be obtained by performing scene analysis on the image to be displayed with an existing scene recognition algorithm, and may include, but is not limited to, a blue-sky scene, a green-plant scene, a backlit scene, a night scene, and so on. Alternatively, the scene information may be written into the image to be displayed when the image is captured by the camera, so that the scene information is obtained by analyzing the image information contained in the image to be displayed. When the scene information is written into the captured image, it may be written into the MakerNote field of the exchangeable image file (EXIF) data area of the image. The scene information may be carried in the metadata of the image to be displayed, in the EXIF data area, or in the MakerNote field, and includes at least one of the following: an ISO sensitivity value, an aperture value, a shutter time, or an exposure EV value. Specifically, the scene information may be included in the MakerNote field of the EXIF data area of the image to be displayed, or in other fields of the EXIF data area.
S220,基于所述场景信息对所述待显示图像执行如下至少一种处理:色域映射(英文:Gamut Mapping,简称:GMP)、颜色管理(英文:Adaptive Color Management,简称:ACM)、对比度增强(英文:Adaptive Contrast Enhancement,简称:ACE)、锐化、缩放。S220: Perform at least one of the following processes on the image to be displayed based on the scenario information: gamut mapping (English: Gamut Mapping, GMP for short), color management (English: Adaptive Color Management, ACM), contrast enhancement (English: Adaptive Contrast Enhancement, referred to as: ACE), sharpening, scaling.
S230,显示经过处理后的所述待显示图像。S230. Display the processed image to be displayed.
In the embodiments of the present application, gamut mapping, color management, contrast enhancement, sharpening and scaling are applied to an image based on its scene information before the image is displayed, which solves display problems such as color cast, aliasing, blurriness and low contrast that occur when images are browsed with an application on a native Android system. In addition, in the prior art the processing of the image data stream relies on hardware such as the chip or the screen driver circuit to improve the display effect and is therefore limited to the image enhancement algorithms that the hardware can execute, so the display effect is poor; the embodiments of the present application are not restricted to hardware devices, which solves the prior-art problem that hardware-based display enhancement is constrained in algorithm flexibility and yields a poor display effect, thereby improving the display effect of the image.
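As a rough illustration of the flow in S210 to S230, the sketch below dispatches a set of enhancement passes according to the detected scene; the scene labels, pass names and dictionary layout are assumptions made for illustration and are not taken from the disclosure.

```python
from typing import Callable, Dict, List

# Hypothetical per-scene configuration: which enhancement passes to run.
SCENE_PIPELINES: Dict[str, List[str]] = {
    "blue_sky": ["gamut_mapping", "color_management", "contrast_enhancement"],
    "night":    ["contrast_enhancement", "sharpening"],
    "default":  ["gamut_mapping", "scaling"],
}

def enhance_for_display(image, scene: str, passes: Dict[str, Callable]):
    """Apply the passes selected for the detected scene (S220) and return the
    processed image so that it can be displayed (S230)."""
    for name in SCENE_PIPELINES.get(scene, SCENE_PIPELINES["default"]):
        image = passes[name](image)
    return image
```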
可选地,所述基于所述场景信息对所述待显示图像执行色域映射时,具体可以通过如下方式实现,如图3所示:Optionally, when performing the gamut mapping on the to-be-displayed image based on the scenario information, the method may be implemented as follows:
S310: determine a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is used to keep the colors that need to be protected in the image to be displayed unchanged after gamut mapping, and the second three-dimensional lookup table is used to adjust the colors in the image to be displayed that do not need to be protected.
其中,需保护的颜色包括如下至少一项:皮肤颜色、灰度色、记忆色。Among them, the color to be protected includes at least one of the following: skin color, gray color, and memory color.
Grayscale colors are the colors for which the red (R), green (G) and blue (B) values are all equal, and they reflect the different brightness levels from black to white. Protecting gray during the adjustment prevents the relationship R = G = B from being broken in the adjustment process, which would otherwise make originally grayscale areas of the image appear colored and produce a wrong color rendition.
另外,人们在长期实践中对某些颜色的认识形成了深刻的记忆,因此对这些颜色的认识有一定的规律并形成固有的习惯,这类颜色就称为记忆色。在调整过程中对记忆色进行保护,避免了破坏人们对某些物体固有颜色的认识。比如,人们对于草地的认识为绿色。In addition, people have a deep memory of the understanding of certain colors in long-term practice, so the understanding of these colors has a certain regularity and forms an inherent habit. Such colors are called memory colors. The memory color is protected during the adjustment process, avoiding the destruction of people's understanding of the inherent color of certain objects. For example, people's understanding of grassland is green.
S320,确定所述待显示图像包括的N个像素点中第i个像素点的颜色对应在所述第一三维查找表中的第一颜色值,并确定所述第i个像素点的颜色对应在所述第二三维查找表中的第二颜色值。S320, determining that a color of an i-th pixel point among the N pixel points included in the image to be displayed corresponds to a first color value in the first three-dimensional lookup table, and determining a color corresponding to the i-th pixel point a second color value in the second three-dimensional lookup table.
S330,基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值。S330. The first color value and the second color value are fused according to the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping.
所述i取遍不大于N的正整数。The i takes a positive integer not greater than N.
The gamut mapping algorithm of the present application uses dual three-dimensional lookup tables to treat adjusted colors and protected colors separately, so that gamut mapping and color cast correction can be performed while protecting skin tones, grayscale colors and memory colors, and no false contours appear in the transition from adjusted colors to protected colors.
The first three-dimensional lookup table may contain a corresponding color value for any input color. The first three-dimensional lookup table (3dlut1) may also be computed after dividing the color space into M×M×M nodes, that is, M nodes along each of the three dimensions; the division may be uniform or non-uniform, which is not specifically limited in this embodiment of the application. The color space may be the red-green-blue (RGB) space, the hue-saturation-value (HSV) space, the luminance-blue-red (YCbCr) space, or the Lab space, where L denotes luminosity, a denotes the range from magenta to green, and b denotes the range from yellow to blue. In this case, the first three-dimensional lookup table may contain only the input-output correspondences of the colors at the nodes, and the input-output correspondences of colors not at the nodes can be generated from the node colors using a three-dimensional interpolation algorithm, such as a trilinear, tetrahedral, pyramid or prism interpolation method.
The second three-dimensional lookup table may likewise contain a corresponding color value for any input color. The second three-dimensional lookup table (3dlut2) may also be computed after dividing the color space into N×N×N nodes, that is, N nodes along each of the three dimensions; the division may be uniform or non-uniform, which is not specifically limited in this embodiment of the application. The color space may be the RGB space, the HSV space, the YCbCr space, or the Lab space. In this case, the second three-dimensional lookup table may contain only the input-output correspondences of the colors at the nodes, and the input-output correspondences of colors not at the nodes can be generated from the node colors using a three-dimensional interpolation algorithm, such as a trilinear, tetrahedral, pyramid or prism interpolation method.
On the basis of the implementation of gamut mapping based on the scene information provided above, when the first three-dimensional lookup table does not contain a color value corresponding to the color of the i-th pixel, the color value corresponding to the color of the i-th pixel is obtained by interpolating the colors included in the first three-dimensional lookup table with a three-dimensional interpolation algorithm; or,
when the second three-dimensional lookup table does not contain a color value corresponding to the color of the i-th pixel, the color value corresponding to the color of the i-th pixel is obtained by interpolating the colors included in the second three-dimensional lookup table with a three-dimensional interpolation algorithm.
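The following is a minimal sketch of such a node-based lookup with trilinear interpolation, assuming a uniformly divided RGB cube; the array layout and function name are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def lut3d_lookup(lut, rgb, max_value=255.0):
    """Trilinear lookup in an M x M x M x 3 table `lut` whose nodes uniformly
    divide the RGB cube [0, max_value]^3. `rgb` is one input color."""
    m = lut.shape[0]
    # Continuous node coordinates of the input color.
    pos = np.asarray(rgb, dtype=np.float64) / max_value * (m - 1)
    lo = np.clip(np.floor(pos).astype(int), 0, m - 2)
    frac = pos - lo
    out = np.zeros(3)
    # Accumulate the 8 surrounding nodes with trilinear weights.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0]) *
                     (frac[1] if dg else 1 - frac[1]) *
                     (frac[2] if db else 1 - frac[2]))
                out += w * lut[lo[0] + dr, lo[1] + dg, lo[2] + db]
    return out

# Identity LUT with 17 nodes per axis: the lookup reproduces the input color.
grid = np.linspace(0, 255, 17)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(lut3d_lookup(identity, (30, 128, 200)))   # ≈ [ 30. 128. 200.]
```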
在基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值时,所述融合权重可以通过如下公式确定:When the first color value and the second color value are fused to obtain the color value of the ith pixel point after the gamut mapping based on the predetermined fusion weight, the fusion weight may be determined by the following formula:
Figure PCTCN2016108729-appb-000021
where Wblending denotes the fusion weight, Tblending denotes a threshold used to control how smoothly the transition from adjusted colors to protected colors proceeds, and Δd denotes the shortest distance between the color node corresponding to the color of the i-th pixel in the three-dimensional color space and the convex hull formed in that space by the color nodes of the first three-dimensional lookup table; Δd < 0 indicates that the color node of the i-th pixel lies inside the convex hull; Δd = 0 indicates that it lies on the convex hull; Δd > 0 indicates that it lies outside the convex hull. The fusion weight Wblending and Δd satisfy the relationship shown in FIG. 4.
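The exact weight formula is given by the expression above (reproduced in the publication only as an image) and by FIG. 4. One plausible reading, shown purely as an assumption, is a ramp that equals 1 inside the convex hull of the protected-color nodes and falls to 0 once the distance outside the hull reaches Tblending:

```python
def blending_weight(delta_d: float, t_blending: float) -> float:
    """Assumed piecewise-linear ramp: fully protect colors inside the convex
    hull (delta_d <= 0) and fully adjust colors farther than t_blending
    outside it; in between, transition linearly."""
    if delta_d <= 0.0:
        return 1.0
    if delta_d >= t_blending:
        return 0.0
    return 1.0 - delta_d / t_blending
```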
在其中可能的实现方式中,基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值时,通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:In a possible implementation manner, when the first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, the method is determined as follows The color value of the i-th pixel after the gamut mapping:
Aout=A1*Wblending+A2*(1-Wblending);A out =A 1 *W blending +A 2 *(1-W blending );
其中,Aout表示经过色域映射后的所述第i个像素点的颜色值,A1表示所述第一颜色值,A2表示所述第二颜色值,Wblending表示所述融合权重。Wherein A out represents a color value of the i-th pixel point after gamut mapping, A 1 represents the first color value, A 2 represents the second color value, and W blending represents the fusion weight.
Take the RGB color space as an example. Rin, Gin and Bin denote the color of the i-th pixel. R1, G1 and B1 are the outputs of (Rin, Gin, Bin) through the first three-dimensional lookup table 3dlut1, and R2, G2 and B2 are the outputs of (Rin, Gin, Bin) through the second three-dimensional lookup table 3dlut2. Δd is the shortest distance from the node (Rin, Gin, Bin) to the convex hull formed by the nodes included in 3dlut1, and it can be computed from the spatial geometric relationship. When Δd < 0, (Rin, Gin, Bin) lies inside the convex hull; when Δd = 0, it lies on the convex hull; when Δd > 0, it lies outside the convex hull.
具体可以通过融合权重Wblending及R1、G1、B1、R2、G2、B2确定经过色域映射后输出的三通道值Rout、Gout、Bout:Specifically, the three-channel values Rout, Gout, and Bout output after the gamut mapping are determined by the fusion weight W blending and R1, G1, B1, R2, G2, and B2:
Rout = R1*Wblending + R2*(1-Wblending);
Gout = G1*Wblending + G2*(1-Wblending);
Bout = B1*Wblending + B2*(1-Wblending);
本申请实施例中,在所述基于所述场景信息对所述待显示图像执行颜色管理时,具体可以通过如下方式实现,如图5所示:In the embodiment of the present application, when the color management is performed on the image to be displayed based on the scene information, the method may be implemented as follows:
S510,获取所述场景信息对应的调整所述待显示图像的像素点饱和度的调整策略。S510. Acquire an adjustment strategy for adjusting pixel saturation of the image to be displayed corresponding to the scene information.
Different scene information corresponds to different adjustment strategies. For example, for scenes containing blue sky or natural plants, the saturation may be increased to optimize the display effect; for scenes containing skin tones, the saturation may be decreased so that the skin looks smoother. The adjustment strategies corresponding to the different kinds of scene information can be pre-configured in the electronic device.
针对所述待显示图像中的第j个像素点的亮度饱和度色调YSH值分别进行如下S520~S530所述的方式调整,所述j取遍不大于N的正整数。待显示图像中N个像素点中其它像素点YSH值均按照第j个像素点的调整方法调整。The brightness saturation hue YSH value of the j-th pixel in the image to be displayed is adjusted in the manner described in S520 to S530, respectively, and the j is taken over a positive integer not greater than N. The other pixel point YSH values among the N pixel points in the image to be displayed are adjusted according to the adjustment method of the jth pixel point.
S520: keep the hue H value of the j-th pixel unchanged, and adjust the saturation S value of the j-th pixel based on the adjustment strategy.
S530: input the S value of the j-th pixel before adjustment, the adjusted S value and the luminance Y of the j-th pixel into the pre-determined quadratic function corresponding to the H value of the j-th pixel to obtain the compensated luminance Y value of the j-th pixel.
The color management approach provided in the embodiments of the present application solves, in a simple way, the luminance change that occurs when saturation is adjusted in the YSH color space and the display-quality problems caused by that luminance change.
可选地,所述第j个像素点的H值对应的二次方程函数满足如下公式所述的条件:Optionally, the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
Figure PCTCN2016108729-appb-000023
where Yout denotes the output luminance value of the j-th pixel, Yin denotes the input luminance value of the j-th pixel, ΔY denotes the luminance compensation value, a1, a2, a3, a4, a5 and a6 denote the coefficients of the quadratic function, Sin denotes the saturation value of the j-th pixel before adjustment, and Sout denotes the saturation value of the j-th pixel after adjustment.
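The concrete quadratic expression is given only as an image in the publication. As a hedged illustration, the sketch below assumes that ΔY is a general quadratic polynomial in the pre- and post-adjustment saturation values with the six coefficients a1 to a6; the exact grouping of terms in the patented formula may differ.

```python
def compensated_luminance(y_in, s_in, s_out, coeffs):
    """Y_out = Y_in + ΔY, with ΔY modeled (assumption) as
    ΔY = a1*S_in^2 + a2*S_out^2 + a3*S_in*S_out + a4*S_in + a5*S_out + a6."""
    a1, a2, a3, a4, a5, a6 = coeffs
    delta_y = (a1 * s_in ** 2 + a2 * s_out ** 2 + a3 * s_in * s_out
               + a4 * s_in + a5 * s_out + a6)
    return y_in + delta_y
```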
In this embodiment of the application, each hue in the YSH space may be configured with its own quadratic function. Alternatively, the YSH color space may be divided to obtain p primary hues, each primary hue corresponding to one set of equation parameters that constitutes a quadratic function; the parameters of the quadratic functions for hues lying between primary hues are generated by interpolating the parameters of the quadratic functions corresponding to the two nearest primary hues.
When the quadratic function corresponding to each primary hue is determined in advance, for a primary hue A among the p primary hues and for a given saturation adjustment range, for example adjusting the saturation from Sin to Sout, the luminance compensation value ΔY can be obtained through a luminance calibration experiment. The coefficients a1, a2, a3, a4, a5 and a6 are then solved by least-squares fitting from the ΔY obtained in the calibration experiment together with the input saturation Sin and the output saturation Sout, which yields the quadratic function corresponding to the primary hue A.
Based on this, in the case where a quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained in advance by solving a least-squares fit using the saturation value of the j-th pixel before adjustment, the saturation value of the j-th pixel after adjustment, and the luminance compensation value.
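Before turning to the case where no matching hue exists, the following is a rough sketch of such a calibration fit, under the same assumed quadratic form as above; the helper name and the use of NumPy are illustrative.

```python
import numpy as np

def fit_quadratic_coefficients(s_in, s_out, delta_y):
    """Least-squares fit of a1..a6 from calibration samples, assuming
    ΔY = a1*S_in^2 + a2*S_out^2 + a3*S_in*S_out + a4*S_in + a5*S_out + a6."""
    s_in = np.asarray(s_in, dtype=np.float64)
    s_out = np.asarray(s_out, dtype=np.float64)
    delta_y = np.asarray(delta_y, dtype=np.float64)
    design = np.column_stack([s_in ** 2, s_out ** 2, s_in * s_out,
                              s_in, s_out, np.ones_like(s_in)])
    coeffs, *_ = np.linalg.lstsq(design, delta_y, rcond=None)
    return coeffs  # a1, a2, a3, a4, a5, a6
```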
In the case where no quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained by interpolating the coefficients of the quadratic functions corresponding to the two hue values that are the nearest neighbors of the H value of the j-th pixel.
具体的,在不存在所述第j个像素点的H值对应的二次方程函数情况下,所述a1、a2、a3、a4、a5、a6通过如下公式得到:Specifically, in the case where there is no quadratic function corresponding to the H value of the jth pixel point, the a1, a2, a3, a4, a5, and a6 are obtained by the following formula:
ak = a(p1,k) + ((H - Hp1) / (Hp2 - Hp1)) * (a(p2,k) - a(p1,k)), for k = 1, 2, ..., 6
where ak denotes the k-th coefficient among a1, a2, a3, a4, a5, a6; H denotes the hue value of the j-th pixel; Hp1 and Hp2 denote the two hue values that are the nearest neighbors of the H value of the j-th pixel, with Hp1 < H < Hp2; a(p1,k) denotes the k-th coefficient of the quadratic function corresponding to Hp1, and a(p2,k) denotes the k-th coefficient of the quadratic function corresponding to Hp2.
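A minimal sketch of this coefficient interpolation is shown below; it assumes plain linear interpolation between the two nearest primary hues (hue wrap-around is ignored for simplicity), and the data layout is an illustrative assumption.

```python
def interpolate_coefficients(h, primary_hues, primary_coeffs):
    """For a hue h without its own quadratic function, interpolate each
    coefficient a_k between the two nearest primary hues.
    primary_hues: sorted list of hue values; primary_coeffs[p]: [a1..a6]."""
    for p in range(len(primary_hues) - 1):
        h_p1, h_p2 = primary_hues[p], primary_hues[p + 1]
        if h_p1 < h < h_p2:
            t = (h - h_p1) / (h_p2 - h_p1)
            return [c1 + t * (c2 - c1)
                    for c1, c2 in zip(primary_coeffs[p], primary_coeffs[p + 1])]
    raise ValueError("h coincides with a primary hue or lies outside the range")
```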
可选地,本申请实施例中,在基于所述场景信息对所述待显示图像执行缩放,具体可以通过如下方式实现,如图6所示:Optionally, in the embodiment of the present application, performing scaling on the image to be displayed based on the scenario information may be implemented in the following manner, as shown in FIG. 6 :
S610,确定所述待显示图像的缩放倍数,以及根据所述场景信息确定所述待显示图像中周期性图案的最高频率。其中,不同的场景信息对应的图像中周期性图案的最高频率不同。S610. Determine a zoom factor of the image to be displayed, and determine a highest frequency of the periodic pattern in the image to be displayed according to the scene information. The highest frequency of the periodic pattern in the image corresponding to different scene information is different.
待显示图像中周期性图案是指图像中具有预设规则排列的图案,比如日常生活中栏杆的排列图案,屋顶上瓦片的排列图案等等。The periodic pattern in the image to be displayed refers to a pattern having a preset regular arrangement in the image, such as an arrangement pattern of railings in daily life, an arrangement pattern of tiles on the roof, and the like.
The highest frequency of the periodic pattern refers to the highest frequency among the frequencies corresponding to the periodic pattern content after the image to be displayed containing the periodic pattern has been transformed into the frequency domain.
其中,根据所述场景信息确定所述待显示图像中周期性图案的最高频率,包括:The determining, according to the scenario information, a highest frequency of the periodic pattern in the image to be displayed, including:
根据场景信息确定所述待显示图像中包括周期性图案;Determining, according to the scene information, that the image to be displayed includes a periodic pattern;
将所述待显示图像进行频域变换,并获取频域变换后所述周期性图案对应的频率中的最高频率。Performing frequency domain transformation on the image to be displayed, and acquiring a highest frequency among frequencies corresponding to the periodic pattern after frequency domain transformation.
S620: determine, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and to the highest frequency of the periodic pattern in the image to be displayed; and determine, according to the zoom factor and the highest frequency of the periodic pattern in the image to be displayed, the coefficients of the cubic function corresponding to the bicubic filter.
其中,预设的查找表可以通过如下方式生成:The preset lookup table can be generated as follows:
首先确定缩放倍数与第一滤波器抽头(tap)数之间的对应关系,具体可以通过如下公式表示:First, the correspondence between the scaling factor and the number of taps of the first filter is determined, which can be specifically expressed by the following formula:
tap1 = 1 + (SizeFactor*srcLen + dstLen - 1)/dstLen;
where tap1 denotes the number of taps of the first filter, srcLen denotes the resolution of the source image, dstLen denotes the resolution of the target image, and
Figure PCTCN2016108729-appb-000025
denotes the zoom factor; SizeFactor denotes a preset scaling coefficient, which may be an empirical value. The first filter tap number tap1 is then fine-tuned based on the highest frequency fmax of the image to obtain the second filter tap number tap2. Specifically, tap2 = tap1/(FreqFactor*fmax), where FreqFactor denotes a preset frequency coefficient.
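As a hedged illustration of the tap selection just described, the sketch below evaluates the two formulas; SizeFactor and FreqFactor are placeholder empirical values, and rounding tap2 to an integer is an assumption the text leaves implicit.

```python
def filter_taps(src_len, dst_len, f_max, size_factor=4, freq_factor=2.0):
    """tap1 = 1 + (SizeFactor*srcLen + dstLen - 1) // dstLen, then
    tap2 = tap1 / (FreqFactor * f_max), fine-tuned by the highest frequency."""
    tap1 = 1 + (size_factor * src_len + dst_len - 1) // dst_len
    tap2 = max(2, round(tap1 / (freq_factor * f_max)))  # assumed rounding
    return tap1, tap2

# Example: downscaling 4000 px to 1000 px, normalized highest frequency 0.4.
print(filter_taps(4000, 1000, 0.4))   # (17, 21)
```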
S630,根据所述抽头数量以及确定的三次方程函数对所述待显示图像进行缩放。S630. Scale the image to be displayed according to the number of taps and the determined cubic function.
It should be noted that the number of taps is the number of sampled pixels. Specifically, for each pixel of the target image dst, the corresponding pixel position on the source image src is found, the determined number tap2 of sampled pixels (reference points) around that corresponding position is adaptively selected, the weight of each sampled pixel is computed from the bicubic interpolation function, and the pixel value of the scaled image is obtained as the weighted sum of the pixel values of the tap2 sampled pixels with their weights, which finally yields the scaled target image.
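The following one-dimensional sketch illustrates this weighted resampling (a two-dimensional bicubic scaler applies it separably along both axes). The kernel used here is a standard Catmull-Rom cubic chosen only as a placeholder, since the disclosed filter's own kernel, parameterized by dB and dC, is given by the formulas further below.

```python
import numpy as np

def catmull_rom(x):
    """Placeholder cubic kernel (B = 0, C = 0.5); the disclosed filter instead
    uses coefficients dB, dC adapted to the image's highest frequency."""
    x = abs(x)
    if x < 1:
        return 1.5 * x**3 - 2.5 * x**2 + 1
    if x < 2:
        return -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2
    return 0.0

def resample_1d(src, dst_len, taps=4, kernel=catmull_rom):
    """For every target sample, locate its position on the source signal, take
    `taps` neighbouring source samples and combine them with kernel weights."""
    src = np.asarray(src, dtype=np.float64)
    scale = len(src) / dst_len
    out = np.empty(dst_len)
    for i in range(dst_len):
        center = (i + 0.5) * scale - 0.5          # corresponding source position
        base = int(np.floor(center)) - taps // 2 + 1
        idx = np.clip(np.arange(base, base + taps), 0, len(src) - 1)
        w = np.array([kernel(center - j) for j in range(base, base + taps)])
        if w.sum() != 0:
            w = w / w.sum()                        # normalize the weights
        out[i] = np.dot(w, src[idx])
    return out

print(resample_1d(np.arange(8, dtype=float), 4))
```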
如果只根据图像缩放倍数而不考虑图像内容中的最高频率进行缩放,会导致滤波器的截止频率和图像内容的最高频率不一定匹配,那么最高频率对应的图像内容就可能出现混叠或者是模糊。因此根据本申请实施例提供的缩放方法,能够基于图像场景信息及图像最高频率进行自适应抗混叠,从而提高了图像的优化效果。If the scaling is performed only according to the image scaling factor without considering the highest frequency in the image content, the cutoff frequency of the filter and the highest frequency of the image content do not necessarily match, and the image content corresponding to the highest frequency may be aliased or blurred. . Therefore, according to the scaling method provided by the embodiment of the present application, adaptive anti-aliasing can be performed based on the image scene information and the highest frequency of the image, thereby improving the image optimization effect.
可选地,所述双立方滤波器对应的三次方程函数满足如下公式:Optionally, the cubic function corresponding to the bicubic filter satisfies the following formula:
Figure PCTCN2016108729-appb-000026
Figure PCTCN2016108729-appb-000027
其中,x为输入变量,表示参考点与中心的距离,k(x)表示参考点对应的权重值,dB、dC分别表示三次方程函数的系数;B、C均为常数。这里所述的中心为目标图像dst像素点在源图像src上的对应像素点位置。Where x is an input variable representing the distance between the reference point and the center, k(x) represents the weight value corresponding to the reference point, and dB and dC respectively represent the coefficients of the cubic equation function; B and C are constants. The center described here is the corresponding pixel point position of the target image dst pixel on the source image src.
可选地,根据所述待显示图像中周期性图案的最高频率确定双立方滤波器对应的三次方程函数的系数,具体可以通过如下方式实现:Optionally, determining a coefficient of a cubic function corresponding to the bicubic filter according to a highest frequency of the periodic pattern in the image to be displayed, which may be specifically implemented as follows:
dB = g*fmax*dC;
其中,g为常数,fmax表示图像的最高频率。Where g is a constant and f max represents the highest frequency of the image.
基于与方法实施例同样的发明构思,本申请实施例还提供了一种图像显示增强装置,所述装置应用于电子设备,具体应用于终端100,由终端100中的处理器120实现,如图7A所示,所述装置包括:Based on the same inventive concept as the method embodiment, the embodiment of the present application further provides an image display enhancement device, which is applied to an electronic device, and is specifically applied to the terminal 100, and is implemented by the processor 120 in the terminal 100. As shown in 7A, the device includes:
显示增强模块710,用于获取待显示图像的场景信息,所述待显示图像的场景信息是对所述待显示图像进行场景分析后得到的;并基于所述场景信息 对所述待显示图像执行如下至少一种处理:色域映射、颜色管理、对比度增强、锐化或缩放。a display enhancement module 710, configured to acquire scene information of an image to be displayed, where scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; and based on the scene information Performing at least one of the following processes on the image to be displayed: gamut mapping, color management, contrast enhancement, sharpening, or scaling.
显示模块720,用于显示经过所述显示增强模块710处理后的所述待显示图像。The display module 720 is configured to display the image to be displayed processed by the display enhancement module 710.
可选地,所述显示增强模块710还可以包括色域映射模块711、颜色管理模块712以及缩放模块713。Optionally, the display enhancement module 710 may further include a gamut mapping module 711, a color management module 712, and a scaling module 713.
Optionally, the gamut mapping performed on the image to be displayed based on the scene information is carried out by the gamut mapping module 711 included in the display enhancement module 710; the color management performed on the image to be displayed based on the scene information is carried out by the color management module 712; and the scaling performed on the image to be displayed based on the scene information is carried out by the scaling module 713.
When the image display enhancement apparatus is applied to a scenario in which images are browsed through an application, taking the Android system as an example, the Android system framework for this scenario includes, in addition to the display enhancement module 710 and the display module 720, an application module 730, as shown in FIG. 7B. When the user triggers browsing and the image signal of the image to be displayed is sent to the application module 730, the application module 730 first sends the image data stream of the image to be displayed to the scene information parsing module 740, and the scene information parsing module 740 parses the image data stream to obtain the scene information of the image to be displayed. The scene information included in the image to be displayed may have been written into the image when the image was captured by the camera; when the scene information is written into the captured image, it may be written into the MakerNote field of the EXIF data area of the image. In addition, the scene information parsing module 740 may extract the scene information of the image based on a scene classification algorithm; the scene classification algorithm may be any algorithm disclosed in the prior art, which is not specifically limited in this embodiment of the application.
After obtaining the scene information from the image to be displayed, the scene information parsing module 740 sends the scene information to the display enhancement module 710. The scene information parsing module 740 may also send the scene information to the application module 730, so that the application module 730, after receiving the scene information, sends the image data stream to the display enhancement module 710. Having received the scene information from the scene information parsing module 740 and the image data stream from the application module 730, the display enhancement module 710 performs display enhancement processing on the image data stream based on the scene information, including but not limited to gamut mapping, color management, contrast enhancement, sharpening and scaling, and then sends the processed image data stream to the display module 720 for display. Alternatively, after obtaining the scene information from the image to be displayed, the scene information parsing module 740 may send the scene information together with the image data stream of the image to be displayed to the display enhancement module 710, so that the display enhancement module 710 performs display enhancement processing on the image data stream based on the scene information.
The application module 730 may include, but is not limited to, image browsing applications, social applications, online shopping applications, and the like.
The apparatus may further include an image data decoding module. Specifically, before the application module 730 sends the image data stream of the image to be displayed to the scene information parsing module 740, it first sends the image data stream to the image data decoding module for decoding, and the decoded image data is then sent to the scene information parsing module 740. When decoding the image data, the image data decoding module may specifically call the skia library. The skia library is a 2D (two-dimensional) vector graphics library; the fonts, coordinate transformations, and bitmaps contained in an image can all be rendered efficiently and concisely by calling the skia library.
可选地,在基于所述场景信息对所述待显示图像执行色域映射时,所述显示增强模块710具体用于:Optionally, when the gamut mapping is performed on the image to be displayed based on the scenario information, the display enhancement module 710 is specifically configured to:
determine a first three-dimensional lookup table and a second three-dimensional lookup table corresponding to the scene information, where the first three-dimensional lookup table is used to keep the colors that need to be protected in the image to be displayed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is used to adjust the colors in the image to be displayed that do not need to be protected; determine the first color value, in the first three-dimensional lookup table, corresponding to the color of the i-th pixel among the N pixels included in the image to be displayed, and determine the second color value, in the second three-dimensional lookup table, corresponding to the color of the i-th pixel; and fuse the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel after gamut mapping, where i ranges over the positive integers not greater than N.
可选地,所述显示增强模块710,还用于:Optionally, the display enhancement module 710 is further configured to:
when the first three-dimensional lookup table does not include a color value corresponding to the color of the i-th pixel, obtain the color value corresponding to the color of the i-th pixel by interpolating the colors included in the first three-dimensional lookup table using a three-dimensional interpolation algorithm; or, when the second three-dimensional lookup table does not include a color value corresponding to the color of the i-th pixel, obtain the color value corresponding to the color of the i-th pixel by interpolating the colors included in the second three-dimensional lookup table using a three-dimensional interpolation algorithm.
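For illustration only, the following minimal sketch (not from the patent) shows one way such a lookup could be implemented, assuming each three-dimensional lookup table is stored as a (K, K, K, 3) numpy array indexed by quantized R, G and B, with trilinear interpolation standing in for the unspecified three-dimensional interpolation algorithm:

```python
# Sketch under stated assumptions: each 3D LUT is a (K, K, K, 3) array over
# quantized R, G, B; trilinear interpolation between the eight surrounding
# LUT nodes stands in for the unspecified 3D interpolation algorithm.
import numpy as np

def lut_lookup(lut, rgb):
    """Look up an RGB triple (floats in [0, 1]) with trilinear interpolation."""
    k = lut.shape[0] - 1
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * k
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, k)
    f = pos - lo                                   # fractional offset per channel
    out = np.zeros(3)
    for r, wr in ((lo[0], 1 - f[0]), (hi[0], f[0])):
        for g, wg in ((lo[1], 1 - f[1]), (hi[1], f[1])):
            for b, wb in ((lo[2], 1 - f[2]), (hi[2], f[2])):
                out += wr * wg * wb * lut[r, g, b]
    return out

# First and second color values of one pixel (protect_lut / adjust_lut are
# hypothetical names for the two scene-specific lookup tables):
# a1 = lut_lookup(protect_lut, pixel_rgb)
# a2 = lut_lookup(adjust_lut, pixel_rgb)
```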
所述显示增强模块710,还用于通过如下公式确定所述融合权重:The display enhancement module 710 is further configured to determine the fusion weight by using the following formula:
Figure PCTCN2016108729-appb-000028
where Wblending denotes the fusion weight; Tblending denotes a threshold used to control how smoothly the transition from the adjusted colors to the protected colors proceeds; and Δd denotes the shortest distance between the color node, in the three-dimensional color space, corresponding to the color of the i-th pixel and the convex hull formed in the three-dimensional color space by the color nodes of the first three-dimensional lookup table; Δd<0 indicates that the color node of the i-th pixel lies inside the convex hull, Δd=0 indicates that it lies on the convex hull, and Δd>0 indicates that it lies outside the convex hull.
所述显示增强模块710,在基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值时,具体用于:The display enhancement module 710 is configured to: when the first color value and the second color value are fused to obtain the color value of the ith pixel point after the gamut mapping based on the predetermined fusion weight, specifically:
通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:The color value of the ith pixel point after the gamut mapping is determined by:
Aout = A1 * Wblending + A2 * (1 - Wblending);
where Aout denotes the color value of the i-th pixel after gamut mapping, A1 denotes the first color value, A2 denotes the second color value, and Wblending denotes the fusion weight.
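A sketch of the fusion step is given below. The blend itself follows the Aout formula above; the shape chosen for Wblending is an assumption, since the source gives the weight formula only as an image: a clamped linear ramp over the signed hull distance Δd, so that colors on or inside the protection hull keep weight 1 and colors farther than Tblending outside it get weight 0.

```python
# The blend follows Aout = A1*Wblending + A2*(1 - Wblending) from the text.
# The ramp used for Wblending is an assumption chosen only for illustration.
import numpy as np

def fusion_weight(delta_d, t_blending):
    """delta_d: signed shortest distance to the convex hull of the protected LUT's
    color nodes (negative inside, zero on, positive outside the hull)."""
    if delta_d <= 0.0:
        return 1.0                                  # inside or on the hull: fully protected
    return float(np.clip(1.0 - delta_d / t_blending, 0.0, 1.0))

def fuse(a1, a2, w_blending):
    a1 = np.asarray(a1, dtype=float)                # first color value (protection LUT)
    a2 = np.asarray(a2, dtype=float)                # second color value (adjustment LUT)
    return a1 * w_blending + a2 * (1.0 - w_blending)
```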
所述显示增强模块710,在基于所述场景信息对所述待显示图像执行颜色管理时,具体用于:The display enhancement module 710 is specifically configured to: when performing color management on the image to be displayed based on the scenario information:
obtain an adjustment policy, corresponding to the scene information, for adjusting the pixel saturation of the image to be displayed; and perform the following adjustments on the luminance saturation hue (YSH) value of the j-th pixel in the image to be displayed: keep the hue H value of the j-th pixel unchanged, and adjust the saturation S value of the j-th pixel based on the adjustment policy; input the S value of the j-th pixel before adjustment, the adjusted S value, and the luminance Y of the j-th pixel into a predetermined quadratic function corresponding to the H value of the j-th pixel to obtain the compensated luminance Y value of the j-th pixel; j ranges over the positive integers not greater than N.
其中,所述第j个像素点的H值对应的二次方程函数满足如下公式所述的条件:Wherein, the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
Figure PCTCN2016108729-appb-000029
where Yout denotes the output luminance value of the j-th pixel, Yin denotes the input luminance value of the j-th pixel, ΔY denotes the luminance compensation value, a1, a2, a3, a4, a5 and a6 denote the coefficients of the quadratic function, Sin denotes the saturation value of the j-th pixel before adjustment, and Sout denotes the saturation value of the j-th pixel after adjustment.
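For illustration, a sketch of this compensation step follows. The concrete quadratic form is not reproduced in the text (it appears only as an image), so the six-coefficient polynomial in (Sin, Sout) below is an assumption that merely matches "quadratic" and "a1…a6"; the adjust_saturation and coeffs_for_hue helpers are likewise hypothetical stand-ins for the scene-dependent policy and the per-hue coefficient store.

```python
def compensate_luminance(y_in, s_in, s_out, coeffs):
    """ΔY is assumed to be a generic six-coefficient quadratic in (S_in, S_out)."""
    a1, a2, a3, a4, a5, a6 = coeffs                # coefficients fitted per hue H
    delta_y = (a1 * s_in ** 2 + a2 * s_out ** 2 + a3 * s_in * s_out
               + a4 * s_in + a5 * s_out + a6)
    return y_in + delta_y

def color_manage_pixel(y, s, h, adjust_saturation, coeffs_for_hue):
    """Keep H, adjust S according to the scene's policy, then compensate Y."""
    s_out = adjust_saturation(s, h)                # hypothetical scene-dependent policy
    y_out = compensate_luminance(y, s, s_out, coeffs_for_hue(h))
    return y_out, s_out, h
```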
The display enhancement module 710 is further configured to determine the coefficients of the quadratic function in the following manner:
When a quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained in advance by solving, based on a least-squares fitting algorithm, using the saturation value of the j-th pixel before adjustment, the saturation value of the j-th pixel after adjustment, and the luminance compensation value; or,

when no quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained by interpolating the coefficients of the quadratic functions respectively corresponding to the two hue values nearest to the H value of the j-th pixel.
在不存在所述第j个像素点的H值对应的二次方程函数情况下,所述显示增强模块710,具体用于通过如下公式确定a1、a2、a3、a4、a5、a6:In the case where there is no quadratic function corresponding to the H value of the jth pixel, the display enhancement module 710 is specifically configured to determine a1, a2, a3, a4, a5, a6 by the following formula:
Figure PCTCN2016108729-appb-000030
ak denotes the k-th coefficient among a1, a2, a3, a4, a5 and a6; H denotes the hue value of the j-th pixel; Hp1 and Hp2 denote the two hue values nearest to the H value of the j-th pixel, with Hp1 < H < Hp2; a(p1,k) denotes the k-th coefficient of the quadratic function corresponding to Hp1, and a(p2,k) denotes the k-th coefficient of the quadratic function corresponding to Hp2.
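The nearest-neighbor fallback could look like the following sketch; since the interpolation formula itself appears only as an image, plain linear interpolation of each coefficient between the two hue anchors Hp1 < H < Hp2 is assumed here.

```python
def interpolate_coeffs(h, hp1, hp2, coeffs_p1, coeffs_p2):
    """Assumed linear interpolation of the six coefficients between the two
    hue anchors nearest to H (Hp1 < H < Hp2)."""
    t = (h - hp1) / (hp2 - hp1)                    # 0 at Hp1, 1 at Hp2
    return [c1 + t * (c2 - c1) for c1, c2 in zip(coeffs_p1, coeffs_p2)]
```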
可选地,所述显示增强模块710,在基于所述场景信息对所述待显示图像执行缩放时,具体用于:Optionally, the display enhancement module 710 is specifically configured to: when performing scaling on the image to be displayed based on the scene information,
determine a zoom factor of the image to be displayed, and determine, according to the scene information, the highest frequency of the periodic patterns in the image to be displayed; determine, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and to the highest frequency of the periodic patterns in the image to be displayed; determine the coefficients of the cubic function corresponding to the bicubic filter according to the zoom factor and the highest frequency of the periodic patterns in the image to be displayed; and scale the image to be displayed according to the number of taps and the determined cubic function.
其中,所述双立方滤波器对应的三次方程函数满足如下公式所示的条件:Wherein, the cubic function corresponding to the bicubic filter satisfies the condition shown by the following formula:
Figure PCTCN2016108729-appb-000031
Figure PCTCN2016108729-appb-000032
where x denotes the input variable, k(x) denotes the weight value corresponding to the input variable x, dB and dC denote the coefficients of the cubic function, and B and C are constants.
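A sketch of the scaling step is given below. Because the kernel formulas above appear only as images, the standard B/C (Mitchell-Netravali) piecewise-cubic family, parameterized by the constants B and C, is used as a stand-in, and the tap lookup table is hypothetical; the kernel is widened when downscaling as a simple form of anti-aliasing.

```python
# Sketch under stated assumptions: the Mitchell-Netravali B/C cubic family stands
# in for the patent's kernel, and the tap lookup table values are assumed.
import numpy as np

TAP_TABLE = {                      # (scale_bucket, max_frequency_bucket) -> taps (assumed)
    ("down", "high"): 8, ("down", "low"): 4,
    ("up", "high"): 4, ("up", "low"): 4,
}

def bc_kernel(x, b=1/3, c=1/3):
    """Piecewise cubic kernel parameterized by the constants B and C."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3 + (-18 + 12*b + 6*c) * x**2 + (6 - 2*b)) / 6
    if x < 2:
        return ((-b - 6*c) * x**3 + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x + (8*b + 24*c)) / 6
    return 0.0

def resample_row(row, scale, taps=4, b=1/3, c=1/3):
    """1-D resampling; running it over rows and then columns scales an image."""
    row = np.asarray(row, dtype=float)
    out_len = max(1, int(round(len(row) * scale)))
    stretch = max(1.0, 1.0 / scale)            # widen the kernel when downscaling
    support = (taps / 2.0) * stretch
    out = np.zeros(out_len)
    for i in range(out_len):
        center = (i + 0.5) / scale - 0.5       # output pixel center in input coordinates
        js = np.arange(int(np.floor(center - support)), int(np.ceil(center + support)) + 1)
        weights = np.array([bc_kernel((j - center) / stretch, b, c) for j in js])
        idx = np.clip(js, 0, len(row) - 1)     # clamp samples at the image border
        total = weights.sum()
        out[i] = (weights * row[idx]).sum() / total if total else row[idx[len(idx) // 2]]
    return out

# e.g. taps = TAP_TABLE[("down", "high")] when strongly downscaling a high-frequency pattern
```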
需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。在本申请的实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner. The functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a terminal device, or the like) or a processor (for example, the processor 120 shown in FIG. 1) to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
本申请实施例还提供了一种图像色域映射方法,该方法包括:An embodiment of the present application further provides an image gamut mapping method, where the method includes:
obtain an image to be processed; determine the first color value, in a first three-dimensional lookup table, corresponding to the color of the i-th pixel among the N pixels included in the image to be processed, and determine the second color value, in a second three-dimensional lookup table, corresponding to the color of the i-th pixel, where the first three-dimensional lookup table is used to keep the colors that need to be protected in the image to be processed the same after gamut mapping as before gamut mapping, and the second three-dimensional lookup table is used to adjust the colors in the image to be processed that do not need to be protected; and fuse the first color value and the second color value based on a predetermined fusion weight to obtain the color value of the i-th pixel after gamut mapping, where i ranges over the positive integers not greater than N.
The gamut mapping method of the present application distinguishes adjusted colors from protected colors based on dual three-dimensional lookup tables, and can achieve gamut mapping and color cast correction while protecting skin tones, gray tones and memory colors, and while ensuring that no false contours appear in the transition from the adjusted colors to the protected colors.
在一种可能的设计中,所述方法还包括:In one possible design, the method further includes:
When the first three-dimensional lookup table does not include a color value corresponding to the color of the i-th pixel, the color value corresponding to the color of the i-th pixel is obtained by interpolating the colors included in the first three-dimensional lookup table using a three-dimensional interpolation algorithm; or

when the second three-dimensional lookup table does not include a color value corresponding to the color of the i-th pixel, the color value corresponding to the color of the i-th pixel is obtained by interpolating the colors included in the second three-dimensional lookup table using a three-dimensional interpolation algorithm.
在一种可能的设计中,所述融合权重通过如下公式确定: In one possible design, the fusion weight is determined by the following formula:
Figure PCTCN2016108729-appb-000033
where Wblending denotes the fusion weight; Tblending denotes a threshold used to control how smoothly the transition from the adjusted colors to the protected colors proceeds; and Δd denotes the shortest distance between the color node, in the three-dimensional color space, corresponding to the color of the i-th pixel and the convex hull formed in the three-dimensional color space by the color nodes of the first three-dimensional lookup table; Δd<0 indicates that the color node of the i-th pixel lies inside the convex hull, Δd=0 indicates that it lies on the convex hull, and Δd>0 indicates that it lies outside the convex hull.
在一种可能的设计中,基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,包括:In a possible design, the first color value and the second color value are fused based on the predetermined fusion weight to obtain the color value of the ith pixel point after the gamut mapping, including:
通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:The color value of the ith pixel point after the gamut mapping is determined by:
Aout = A1 * Wblending + A2 * (1 - Wblending);
其中,Aout表示经过色域映射后的所述第i个像素点的颜色值,A1表示所述第一颜色值,A2表示所述第二颜色值,Wblending表示所述融合权重。Wherein A out represents a color value of the i-th pixel point after gamut mapping, A 1 represents the first color value, A 2 represents the second color value, and W blending represents the fusion weight.
本申请实施例还提供了一种颜色管理方法,该方法可以由电子设备实现。该方法包括:The embodiment of the present application further provides a color management method, which may be implemented by an electronic device. The method includes:
Obtain an image to be processed, and determine an adjustment policy for adjusting the pixel saturation of the image to be processed;
针对所述待处理图像中的第j个像素点的亮度饱和度色调YSH值分别进行如下调整:The brightness saturation hue YSH values of the jth pixel in the image to be processed are respectively adjusted as follows:
保持第j个像素点色调H值不变,基于所述调整策略调整第j个像素点的饱和度S值;Keeping the jth pixel dot hue H value unchanged, adjusting the saturation S value of the jth pixel point based on the adjustment strategy;
input the S value of the j-th pixel before adjustment, the adjusted S value, and the luminance Y of the j-th pixel into a predetermined quadratic function corresponding to the H value of the j-th pixel to obtain the compensated luminance Y value of the j-th pixel; j ranges over the positive integers not greater than N.
The color management approach provided in the embodiments of the present application can resolve, in a simple manner, the luminance change that occurs when saturation is adjusted in the YSH color space and the display-quality problems caused by that luminance change.
在一种可能的设计中,所述第j个像素点的H值对应的二次方程函数满足如下公式所述的条件:In a possible design, the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
Figure PCTCN2016108729-appb-000034
其中,Yout表示输出的所述第j个像素点的亮度值,Yin表示输入的所述第j个像素点的亮度值,ΔY表示亮度补偿值,a1、a2、a3、a4、a5、a6分别表示二次方程函数系数,Sin表示调整前的所述第j个像素点的饱和度值,Sout表示经过调整后的所述第j个像素点的饱和度值。Wherein, Y out represents the j-th output luminance values of pixels, Y in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4 , a5, A6 represents a quadratic function function coefficient, S in represents a saturation value of the j-th pixel point before adjustment, and S out represents a saturation value of the adjusted j-th pixel point.
在一种可能的设计中,所述二次方程函数系数通过如下方式确定:In one possible design, the quadratic function coefficients are determined as follows:
When a quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained in advance by solving, based on a least-squares fitting algorithm, using the saturation value of the j-th pixel before adjustment, the saturation value of the j-th pixel after adjustment, and the luminance compensation value; or,

when no quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained by interpolating the coefficients of the quadratic functions respectively corresponding to the two hue values nearest to the H value of the j-th pixel.
在一种可能的设计中,在不存在所述第j个像素点的H值对应的二次方程函数情况下,所述a1、a2、a3、a4、a5、a6通过如下公式得到:In one possible design, in the case where there is no quadratic function corresponding to the H value of the jth pixel, the a1, a2, a3, a4, a5, a6 are obtained by the following formula:
Figure PCTCN2016108729-appb-000035
ak denotes the k-th coefficient among a1, a2, a3, a4, a5 and a6; H denotes the hue value of the j-th pixel; Hp1 and Hp2 denote the two hue values nearest to the H value of the j-th pixel, with Hp1 < H < Hp2; a(p1,k) denotes the k-th coefficient of the quadratic function corresponding to Hp1, and a(p2,k) denotes the k-th coefficient of the quadratic function corresponding to Hp2.
本申请实施例还提供了一种图像缩放方法,该方法可以由电子设备实现,该方法包括:The embodiment of the present application further provides an image scaling method, which may be implemented by an electronic device, and the method includes:
确定所述待处理图像的缩放倍数以及所述待处理图像的最高频率;Determining a zoom factor of the image to be processed and a highest frequency of the image to be processed;
determine, according to a preset lookup table, the number of taps of the bicubic filter corresponding to the zoom factor and to the highest frequency of the image to be processed; and determine the coefficients of the cubic function corresponding to the bicubic filter according to the highest frequency of the image to be processed;
根据所述抽头数量以及确定的三次方程函数对所述待处理图像进行缩放。The image to be processed is scaled according to the number of taps and the determined cubic function.
本申请实施例提供的缩放方法,能够基于图像场景信息及图像最高频率进行自适应抗混叠,从而提高了图像的优化效果。The scaling method provided by the embodiment of the present application can perform adaptive anti-aliasing based on image scene information and the highest frequency of the image, thereby improving the image optimization effect.
在一种可能的设计中,所述双立方滤波器对应的三次方程函数满足如下公式所示的条件:In one possible design, the cubic function corresponding to the bicubic filter satisfies the conditions shown in the following formula:
Figure PCTCN2016108729-appb-000036
Figure PCTCN2016108729-appb-000037
其中,x表示基于抽头数量所采样选择的采样像素点的位置,k(x)表示采样像素点对应的权重值,dB、dC分别表示三次方程函数的系数;B、C均为常数。Where x represents the position of the sampled pixel selected based on the number of taps, k(x) represents the weight value corresponding to the sampled pixel point, and dB and dC represent the coefficients of the cubic function, respectively; B and C are constants.
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。 Those skilled in the art will appreciate that embodiments of the present application can be provided as a method, system, or computer program product. Thus, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware. Moreover, the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
The present application is described with reference to the flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present application.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. The present application is also intended to cover such modifications and variations, provided that they fall within the scope of the claims of the present application and their equivalent technologies.

Claims (39)

  1. 一种图像显示增强方法,其特征在于,包括:An image display enhancement method, comprising:
    获取待显示图像的场景信息,所述待显示图像的场景信息是对所述待显示图像进行场景分析后得到的;Obtaining scene information of the image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed;
    基于所述场景信息对所述待显示图像执行如下至少一种处理:色域映射、颜色管理、对比度增强、锐化或缩放;Performing at least one of the following processes on the image to be displayed based on the scene information: gamut mapping, color management, contrast enhancement, sharpening, or scaling;
    显示经过处理后的所述待显示图像。The processed image to be displayed is displayed.
  2. 如权利要求1所述的方法,其特征在于,所述场景信息包括在所述待显示图像的元数据Metadata中、或者可交换图像文件EXIF数据区域中、或者厂商注释makernotes字段中。The method of claim 1, wherein the scene information is included in metadata Metadata of the image to be displayed, or in an exchangeable image file EXIF data area, or in a vendor note makernotes field.
  3. 如权利要求1或2所述的方法,其特征在于,所述场景信息包括以下至少一项:感光度ISO值、光圈值、快门时间或曝光EV值。The method according to claim 1 or 2, wherein the scene information comprises at least one of the following: sensitivity ISO value, aperture value, shutter time or exposure EV value.
  4. 如权利要求1至3任一项所述的方法,其特征在于,所述基于所述场景信息对所述待显示图像执行色域映射,包括:The method according to any one of claims 1 to 3, wherein the performing gamut mapping on the image to be displayed based on the scene information comprises:
    确定所述场景信息对应的第一三维查找表以及第二三维查找表,所述第一三维查找表用于保持对所述待显示图像中需保护的颜色进行色域映射后与色域映射前相同,所述第二三维查找表用于对所述待显示图像中无需保护的颜色进行调整;Determining a first three-dimensional lookup table corresponding to the scene information, and a second three-dimensional lookup table, where the first three-dimensional lookup table is configured to maintain color gamut mapping and color gamut mapping before the color to be protected in the image to be displayed Similarly, the second three-dimensional lookup table is configured to adjust a color that is not protected in the image to be displayed;
    确定所述待显示图像包括的N个像素点中第i个像素点的颜色对应在所述第一三维查找表中的第一颜色值,并确定所述第i个像素点的颜色对应在所述第二三维查找表中的第二颜色值;Determining, that the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds to a first color value in the first three-dimensional lookup table, and determining that the color of the i-th pixel point corresponds to Describe a second color value in the second three-dimensional lookup table;
    基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,所述i取遍不大于N的正整数。The first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  5. 如权利要求3所述的方法,其特征在于,所述方法还包括:The method of claim 3, wherein the method further comprises:
    在所述第一三维查找表中不包括所述第i个像素点的颜色对应颜色值时,基于通过三维插值算法对第一三维查找表中包括的颜色进行插值得到所述第i 个像素点的颜色对应颜色值;或者When the color corresponding to the color of the i-th pixel point is not included in the first three-dimensional lookup table, the i-th is obtained by interpolating colors included in the first three-dimensional lookup table by a three-dimensional interpolation algorithm The color of the pixel corresponds to the color value; or
    在所述第二三维查找表中不包括所述第i个像素点的颜色对应颜色值时,基于通过三维插值算法对第二三维查找表中包括的颜色进行插值得到所述第i个像素点的颜色对应颜色值。When the color corresponding color value of the i-th pixel point is not included in the second three-dimensional lookup table, the i-th pixel point is obtained by interpolating colors included in the second three-dimensional lookup table by using a three-dimensional interpolation algorithm The color corresponds to the color value.
  6. 如权利要求4或5所述的方法,其特征在于,所述融合权重通过如下公式确定:The method according to claim 4 or 5, wherein the fusion weight is determined by the following formula:
Figure PCTCN2016108729-appb-100001
where Wblending denotes the fusion weight; Tblending denotes a threshold used to control how smoothly the transition from the adjusted colors to the protected colors proceeds; and Δd denotes the shortest distance between the color node, in the three-dimensional color space, corresponding to the color of the i-th pixel and the convex hull formed in the three-dimensional color space by the color nodes of the first three-dimensional lookup table; Δd<0 indicates that the color node of the i-th pixel lies inside the convex hull, Δd=0 indicates that it lies on the convex hull, and Δd>0 indicates that it lies outside the convex hull.
  7. 如权利要求4至6任一项所述的方法,其特征在于,基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,包括:The method according to any one of claims 4 to 6, wherein the first color value and the second color value are fused based on the predetermined fusion weight to obtain the ith pixel after gamut mapping The color value of the point, including:
    通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:The color value of the ith pixel point after the gamut mapping is determined by:
Aout = A1 * Wblending + A2 * (1 - Wblending);
    其中,Aout表示经过色域映射后的所述第i个像素点的颜色值,A1表示所述第一颜色值,A2表示所述第二颜色值,Wblending表示所述融合权重。Wherein A out represents a color value of the i-th pixel point after gamut mapping, A 1 represents the first color value, A 2 represents the second color value, and W blending represents the fusion weight.
  8. 如权利要求1至7任一项所述的方法,其特征在于,所述基于所述场景信息对所述待显示图像执行颜色管理,包括:The method according to any one of claims 1 to 7, wherein the performing color management on the image to be displayed based on the scene information comprises:
    获取所述场景信息对应的调整所述待显示图像的像素点饱和度的调整策略; Acquiring an adjustment strategy for adjusting pixel saturation of the image to be displayed corresponding to the scene information;
    针对所述待显示图像中的第j个像素点的亮度饱和度色调YSH值分别进行如下调整:The brightness saturation hue YSH values of the jth pixel points in the image to be displayed are respectively adjusted as follows:
    保持第j个像素点色调H值不变,基于所述调整策略调整第j个像素点的饱和度S值;Keeping the jth pixel dot hue H value unchanged, adjusting the saturation S value of the jth pixel point based on the adjustment strategy;
    将调整前的所述第j个像素点的S值、调整后的S值以及所述第j个像素点的亮度Y输入预确定的第j个像素点的H值对应的二次方程函数得到经过补偿后的所述第j个像素点的亮度Y值;所述j取遍不大于N的正整数。Obtaining a quadratic function function corresponding to the S value of the jth pixel point before adjustment, the adjusted S value, and the brightness Y of the jth pixel point corresponding to the H value of the predetermined jth pixel point The compensated luminance Y value of the jth pixel; the j takes a positive integer not greater than N.
  9. 如权利要求8所述的方法,其特征在于,所述第j个像素点的H值对应的二次方程函数满足如下公式所述的条件:The method according to claim 8, wherein the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
Figure PCTCN2016108729-appb-100002
    其中,Yout表示输出的所述第j个像素点的亮度值,Yin表示输入的所述第j个像素点的亮度值,ΔY表示亮度补偿值,a1、a2、a3、a4、a5、a6分别表示二次方程函数系数,Sin表示调整前的所述第j个像素点的饱和度值,Sout表示经过调整后的所述第j个像素点的饱和度值。Wherein, Y out represents the j-th output luminance values of pixels, Y in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4 , a5, A6 represents a quadratic function function coefficient, S in represents a saturation value of the j-th pixel point before adjustment, and S out represents a saturation value of the adjusted j-th pixel point.
  10. 如权利要求9所述的方法,其特征在于,所述二次方程函数系数通过如下方式确定:The method of claim 9 wherein said quadratic function coefficients are determined as follows:
    在存在所述第j个像素点的H值对应的二次方程函数的情况下,所述a1、a2、a3、a4、a5、a6是预先分别使用调整前的所述第j个像素点的饱和度值、经过调整后的所述第j个像素点的饱和度值以及所述亮度补偿值,基于最小二乘拟合算法求解得到的;或者,In the case where there is a quadratic function corresponding to the H value of the jth pixel, the a1, a2, a3, a4, a5, a6 are respectively used in advance using the jth pixel before adjustment a saturation value, an adjusted saturation value of the jth pixel point, and the brightness compensation value, which are obtained based on a least squares fitting algorithm; or
    在不存在所述第j个像素点的H值对应的二次方程函数情况下,所述a1、a2、a3、a4、a5、a6是通过与所述第j个像素点的H值最近邻的两个色调值分别对应的二次方程函数的系数插值得到。In the case where there is no quadratic function corresponding to the H value of the jth pixel, the a1, a2, a3, a4, a5, a6 are closest to the H value of the jth pixel The two tonal values are respectively obtained by interpolating the coefficients of the quadratic function corresponding to each.
11. The method according to claim 10, wherein, when no quadratic function corresponding to the H value of the j-th pixel exists, a1, a2, a3, a4, a5 and a6 are obtained by the following formula:
Figure PCTCN2016108729-appb-100003
    ak表示a1、a2、a3、a4、a5、a6中第k个系数;H表示第j个像素点的饱和度值,Hp1、Hp2表示所述第j个像素点的H值最近邻的两个色调值,Hp1<H<Hp2,a(p1,k)表示Hp1对应的二次方程函数的第k个系数,a(p2,k)表示Hp2对应的二次方程函数的第k个系数。a k represents the kth coefficient in a1, a2, a3, a4, a5, a6; H represents the saturation value of the jth pixel point, and H p1 and H p2 represent the nearest neighbor of the H value of the jth pixel point The two tonal values, H p1 <H<H p2 , a (p1, k) represents the kth coefficient of the quadratic function corresponding to H p1 , and a (p2, k) represents the quadratic function corresponding to H p2 The kth coefficient.
  12. 如权利要求1至11任一项所述的方法,其特征在于,所述基于所述场景信息对所述待显示图像执行缩放,包括:The method according to any one of claims 1 to 11, wherein the performing zooming on the image to be displayed based on the scene information comprises:
    确定所述待显示图像的缩放倍数,以及根据所述场景信息确定所述待显示图像中周期性图案的最高频率;Determining a zoom factor of the image to be displayed, and determining a highest frequency of the periodic pattern in the image to be displayed according to the scene information;
    根据预设的查找表确定所述缩放倍数以及所述待显示图像中周期性图案的最高频率对应的双立方滤波器的抽头数量;以及根据所述待显示图像中周期性图案的最高频率确定双立方滤波器对应的三次方程函数的系数;Determining, according to a preset lookup table, the zoom factor and the number of taps of the bicubic filter corresponding to the highest frequency of the periodic pattern in the image to be displayed; and determining the double according to the highest frequency of the periodic pattern in the image to be displayed The coefficient of the cubic equation function corresponding to the cubic filter;
    根据所述抽头数量以及确定的三次方程函数对所述待显示图像进行缩放。The image to be displayed is scaled according to the number of taps and the determined cubic function.
  13. 如权利要求12所述的方法,其特征在于,所述双立方滤波器对应的三次方程函数满足如下公式所示的条件:The method according to claim 12, wherein the cubic function corresponding to the bicubic filter satisfies the condition shown by the following formula:
Figure PCTCN2016108729-appb-100004
Figure PCTCN2016108729-appb-100005
where x denotes the position of the sampling pixels selected based on the number of taps, k(x) denotes the weight value corresponding to a sampling pixel, dB and dC denote the coefficients of the cubic function, and B and C are constants.
  14. 一种图像显示增强装置,其特征在于,包括:An image display enhancement device, comprising:
    显示增强模块,用于获取待显示图像的场景信息,所述待显示图像的场景信息是对所述待显示图像进行场景分析后得到的;并基于所述场景信息对所述待显示图像执行如下至少一种处理:色域映射、颜色管理、对比度增强、锐化或缩放;a display enhancement module, configured to acquire scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; and performing, according to the scene information, the image to be displayed At least one type of processing: gamut mapping, color management, contrast enhancement, sharpening, or scaling;
    显示模块,用于显示经过所述显示增强模块处理后的所述待显示图像。And a display module, configured to display the image to be displayed after being processed by the display enhancement module.
  15. 如权利要求14所述的装置,其特征在于,所述场景信息包括在所述待显示图像的元数据Metadata中、或者可交换图像文件EXIF数据区域中、或者厂商注释makernotes字段中。The apparatus according to claim 14, wherein said scene information is included in metadata Metadata of said image to be displayed, or in an exchangeable image file EXIF data area, or in a vendor comment makernotes field.
  16. 如权利要求14或15所述的装置,其特征在于,所述场景信息包括以下至少一项:感光度ISO值、光圈值、快门时间或曝光EV值。The apparatus according to claim 14 or 15, wherein the scene information comprises at least one of the following: sensitivity ISO value, aperture value, shutter time or exposure EV value.
  17. 如权利要求16所述的装置,其特征在于,所述显示增强模块,在基于所述场景信息对所述待显示图像执行色域映射,具体用于:The device according to claim 16, wherein the display enhancement module performs gamut mapping on the image to be displayed based on the scene information, specifically for:
    确定所述场景信息对应的第一三维查找表以及第二三维查找表,所述第一三维查找表用于保持对所述待显示图像中需保护的颜色进行色域映射后与色域映射前相同,所述第二三维查找表用于对所述待显示图像中无需保护的颜色进行调整;Determining a first three-dimensional lookup table corresponding to the scene information, and a second three-dimensional lookup table, where the first three-dimensional lookup table is configured to maintain color gamut mapping and color gamut mapping before the color to be protected in the image to be displayed Similarly, the second three-dimensional lookup table is configured to adjust a color that is not protected in the image to be displayed;
    确定所述待显示图像包括的N个像素点中第i个像素点的颜色对应在所述第一三维查找表中的第一颜色值,并确定所述第i个像素点的颜色对应在所述第二三维查找表中的第二颜色值;Determining, that the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds to a first color value in the first three-dimensional lookup table, and determining that the color of the i-th pixel point corresponds to Describe a second color value in the second three-dimensional lookup table;
    基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,所述i取遍不大于N的正整数。The first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  18. 如权利要求17所述的装置,其特征在于,所述显示增强模块,还用于:The device according to claim 17, wherein the display enhancement module is further configured to:
    在所述第一三维查找表中不包括所述第i个像素点的颜色对应颜色值时,基于通过三维插值算法对第一三维查找表中包括的颜色进行插值得到所述第i 个像素点的颜色对应颜色值;或者When the color corresponding to the color of the i-th pixel point is not included in the first three-dimensional lookup table, the i-th is obtained by interpolating colors included in the first three-dimensional lookup table by a three-dimensional interpolation algorithm The color of the pixel corresponds to the color value; or
    在所述第二三维查找表中不包括所述第i个像素点的颜色对应颜色值时,基于通过三维插值算法对第二三维查找表中包括的颜色进行插值得到所述第i个像素点的颜色对应颜色值。When the color corresponding color value of the i-th pixel point is not included in the second three-dimensional lookup table, the i-th pixel point is obtained by interpolating colors included in the second three-dimensional lookup table by using a three-dimensional interpolation algorithm The color corresponds to the color value.
  19. 如权利要求17或18所述的装置,其特征在于,所述显示增强模块,还用于通过如下公式确定所述融合权重:The device according to claim 17 or 18, wherein the display enhancement module is further configured to determine the fusion weight by the following formula:
Figure PCTCN2016108729-appb-100006
where Wblending denotes the fusion weight; Tblending denotes a threshold used to control how smoothly the transition from the adjusted colors to the protected colors proceeds; and Δd denotes the shortest distance between the color node, in the three-dimensional color space, corresponding to the color of the i-th pixel and the convex hull formed in the three-dimensional color space by the color nodes of the first three-dimensional lookup table; Δd<0 indicates that the color node of the i-th pixel lies inside the convex hull, Δd=0 indicates that it lies on the convex hull, and Δd>0 indicates that it lies outside the convex hull.
  20. 如权利要求17至19任一项所述的装置,其特征在于,所述显示增强模块,在基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值时,具体用于:The apparatus according to any one of claims 17 to 19, wherein the display enhancement module merges the first color value and the second color value to obtain a gamut mapping based on a predetermined fusion weight When the color value of the i-th pixel is specifically used for:
    通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:The color value of the ith pixel point after the gamut mapping is determined by:
Aout = A1 * Wblending + A2 * (1 - Wblending);
    其中,Aout表示经过色域映射后的所述第i个像素点的颜色值,A1表示所述第一颜色值,A2表示所述第二颜色值,Wblending表示所述融合权重。Wherein A out represents a color value of the i-th pixel point after gamut mapping, A 1 represents the first color value, A 2 represents the second color value, and W blending represents the fusion weight.
  21. 如权利要求16至20任一项所述的装置,其特征在于,所述显示增强模块,在基于所述场景信息对所述待显示图像执行颜色管理时,具体用于:The device according to any one of claims 16 to 20, wherein the display enhancement module is configured to: when performing color management on the image to be displayed based on the scene information, specifically:
    获取所述场景信息对应的调整所述待显示图像的像素点饱和度的调整策略; Acquiring an adjustment strategy for adjusting pixel saturation of the image to be displayed corresponding to the scene information;
    针对所述待显示图像中的第j个像素点的亮度饱和度色调YSH值分别进行如下调整:The brightness saturation hue YSH values of the jth pixel points in the image to be displayed are respectively adjusted as follows:
    保持第j个像素点色调H值不变,基于所述调整策略调整第j个像素点的饱和度S值;Keeping the jth pixel dot hue H value unchanged, adjusting the saturation S value of the jth pixel point based on the adjustment strategy;
    将调整前的所述第j个像素点的S值、调整后的S值以及所述第j个像素点的亮度Y输入预确定的第j个像素点的H值对应的二次方程函数得到经过补偿后的所述第j个像素点的亮度Y值;所述j取遍不大于N的正整数。Obtaining a quadratic function function corresponding to the S value of the jth pixel point before adjustment, the adjusted S value, and the brightness Y of the jth pixel point corresponding to the H value of the predetermined jth pixel point The compensated luminance Y value of the jth pixel; the j takes a positive integer not greater than N.
  22. 如权利要求21所述的装置,其特征在于,所述第j个像素点的H值对应的二次方程函数满足如下公式所述的条件:The apparatus according to claim 21, wherein the quadratic function corresponding to the H value of the jth pixel point satisfies the condition described in the following formula:
Figure PCTCN2016108729-appb-100007
    其中,Yout表示输出的所述第j个像素点的亮度值,Yin表示输入的所述第j个像素点的亮度值,ΔY表示亮度补偿值,a1、a2、a3、a4、a5、a6分别表示二次方程函数系数,Sin表示调整前的所述第j个像素点的饱和度值,Sout表示经过调整后的所述第j个像素点的饱和度值。Wherein, Y out represents the j-th output luminance values of pixels, Y in represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4 , a5, A6 represents a quadratic function function coefficient, S in represents a saturation value of the j-th pixel point before adjustment, and S out represents a saturation value of the adjusted j-th pixel point.
  23. 如权利要求22所述的装置,其特征在于,所述显示增强模块,还用于通过如下方式确定所述二次方程函数系数:The apparatus according to claim 22, wherein said display enhancement module is further configured to determine said quadratic function coefficients by:
    在存在所述第j个像素点的H值对应的二次方程函数的情况下,所述a1、a2、a3、a4、a5、a6是预先分别使用调整前的所述第j个像素点的饱和度值、经过调整后的所述第j个像素点的饱和度值以及所述亮度补偿值,基于最小二乘拟合算法求解得到的;或者,In the case where there is a quadratic function corresponding to the H value of the jth pixel, the a1, a2, a3, a4, a5, a6 are respectively used in advance using the jth pixel before adjustment a saturation value, an adjusted saturation value of the jth pixel point, and the brightness compensation value, which are obtained based on a least squares fitting algorithm; or
    在不存在所述第j个像素点的H值对应的二次方程函数情况下,所述a1、a2、a3、a4、a5、a6是通过与所述第j个像素点的H值最近邻的两个色调值分别对应的二次方程函数的系数插值得到。In the case where there is no quadratic function corresponding to the H value of the jth pixel, the a1, a2, a3, a4, a5, a6 are closest to the H value of the jth pixel The two tonal values are respectively obtained by interpolating the coefficients of the quadratic function corresponding to each.
24. The apparatus according to claim 23, wherein, when no quadratic function corresponding to the H value of the j-th pixel exists, the display enhancement module is specifically configured to determine a1, a2, a3, a4, a5 and a6 by the following formula:
Figure PCTCN2016108729-appb-100008
    ak表示a1、a2、a3、a4、a5、a6中第k个系数;H表示第j个像素点的饱和度值,Hp1、Hp2表示所述第j个像素点的H值最近邻的两个色调值,Hp1<H<Hp2,a(p1,k)表示Hp1对应的二次方程函数的第k个系数,a(p2,k)表示Hp2对应的二次方程函数的第k个系数。a k represents the kth coefficient in a1, a2, a3, a4, a5, a6; H represents the saturation value of the jth pixel point, and H p1 and H p2 represent the nearest neighbor of the H value of the jth pixel point The two tonal values, H p1 <H<H p2 , a (p1, k) represents the kth coefficient of the quadratic function corresponding to H p1 , and a (p2, k) represents the quadratic function corresponding to H p2 The kth coefficient.
  25. 如权利要求16至24任一项所述的装置,其特征在于,所述显示增强模块,在基于所述场景信息对所述待显示图像执行缩放时,具体用于:The device according to any one of claims 16 to 24, wherein the display enhancement module is configured to: when performing scaling on the image to be displayed based on the scene information, specifically:
    确定所述待显示图像的缩放倍数,以及根据所述场景信息确定所述待显示图像中周期性图案的最高频率;Determining a zoom factor of the image to be displayed, and determining a highest frequency of the periodic pattern in the image to be displayed according to the scene information;
    根据预设的查找表确定所述缩放倍数以及所述待显示图像中周期性图案的最高频率对应的双立方滤波器的抽头数量;以及根据所述待显示图像中周期性图案的最高频率确定双立方滤波器对应的三次方程函数的系数;Determining, according to a preset lookup table, the zoom factor and the number of taps of the bicubic filter corresponding to the highest frequency of the periodic pattern in the image to be displayed; and determining the double according to the highest frequency of the periodic pattern in the image to be displayed The coefficient of the cubic equation function corresponding to the cubic filter;
    根据所述抽头数量以及确定的三次方程函数对所述待显示图像进行缩放。The image to be displayed is scaled according to the number of taps and the determined cubic function.
  26. 如权利要求25所述的装置,其特征在于,所述双立方滤波器对应的三次方程函数满足如下公式所示的条件:The apparatus according to claim 25, wherein the cubic function corresponding to said bicubic filter satisfies the condition shown by the following formula:
Figure PCTCN2016108729-appb-100009
Figure PCTCN2016108729-appb-100010
where x denotes the position of the sampling pixels selected based on the number of taps, k(x) denotes the weight value corresponding to a sampling pixel, dB and dC denote the coefficients of the cubic function, and B and C are constants.
  27. 一种终端,其特征在于,包括:A terminal, comprising:
    处理器,用于获取待显示图像的场景信息,所述待显示图像的场景信息是对所述待显示图像进行场景分析后得到的;基于所述场景信息对所述待显示图像执行如下至少一种处理:色域映射、颜色管理、对比度增强、锐化或缩放;a processor, configured to acquire scene information of an image to be displayed, where the scene information of the image to be displayed is obtained by performing scene analysis on the image to be displayed; and performing at least one of the following on the image to be displayed based on the scene information Processing: gamut mapping, color management, contrast enhancement, sharpening or scaling;
    显示器,用于显示经过所述处理处理后的所述待显示图像。And a display for displaying the image to be displayed after the processing.
  28. 如权利要求27所述的终端,其特征在于,所述场景信息包括在所述待显示图像的元数据Metadata中、或者可交换图像文件EXIF数据区域中、或者厂商注释makernotes字段中。The terminal according to claim 27, wherein the scene information is included in metadata Metadata of the image to be displayed, or in an exchangeable image file EXIF data area, or in a vendor comment makernotes field.
  29. 如权利要求27或28所述的终端,其特征在于,所述场景信息包括以下至少一项:感光度ISO值、光圈值、快门时间或曝光EV值。The terminal according to claim 27 or 28, wherein the scene information comprises at least one of the following: sensitivity ISO value, aperture value, shutter time or exposure EV value.
  30. 如权利要求27至29任一项所述的终端,其特征在于,所述处理器,在基于所述场景信息对所述待显示图像执行色域映射,具体用于:The terminal according to any one of claims 27 to 29, wherein the processor performs gamut mapping on the image to be displayed based on the scene information, specifically for:
    确定所述场景信息对应的第一三维查找表以及第二三维查找表,所述第一三维查找表用于保持对所述待显示图像中需保护的颜色进行色域映射后与色域映射前相同,所述第二三维查找表用于对所述待显示图像中无需保护的颜色进行调整;Determining a first three-dimensional lookup table corresponding to the scene information, and a second three-dimensional lookup table, where the first three-dimensional lookup table is configured to maintain color gamut mapping and color gamut mapping before the color to be protected in the image to be displayed Similarly, the second three-dimensional lookup table is configured to adjust a color that is not protected in the image to be displayed;
    确定所述待显示图像包括的N个像素点中第i个像素点的颜色对应在所述第一三维查找表中的第一颜色值,并确定所述第i个像素点的颜色对应在所述第二三维查找表中的第二颜色值;Determining, that the color of the i-th pixel point among the N pixel points included in the image to be displayed corresponds to a first color value in the first three-dimensional lookup table, and determining that the color of the i-th pixel point corresponds to Describe a second color value in the second three-dimensional lookup table;
    基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值,所述i取遍不大于N的正整数。The first color value and the second color value are fused based on the predetermined fusion weight to obtain a color value of the ith pixel point after the gamut mapping, and the i takes a positive integer not greater than N.
  31. 如权利要求30所述的终端,其特征在于,所述处理器,还用于:The terminal according to claim 30, wherein the processor is further configured to:
    在所述第一三维查找表中不包括所述第i个像素点的颜色对应颜色值时,基于通过三维插值算法对第一三维查找表中包括的颜色进行插值得到所述第i个像素点的颜色对应颜色值;或者 When the color corresponding color value of the i-th pixel point is not included in the first three-dimensional lookup table, the i-th pixel point is obtained by interpolating colors included in the first three-dimensional lookup table by using a three-dimensional interpolation algorithm. The color corresponds to the color value; or
    在所述第二三维查找表中不包括所述第i个像素点的颜色对应颜色值时,基于通过三维插值算法对第二三维查找表中包括的颜色进行插值得到所述第i个像素点的颜色对应颜色值。When the color corresponding color value of the i-th pixel point is not included in the second three-dimensional lookup table, the i-th pixel point is obtained by interpolating colors included in the second three-dimensional lookup table by using a three-dimensional interpolation algorithm The color corresponds to the color value.
  32. 如权利要求30或31所述的终端,其特征在于,所述融合权重通过如下公式确定:The terminal according to claim 30 or 31, wherein the fusion weight is determined by the following formula:
Figure PCTCN2016108729-appb-100011
where Wblending denotes the fusion weight; Tblending denotes a threshold used to control how smoothly the transition from the adjusted colors to the protected colors proceeds; and Δd denotes the shortest distance between the color node, in the three-dimensional color space, corresponding to the color of the i-th pixel and the convex hull formed in the three-dimensional color space by the color nodes of the first three-dimensional lookup table; Δd<0 indicates that the color node of the i-th pixel lies inside the convex hull, Δd=0 indicates that it lies on the convex hull, and Δd>0 indicates that it lies outside the convex hull.
  33. 如权利要求30至32任一项所述的终端,其特征在于,所述处理器,在基于预确定的融合权重将所述第一颜色值以及第二颜色值融合得到经过色域映射后的所述第i个像素点的颜色值时,具体用于:The terminal according to any one of claims 30 to 32, wherein the processor fuses the first color value and the second color value to obtain a gamut-map based on a predetermined fusion weight The color value of the i-th pixel point is specifically used for:
    通过如下方式确定经过色域映射后的所述第i个像素点的颜色值:The color value of the ith pixel point after the gamut mapping is determined by:
Aout = A1 * Wblending + A2 * (1 - Wblending);
    其中,Aout表示经过色域映射后的所述第i个像素点的颜色值,A1表示所述第一颜色值,A2表示所述第二颜色值,Wblending表示所述融合权重。Wherein A out represents a color value of the i-th pixel point after gamut mapping, A 1 represents the first color value, A 2 represents the second color value, and W blending represents the fusion weight.
  34. 如权利要求27至33任一项所述的终端,其特征在于,所述处理器,在基于所述场景信息对所述待显示图像执行颜色管理,具体用于:The terminal according to any one of claims 27 to 33, wherein the processor performs color management on the image to be displayed based on the scene information, specifically for:
    获取所述场景信息对应的调整所述待显示图像的像素点饱和度的调整策略;Acquiring an adjustment strategy for adjusting pixel saturation of the image to be displayed corresponding to the scene information;
perform the following adjustments on the luminance saturation hue (YSH) value of the j-th pixel in the image to be displayed:
    保持第j个像素点色调H值不变,基于所述调整策略调整第j个像素点的饱和度S值;Keeping the jth pixel dot hue H value unchanged, adjusting the saturation S value of the jth pixel point based on the adjustment strategy;
    将调整前的所述第j个像素点的S值、调整后的S值以及所述第j个像素点的亮度Y输入预确定的第j个像素点的H值对应的二次方程函数得到经过补偿后的所述第j个像素点的亮度Y值;所述j取遍不大于N的正整数。Obtaining a quadratic function function corresponding to the S value of the jth pixel point before adjustment, the adjusted S value, and the brightness Y of the jth pixel point corresponding to the H value of the predetermined jth pixel point The compensated luminance Y value of the jth pixel; the j takes a positive integer not greater than N.
  35. The terminal according to claim 34, wherein the quadratic equation function corresponding to the H value of the j-th pixel point satisfies the condition described by the following formula:
    Figure PCTCN2016108729-appb-100012 (formula image defining the compensated luminance in terms of Yin, ΔY, Sin, Sout and the coefficients a1 to a6)
    wherein Yout represents the output luminance value of the j-th pixel point, Yin represents the input luminance value of the j-th pixel point, ΔY represents the luminance compensation value, a1, a2, a3, a4, a5 and a6 respectively represent the coefficients of the quadratic equation function, Sin represents the saturation value of the j-th pixel point before adjustment, and Sout represents the saturation value of the j-th pixel point after adjustment.
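The compensation of claim 35 keeps hue fixed, changes saturation, and then corrects luminance through a hue-specific quadratic. Because the exact formula sits in the unreproduced formula image, the Python sketch below only assumes one form consistent with its symbols: the six coefficients define quadratics in the saturation before and after adjustment, and their difference is the compensation ΔY added to Yin. The function names and the split of a1 to a6 into two quadratics are assumptions.

    def compensate_luminance(y_in, s_in, s_out, coeffs):
        """Assumed claim-35 compensation: Yout = Yin + delta_y, where delta_y
        is the difference of two quadratics in saturation built from the
        hue-specific coefficients a1..a6. The patented formula may differ."""
        a1, a2, a3, a4, a5, a6 = coeffs
        q_before = a1 * s_in ** 2 + a2 * s_in + a3    # quadratic at the old saturation
        q_after = a4 * s_out ** 2 + a5 * s_out + a6   # quadratic at the new saturation
        delta_y = q_after - q_before
        return y_in + delta_y

    def adjust_pixel_ysh(y, s, h, adjust_saturation, coeffs_for_hue):
        """Claim-34 flow for one YSH pixel: keep H, adjust S with the
        scene-dependent strategy, then compensate Y."""
        s_out = adjust_saturation(s)
        y_out = compensate_luminance(y, s, s_out, coeffs_for_hue(h))
        return y_out, s_out, h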
  36. The terminal according to claim 35, wherein the processor is further configured to determine the coefficients of the quadratic equation function as follows:
    when a quadratic equation function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5 and a6 are solved in advance based on a least-squares fitting algorithm, using the saturation value of the j-th pixel point before adjustment, the saturation value of the j-th pixel point after adjustment, and the luminance compensation value; or
    when no quadratic equation function corresponding to the H value of the j-th pixel point exists, a1, a2, a3, a4, a5 and a6 are obtained by interpolating the coefficients of the quadratic equation functions respectively corresponding to the two hue values nearest to the H value of the j-th pixel point.
  37. The terminal according to claim 36, wherein, when no quadratic equation function corresponding to the H value of the j-th pixel point exists, the processor is further configured to determine a1, a2, a3, a4, a5 and a6 by the following formula:
    Figure PCTCN2016108729-appb-100013 (formula image interpolating the coefficients from the two nearest hue values Hp1 and Hp2)
    wherein ak represents the k-th coefficient among a1, a2, a3, a4, a5, a6; H represents the hue value of the j-th pixel point; Hp1 and Hp2 represent the two hue values nearest to the H value of the j-th pixel point, with Hp1 < H < Hp2; a(p1,k) represents the k-th coefficient of the quadratic equation function corresponding to Hp1; and a(p2,k) represents the k-th coefficient of the quadratic equation function corresponding to Hp2.
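When a hue has no calibrated quadratic of its own, claim 37 interpolates the coefficients from the two nearest calibrated hues. The sketch below assumes plain linear interpolation in hue, which is consistent with the constraint Hp1 < H < Hp2; the exact weighting is defined by the formula image above and may differ.

    def interpolate_coefficients(h, h_p1, h_p2, coeffs_p1, coeffs_p2):
        """Assumed claim-37 interpolation: each coefficient a_k is linearly
        interpolated between its values at the nearest hues h_p1 < h < h_p2."""
        t = (h - h_p1) / (h_p2 - h_p1)
        return tuple(a_lo + t * (a_hi - a_lo)
                     for a_lo, a_hi in zip(coeffs_p1, coeffs_p2))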
  38. The terminal according to any one of claims 27 to 37, wherein, when performing scaling on the image to be displayed based on the scene information, the processor is specifically configured to:
    determine a zoom factor of the image to be displayed, and determine, according to the scene information, the highest frequency of a periodic pattern in the image to be displayed;
    determine, according to a preset lookup table, the number of taps of a bicubic filter corresponding to the zoom factor and to the highest frequency of the periodic pattern in the image to be displayed; and determine, according to the highest frequency of the periodic pattern in the image to be displayed, the coefficients of the cubic equation function corresponding to the bicubic filter;
    scale the image to be displayed according to the number of taps and the determined cubic equation function.
  39. The terminal according to claim 38, wherein the cubic equation function corresponding to the bicubic filter satisfies the conditions shown by the following formulas:
    Figure PCTCN2016108729-appb-100014 and Figure PCTCN2016108729-appb-100015 (formula images defining the tap weight k(x) and the coefficients dB and dC of the cubic equation function)
    wherein x represents the position of a sampling pixel point selected based on the number of taps, k(x) represents the weight value corresponding to the sampling pixel point, dB and dC respectively represent the coefficients of the cubic equation function, and B and C are both constants.
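Claims 38 and 39 describe scaling with a bicubic filter whose tap count comes from a lookup table and whose cubic equation function depends on two constants B and C. The patented kernel itself is in the unreproduced formula images; a widely used two-parameter cubic of the same shape is the Mitchell-Netravali kernel, sketched below in Python purely as an illustration of how tap weights k(x) and a tap-limited resampling step fit together. The function names, the default B = C = 1/3, and the border handling are assumptions.

    def bicubic_kernel(x, b=1.0 / 3.0, c=1.0 / 3.0):
        """Mitchell-Netravali cubic kernel k(x) with constants B and C,
        used here as an illustrative stand-in for the claimed cubic
        equation function."""
        x = abs(x)
        if x < 1.0:
            return ((12 - 9 * b - 6 * c) * x ** 3
                    + (-18 + 12 * b + 6 * c) * x ** 2
                    + (6 - 2 * b)) / 6.0
        if x < 2.0:
            return ((-b - 6 * c) * x ** 3
                    + (6 * b + 30 * c) * x ** 2
                    + (-12 * b - 48 * c) * x
                    + (8 * b + 24 * c)) / 6.0
        return 0.0

    def resample_row(row, scale, taps=4, b=1.0 / 3.0, c=1.0 / 3.0):
        """One-dimensional resampling of a row of samples: each output
        position averages `taps` neighbouring input samples weighted by the
        kernel, then normalises. `taps` plays the role of the claimed tap
        count; kernel widening for strong downscaling is omitted here."""
        out_len = int(round(len(row) * scale))
        out = []
        for i in range(out_len):
            src = i / scale                        # corresponding input position
            lo = int(src) - taps // 2 + 1
            acc, wsum = 0.0, 0.0
            for j in range(lo, lo + taps):
                w = bicubic_kernel(src - j, b, c)
                p = row[min(max(j, 0), len(row) - 1)]   # clamp at the borders
                acc += w * p
                wsum += w
            out.append(acc / wsum if wsum else p)
        return out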
PCT/CN2016/108729 2016-10-17 2016-12-06 Method and device for enhancing image display WO2018072270A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201680080613.8A CN108701351B (en) 2016-10-17 2016-12-06 Image display enhancement method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610903016.2 2016-10-17
CN201610903016 2016-10-17

Publications (1)

Publication Number Publication Date
WO2018072270A1 true WO2018072270A1 (en) 2018-04-26

Family

ID=62018080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108729 WO2018072270A1 (en) 2016-10-17 2016-12-06 Method and device for enhancing image display

Country Status (2)

Country Link
CN (1) CN108701351B (en)
WO (1) WO2018072270A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112530382B (en) * 2019-09-19 2022-05-13 华为技术有限公司 Method and device for adjusting picture color of electronic equipment
CN112015417B (en) * 2020-09-01 2023-08-08 中国银行股份有限公司 Method and device for determining theme colors of application programs
CN114584752B (en) * 2020-11-30 2024-02-02 华为技术有限公司 Image color restoration method and related equipment
CN113613007B (en) * 2021-07-19 2024-03-05 青岛信芯微电子科技股份有限公司 Three-dimensional color lookup table generation method and display device
CN115631250B (en) * 2021-08-10 2024-03-29 荣耀终端有限公司 Image processing method and electronic equipment
CN117116186B (en) * 2023-10-25 2024-01-16 深圳蓝普视讯科技有限公司 Ultra-high definition image display color gamut adjustment method, system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005210495A (en) * 2004-01-23 2005-08-04 Konica Minolta Photo Imaging Inc Image processing apparatus, method, and program
CN100573651C (en) * 2007-09-14 2009-12-23 北京中视中科光电技术有限公司 A kind of color domain mapping real-time and real-time treatment circuit
US9852499B2 (en) * 2013-12-13 2017-12-26 Konica Minolta Laboratory U.S.A., Inc. Automatic selection of optimum algorithms for high dynamic range image processing based on scene classification
CN104299196A (en) * 2014-10-11 2015-01-21 京东方科技集团股份有限公司 Image processing device and method and display device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1957371A (en) * 2004-05-31 2007-05-02 诺基亚公司 Method and system for viewing and enhancing images
CN103647958A (en) * 2013-12-23 2014-03-19 联想(北京)有限公司 Image processing method and device and electronic device
CN105450923A (en) * 2014-09-25 2016-03-30 索尼公司 Image processing method, image processing device and electronic device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114512094A (en) * 2020-11-16 2022-05-17 华为技术有限公司 Screen color adjusting method, device, terminal and computer readable storage medium
CN114518828A (en) * 2020-11-16 2022-05-20 华为技术有限公司 Screen color adjusting method, device, terminal and computer readable storage medium
CN114512094B (en) * 2020-11-16 2023-03-24 华为技术有限公司 Screen color adjusting method, device, terminal and computer readable storage medium
CN115359762A (en) * 2022-08-16 2022-11-18 广州文石信息科技有限公司 Ink screen display control method and device based on drive compensation
CN116703791A (en) * 2022-10-20 2023-09-05 荣耀终端有限公司 Image processing method, electronic device and readable medium
CN116703791B (en) * 2022-10-20 2024-04-19 荣耀终端有限公司 Image processing method, electronic device and readable medium

Also Published As

Publication number Publication date
CN108701351B (en) 2022-03-29
CN108701351A (en) 2018-10-23

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16919351

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16919351

Country of ref document: EP

Kind code of ref document: A1