CN116489377A - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN116489377A
CN116489377A CN202310422250.3A
Authority
CN
China
Prior art keywords
image
standard
target
determining
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310422250.3A
Other languages
Chinese (zh)
Inventor
陈杰
梅祯
王雨
杨晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority: CN202310422250.3A
Publication: CN116489377A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/98 Adaptive-dynamic-range coding [ADRC]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20208 High dynamic range [HDR] image processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and electronic equipment, and belongs to the technical field of image processing. The specific scheme comprises the following steps: acquiring a first image, wherein the first image is an image of a first standard; and performing target processing on the first image to obtain a second image, wherein the second image is an image of a second standard. The target processing comprises at least one of luminance mapping processing and color gamut mapping processing; the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.

Description

Image processing method and electronic device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and electronic equipment.
Background
With advances and developments in screen display technology, high dynamic range (High Dynamic Range, HDR) video is becoming the dominant display trend.
In the related art, in order for standard dynamic range (Standard Dynamic Range, SDR) video to exhibit the display effect of HDR video, the SDR video may be subjected to contrast enhancement and saturation enhancement.
However, although contrast enhancement can brighten the bright areas of an SDR video picture, the peak brightness remains far below the standard maximum brightness of HDR video; likewise, the color saturation of an SDR video after saturation enhancement is still far lower than that of HDR video. The display effect of the processed SDR video is therefore still poor.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and electronic equipment, which can solve the problem of poor SDR video display effect.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring a first image, wherein the first image is an image of a first standard; and performing target processing on the first image to obtain a second image, wherein the second image is an image of a second standard. The target processing comprises at least one of luminance mapping processing and color gamut mapping processing; the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including an acquisition module and a processing module. The acquisition module is configured to acquire a first image, wherein the first image is an image of a first standard. The processing module is configured to perform target processing on the first image to obtain a second image, wherein the second image is an image of a second standard. The target processing comprises at least one of luminance mapping processing and color gamut mapping processing; the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a first image is acquired, wherein the first image is an image of a first standard; target processing is performed on the first image to obtain a second image, wherein the second image is an image of a second standard. The target processing comprises at least one of luminance mapping processing and color gamut mapping processing; the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard. By this scheme, an image of the first standard can be processed into an image of the second standard; since the display luminance range and color gamut range of the second standard are larger than those of the first standard, the brightness and saturation of the image can be improved, thereby improving the display effect of the image.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of color coordinates in different color gamuts provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of different f-function curves provided by embodiments of the present application;
FIG. 4 is a graph of mapping for different sub-ranges provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more. Furthermore, in the description and the claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
The execution subject of the image processing method provided in the embodiment of the present application may be an electronic device or a functional module or a functional entity capable of implementing the image processing method in the electronic device, where the electronic device in the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet computer, a camera, a wearable device, and the like, and the image processing method provided in the embodiment of the present application is described below by taking the electronic device as an execution subject.
As shown in fig. 1, an embodiment of the present application provides an image processing method, which may include steps 101 to 102:
step 101, acquiring a first image.
Wherein the first image is an image of a first standard.
Optionally, the first image may be a picture, or may also be a video, or may be a moving picture, which may specifically be determined according to an actual use requirement, which is not limited in the embodiment of the present application.
Alternatively, the image of the first standard may be a standard dynamic range (Standard Dynamic Range, SDR) image.
And 102, performing target processing on the first image to obtain a second image.
The second image is an image of a second standard, and the target processing may include at least one of luminance mapping processing and color gamut mapping processing. The display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.
Alternatively, in the case where the image of the first standard is an SDR image, the image of the second standard may be a high dynamic range (High Dynamic Range, HDR) image.
It should be noted that the display luminance range of an SDR image is usually 0-100 nits, the display luminance range of an HDR image is usually 0-10000 nits, and after luminance mapping (tone mapping), the maximum luminance of an HDR image displayed on a screen is usually not lower than 1000 nits.
It should also be noted that the color gamut of an SDR image lies within the BT.709 gamut and the color gamut of an HDR image lies within the BT.2020 gamut; limited by current screen display technology, the color gamut of images actually displayed on screen lies within the Display P3 gamut.
Alternatively, the above-described target process may include a color gamut mapping process. The electronic device may map a gamut range of the first image from a gamut range of the first standard to a gamut range of the second standard based on a target lookup table.
Alternatively, the target lookup Table may be a 3D display lookup Table (LUT). The target lookup table may include a mapping of a gamut range of the first standard to a gamut range of the second standard.
Based on the above scheme, since the color gamut range of the first image can be mapped from the color gamut range of the first standard to the color gamut range of the second standard based on the target lookup table, the saturation of the image can be improved, thereby improving the display effect of the image.
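The 3D LUT mapping described above is typically applied with trilinear interpolation between the stored grid nodes. The following sketch is illustrative only and not part of the disclosure; the function name, list-based layout, and the 2x2x2 identity LUT are assumptions:

```python
def lut_lookup(lut, n, r, g, b):
    """Trilinearly interpolate an n*n*n LUT.
    lut[i][j][k] is an (R, G, B) tuple at grid node (i, j, k); r, g, b in [0, 1]."""
    def locate(x):
        pos = x * (n - 1)
        i = min(int(pos), n - 2)   # lower node index, clamped to the last cell
        return i, pos - i          # node index and fractional offset
    (i, fr), (j, fg), (k, fb) = locate(r), locate(g), locate(b)
    out = [0.0, 0.0, 0.0]
    # Blend the 8 surrounding grid nodes with trilinear weights.
    for di, wi in ((0, 1 - fr), (1, fr)):
        for dj, wj in ((0, 1 - fg), (1, fg)):
            for dk, wk in ((0, 1 - fb), (1, fb)):
                node = lut[i + di][j + dj][k + dk]
                w = wi * wj * wk
                for c in range(3):
                    out[c] += w * node[c]
    return tuple(out)

# A 2x2x2 identity LUT: each node maps to its own coordinates.
identity = [[[(i, j, k) for k in (0.0, 1.0)] for j in (0.0, 1.0)] for i in (0.0, 1.0)]
```

With a 17x17x17 table as described in this patent, `n` would be 17 and `lut` would hold the precomputed mapping values instead of the identity.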
Optionally, the electronic device may determine the target lookup table before mapping the gamut range of the first image from the gamut range of the first standard to the gamut range of the second standard based on the target lookup table. That is, the electronic device may determine a first luminance and a first chromaticity coordinate of the pixel point according to a first color coding of the pixel point in the first image under the first standard; determining a second chromaticity coordinate of the pixel point under the second standard according to the first brightness and the first chromaticity coordinate; determining a second color code of the pixel point under the second standard according to the first brightness and the second chromaticity coordinate; the target lookup table is determined based on the first color coding and the second color coding.
Specifically, the electronic device may acquire a set of RGB data of a pixel point in the first image under the first standard, and then linearize the RGB data by an electro-optical transfer function (EOTF) to obtain R′G′B′ data, where the EOTF is a power function with exponent 2.2, i.e., R′ = R^2.2 (and likewise for G′ and B′). Finally, the linearized R′G′B′ data are converted into the CIE 1931 XYZ color space through a 3x3 color conversion matrix to obtain the first color code XYZ of the pixel point under the first standard, i.e., equation (1):
[X; Y; Z] = [m11, m12, m13; m21, m22, m23; m31, m32, m33] * [R′; G′; B′] (1)
After determining the first color code XYZ, the electronic device may determine the first luminance Y and the first chromaticity coordinate a(u′, v′) of the pixel point based on the first color code XYZ, where
Y = m21*R′ + m22*G′ + m23*B′.
After determining the first luminance Y and the first chromaticity coordinate a(u′, v′), the electronic device may determine the second chromaticity coordinate b(u′1, v′1) of the pixel point under the second standard according to the first luminance Y and the first chromaticity coordinate a(u′, v′); and, according to the first luminance Y and the second chromaticity coordinate b(u′1, v′1), determine the second color code of the pixel point under the second standard. That is, the XYZ tristimulus values are obtained through the inverse form of equation (1), then multiplied by the 3x3 color space conversion matrix corresponding to the color gamut of the second standard to determine the RGB values, and finally the nonlinear second color code is determined through an opto-electronic transfer function (OETF), where the OETF is a power function with exponent 1/2.2, i.e., R′ = R^(1/2.2).
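As an illustrative sketch (not part of the patent text), the linearization and conversion to XYZ and to CIE 1976 u′v′ chromaticity can be written as follows; the BT.709 RGB-to-XYZ matrix is an assumed stand-in, since the patent leaves the matrix entries m_ij generic:

```python
# Assumed BT.709 RGB -> XYZ matrix (the patent only names generic m_ij entries).
M_709_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],   # second row gives luminance: m21, m22, m23
    [0.0193, 0.1192, 0.9505],
]

def eotf(v, gamma=2.2):
    """Linearize a nonlinear code value: R' = R ** 2.2."""
    return v ** gamma

def first_color_code(rgb):
    """Return (X, Y, Z) and the first chromaticity coordinate a = (u', v')."""
    lin = [eotf(c) for c in rgb]
    X, Y, Z = (sum(M_709_TO_XYZ[r][c] * lin[c] for c in range(3)) for r in range(3))
    d = X + 15 * Y + 3 * Z          # CIE 1976 UCS denominator
    return (X, Y, Z), (4 * X / d, 9 * Y / d)
```

For reference white (1, 1, 1) this yields the D65 chromaticity u′ ≈ 0.198, v′ ≈ 0.468.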
After determining the second color code, the electronic device may determine a set of mapping values in the target lookup table based on the first color code and the second color code.
Alternatively, the size of the target lookup table may be 17x17x17; that is, each of the R, G and B coordinate axes of the three-dimensional RGB space is equally divided into 17 nodes within the range [0, 1], with a step size of 1/16 = 0.0625, for a total of 17^3 = 4913 sets of RGB mapping values.
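The node grid of such a table can be enumerated directly; a small sketch (names assumed):

```python
N = 17                       # nodes per axis, as in the 17x17x17 table
step = 1 / (N - 1)           # 0.0625 between adjacent nodes
# All 17^3 = 4913 grid coordinates at which mapping values are stored.
nodes = [(i * step, j * step, k * step)
         for i in range(N) for j in range(N) for k in range(N)]
```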
Based on the above scheme, since the target lookup table can be determined based on the first color coding and the second color coding, a basis can be provided for the gamut expansion of the first image.
Optionally, the electronic device determines, according to the first luminance and the first chromaticity coordinates, a second chromaticity coordinate of the pixel point under the second standard, and may specifically include: the electronic device determines a color gamut boundary of a first standard and a color gamut boundary of a second standard corresponding to the first brightness; determining a first intersection point coordinate of a target line and a color gamut boundary of the first standard and a second intersection point coordinate of the target line and a color gamut boundary of the second standard, wherein the target line is an extension line of a connecting line of a white point coordinate and the first chromaticity coordinate; and determining the second chromaticity coordinate according to the white point coordinate, the first chromaticity coordinate, the first intersection point coordinate and the second intersection point coordinate.
Specifically, since the color gamut space is a three-dimensional space composed of chromaticity (two-dimensional plane) and luminance parameters, the shape of the color gamut space may also change with a change in the luminance parameter axis, for example, may be changed from a triangle to a quadrangle or pentagon, or the like. Before and after the color gamut expansion, the brightness of the color needs to be kept unchanged, so that the mapped chromaticity coordinates cannot exceed the color gamut boundary corresponding to the current brightness.
The gamut boundary of the first standard and the gamut boundary of the second standard may be determined as follows. With m21, m22, m23 and Y determined, the value of c can be determined from 12 equations, one per edge of the RGB unit cube: on each edge, two of R′, G′ and B′ are fixed at 0 or 1 and the remaining component equals c, and c is solved from Y = m21*R′ + m22*G′ + m23*B′, where c is a gray scale value. If the value of c is in the range (0, 1), the corresponding coordinate point is a node of the gamut boundary and is converted to uv coordinates; if the value of c is not in the range (0, 1), the corresponding coordinate point is discarded. Connecting all the converted coordinate points yields the gamut boundary.
For the same Y, different standard gamuts correspond to different m21, m22 and m23; therefore, the gamut boundary of the first standard and the gamut boundary of the second standard corresponding to the first luminance can both be determined by the above equations.
After determining the gamut boundary of the first standard and the gamut boundary of the second standard, letting the luminance be 1, the electronic device may determine the white point coordinate w. As shown in fig. 2, the connection line between the white point coordinate w and the first chromaticity coordinate a is extended to the gamut boundary of the first standard to obtain the first intersection point coordinate cA, and extended to the gamut boundary of the second standard to obtain the second intersection point coordinate cB. Then, the second chromaticity coordinate b is determined according to the white point coordinate w, the first chromaticity coordinate a, the first intersection point coordinate cA and the second intersection point coordinate cB.
Based on the above scheme, since the second chromaticity coordinates can be determined according to the white point coordinates, the first chromaticity coordinates, the first intersection coordinates and the second intersection coordinates, a basis can be provided for determining the second color coding of the pixel point under the second standard.
Optionally, the electronic device determines the second chromaticity coordinate according to the white point coordinate, the first chromaticity coordinate, the first intersection coordinate, and the second intersection coordinate, which may specifically include: the electronic device determines a first distance between the white point coordinates and the first chromaticity coordinates; determining a first vector between the white point coordinates and the first intersection point coordinates, and determining a second distance between the white point coordinates and the first intersection point coordinates according to the first vector; substituting the first distance, the second distance and the first vector into a first mapping function to obtain a second vector between the white point coordinates and the second chromaticity coordinates; determining the second chromaticity coordinates from the second vector and the white point coordinates; wherein the second intersection point coordinate is used to limit the second chromaticity coordinate within a color gamut boundary of the second standard.
Specifically, the electronic device may determine the second vector wb between the white point coordinate w and the second chromaticity coordinate b according to the first mapping function:
wb = f(|wa| / |wcA|) * wcB
where wb is the second vector, |wa| is the first distance, wcA is the first vector, |wcA| is the second distance, and wcB is the vector between the white point coordinate and the second intersection point coordinate.
After determining the second vector wb, the second chromaticity coordinate b may be determined from the second vector and the white point coordinate, i.e., b = w + wb.
It should be noted that, as shown in fig. 3, the function f in the first mapping function may take multiple forms: a convex function emphasizes and expands the colors of the high-saturation portion; a linear function produces a brighter, more uniform expansion result; and a piecewise function with a linear segment of slope 1 keeps the chromaticity coordinates of the low-saturation region unchanged before and after mapping, ensuring consistent rendering of skin tones. The specific form may be determined according to actual usage requirements, which is not limited in the embodiments of the present application.
Based on the above scheme, since the second chromaticity coordinates can be determined from the second vector and the white point coordinates, a basis for determining the target lookup table can be provided.
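As an illustrative sketch of the mapping just described (not from the patent): since w, a, cA and cB are colinear, the mapping reduces to scaling distances along the ray. The piecewise f below, with an assumed knee of 0.5, keeps low-saturation chromaticities unchanged and stretches the rest to the target gamut boundary:

```python
def extend_chroma(w, a, cA, cB, knee=0.5):
    """Map chromaticity a (inside the first-standard gamut) to b (inside the
    second-standard gamut) along the ray w -> a -> cA -> cB.
    w, a, cA, cB are (u', v') points; cA/cB are the gamut-boundary intersections."""
    def sub(p, q): return (p[0] - q[0], p[1] - q[1])
    def norm(v): return (v[0] ** 2 + v[1] ** 2) ** 0.5

    d_a = norm(sub(a, w))       # first distance  |wa|
    d_cA = norm(sub(cA, w))     # second distance |w cA|
    d_cB = norm(sub(cB, w))     # distance to the target boundary |w cB|
    # Piecewise f: identity below the knee (skin tones unchanged), then a
    # linear stretch that reaches the target boundary exactly when a is on cA.
    if d_a <= knee * d_cB:
        d_b = d_a                                   # slope-1 segment
    else:
        k0 = knee * d_cB / d_cA                     # where the identity segment ends
        t = d_a / d_cA                              # normalized source saturation
        d_b = knee * d_cB + (t - k0) / (1 - k0) * (1 - knee) * d_cB
    unit = sub(cB, w)
    unit = (unit[0] / d_cB, unit[1] / d_cB)         # ray direction from w
    return (w[0] + d_b * unit[0], w[1] + d_b * unit[1])
```

A point already of low saturation is returned unchanged, while a point on the first-standard boundary lands exactly on the second-standard boundary.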
Alternatively, the above-described target processing may include a luminance mapping processing; the electronic device may map the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on the target mapping curve.
Alternatively, the target mapping curve may include a mapping relationship between the display brightness of the first standard and the display brightness of the second standard.
Based on the above scheme, since the display brightness of the first image can be mapped from the display brightness of the first standard to the display brightness of the second standard based on the target mapping curve, the brightness range of the image can be improved, thereby improving the display effect of the image.
Alternatively, the electronic device may determine the target mapping curve before mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on the target mapping curve. That is, the electronic device may determine a second mapping curve of the first standard to the second standard from a first mapping curve of the second standard to the first standard; counting the proportion of pixel points in the first image in different gray scale ranges; and determining curve parameters according to the proportion, substituting the curve parameters into a curve formula of the second mapping curve, and obtaining the target mapping curve.
It should be noted that, in the case where the image of the first standard is an SDR image and the image of the second standard is an HDR image, the first mapping curve is a tone mapping curve, which can be determined by referring to the tone mapping formula of Method C in the published standard document BT.2446-1.
Specifically, the electronic device may calculate the inverse form of the first mapping curve and add an offset parameter offset, so that the luminance range Ysdr of the first standard is [0, 100] and the luminance range Yhdr of the second standard is [0, 1000]. The curve formula of the second mapping curve is formula (2):
Yhdr = Ysdr / (offset * k1), when Ysdr / offset < Ysdr_ip;
Yhdr = Yhdr_ip * (exp((Ysdr / offset - k4) / k2) + k3), when Ysdr / offset >= Ysdr_ip. (2)
The values of k1, k3 and Ysdr_ip can be flexibly adjusted according to the proportion of pixel points in the first image in different gray scale ranges, where Yhdr_ip = Ysdr_ip / k1, k2 = k1 * (1 - k3) * Yhdr_ip, k4 = k1 * Yhdr_ip - k2 * ln(1 - k3), and offset = 100 / (k2 * ln(1000 / Yhdr_ip - k3) + k4).
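The second mapping curve can be sketched as follows. The constants are assumed example values close to the BT.2446-1 Method-C defaults, not values prescribed by the patent:

```python
import math

# Example parameters (assumed, near the BT.2446-1 Method-C constants);
# k1, k3 and Ysdr_ip are the values the scheme adapts per gray-level mix.
k1, k3, Ysdr_ip = 0.8380, 0.7420, 58.5

Yhdr_ip = Ysdr_ip / k1
k2 = k1 * (1 - k3) * Yhdr_ip
k4 = k1 * Yhdr_ip - k2 * math.log(1 - k3)
offset = 100 / (k2 * math.log(1000 / Yhdr_ip - k3) + k4)

def sdr_to_hdr(y_sdr):
    """Second mapping curve: map SDR luminance in [0, 100] to HDR luminance in [0, 1000] nits."""
    y = y_sdr / offset
    if y < Ysdr_ip:
        return y / k1                                   # linear (shadow) segment
    return Yhdr_ip * (math.exp((y - k4) / k2) + k3)     # exponential (highlight) segment
```

By construction of offset, an input of 100 (SDR peak) maps to 1000 nits, and the curve is continuous and monotonically increasing.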
Alternatively, the electronic device may divide the gray scale range into a plurality of sub-ranges, for example into 3 sub-ranges: low, medium and high. As shown in fig. 4, different sub-ranges may correspond to different mapping curves, where curve 1 corresponds to a predominantly low-gray picture, curve 2 to a predominantly medium-gray picture, and curve 3 to a predominantly high-gray picture; different mapping curves correspond to different sets of k1, k3 and Ysdr_ip values.
It should be noted that a high proportion of low gray levels indicates that the overall picture content is darker; the corresponding k1 and k3 values are smaller, and in the low part of the curve Ysdr is higher than the corresponding Yhdr. A high proportion of high gray levels indicates that the overall picture content is brighter; the corresponding k1 and k3 values are larger, and in the low part of the curve Ysdr is lower than the corresponding Yhdr. The medium-gray case lies between the two.
Afterwards, the electronic device may count the proportion a:b:c of the pixel points in the first image in the different gray scale ranges, where a + b + c = 1, and then determine the curve parameters according to the ratio a:b:c:
k1=a*k1_low+b*k1_middle+c*k1_high;
k3=a*k3_low+b*k3_middle+c*k3_high;
Ysdr_ip=a*Ysdr_ip_low+b*Ysdr_ip_middle+c*Ysdr_ip_high。
Finally, the determined curve parameters are substituted into the curve formula of the second mapping curve to obtain the target mapping curve.
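The proportion counting and parameter blending can be sketched as below; the per-band anchor values and the 8-bit thresholds are illustrative assumptions only:

```python
# Per-band anchor values for (k1, k3, Ysdr_ip); illustrative assumptions only.
LOW    = (0.70, 0.60, 45.0)
MIDDLE = (0.84, 0.74, 58.5)
HIGH   = (0.95, 0.85, 70.0)

def blend_params(gray_values, low_t=85, high_t=170):
    """Count pixels in low/medium/high gray ranges (8-bit values assumed)
    and blend the curve parameters with the proportions a : b : c, a + b + c = 1."""
    n = len(gray_values)
    a = sum(g < low_t for g in gray_values) / n
    c = sum(g >= high_t for g in gray_values) / n
    b = 1 - a - c
    return tuple(a * lo + b * mi + c * hi for lo, mi, hi in zip(LOW, MIDDLE, HIGH))
```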
Based on the above-described scheme, since the target mapping curve can be determined, a reference basis can be provided for mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard.
Optionally, before mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on a target mapping curve, the electronic device may determine a target display luminance of a pixel having the largest RGB value in the first image; determining second display brightness according to the first display brightness of the pixel points in the first image and the target display brightness; and determining a third display brightness corresponding to the second display brightness in the target mapping curve, wherein the third display brightness is the display brightness of the second standard.
Specifically, the electronic device may normalize the gray scale values of the pixel points in the first image and convert them into the linear domain through the EOTF function. The electronic device may then determine the first standard luminance SDR_Luminance, determine the first display luminance of a pixel point according to SDR_Luminance and the gray scale value of the pixel point in the first image, determine the maximum RGB value of the pixel point according to SDR_Luminance and the target display luminance of the pixel with the largest RGB value in the first image, and then determine the second display luminance according to the maximum RGB value and the first display luminance of the pixel point.
Optionally, the first display luminance of a pixel point in the first image may be: Y = SDR_Luminance * (m*R + n*G + k*B), where m, n, k are the scaling coefficients used when converting the RGB space of the second standard into XYZ space; the maximum RGB value of the pixel point may be: Y_maxRGB = SDR_Luminance * max(R, G, B); and the second display luminance may be: Y_sdr = a1*Y + b1*Y_maxRGB, where a1 + b1 = 1.
It should be noted that the values of a1 and b1 can be flexibly adjusted according to the display effect desired by the user: the larger the value of b1, the more prominent the highlight and high-saturation regions in the picture; the larger the value of a1, the more subjectively natural the picture will appear.
Alternatively, the value of SDR_Luminance may be 100 nits. When the RGB space corresponding to the second standard is the Display P3 color gamut, m may be 0.229, n may be 0.692, and k may be 0.079.
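Using the constants stated above (SDR_Luminance = 100 nits; m, n, k for Display P3), the second display luminance can be sketched as follows; the a1/b1 split of 0.5/0.5 is an assumed example, not a value fixed by the patent:

```python
SDR_LUMINANCE = 100.0            # nits
M, N, K = 0.229, 0.692, 0.079    # Display P3 luminance weights from the text

def second_display_luminance(r, g, b, a1=0.5, b1=0.5):
    """Y_sdr = a1*Y + b1*Y_maxRGB with a1 + b1 = 1; r, g, b are linear in [0, 1]."""
    y = SDR_LUMINANCE * (M * r + N * g + K * b)       # first display luminance Y
    y_maxrgb = SDR_LUMINANCE * max(r, g, b)           # max-RGB luminance Y_maxRGB
    return a1 * y + b1 * y_maxrgb
```

A saturated primary such as pure red gets a boost from the max-RGB term relative to its plain luminance, which is exactly the highlight/saturation emphasis b1 controls.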
Optionally, after determining the second display luminance Y_sdr, the electronic device may determine the third display luminance Y_hdr corresponding to the second display luminance Y_sdr in the target mapping curve.
Based on the above scheme, since the second display luminance can be determined and the corresponding third display luminance of the second display luminance in the target mapping curve can be determined, mapping of the display luminance of the first standard to the display luminance of the second standard can be achieved.
Optionally, after mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on a target mapping curve, the electronic device may determine a target gain value according to the second display luminance and the third display luminance, and amplify the chromaticity values of the pixel points in the first image according to the target gain value.
Specifically, after the mapping of the display luminance and the color gamut, in order to keep the color coordinates of the pixels in the first image unchanged, the electronic device may determine the target gain value as ratio = Y_hdr / Y_sdr. The electronic device may first convert the first image from the RGB space of the Display P3 color gamut to the XYZ space, that is, multiply the linear-space RGB values of the pixels by the color conversion matrix corresponding to the P3 color gamut and then by SDR_Luminance, obtaining X, Y and Z. Then X, Y and Z are each multiplied by ratio, thereby amplifying the chromaticity values of the pixel points in the first image.
Optionally, in the case that the image of the second standard is an HDR image, the electronic device may further convert the amplified pixel values from the XYZ space to the RGB space of the BT.2020 color gamut, that is, multiply the amplified XYZ pixel values by the color conversion matrix corresponding to the BT.2020 color gamut, thereby obtaining the second image.
Optionally, the color conversion matrix corresponding to the P3 color gamut may be [0.487, 0.266, 0.198; 0.229, 0.692, 0.079; 0.000, 0.045, 1.044], and the color conversion matrix corresponding to the BT.2020 color gamut may be [1.717, -0.357, -0.254; -0.668, 1.617, 0.016; 0.018, -0.043, 0.942].
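The gain and conversion steps above can be sketched as follows, using the two matrices quoted in the text; the function name and argument conventions are illustrative assumptions.

```python
import numpy as np

# Matrices quoted in the text (rows separated by ';' in the original).
P3_TO_XYZ = np.array([[0.487, 0.266, 0.198],
                      [0.229, 0.692, 0.079],
                      [0.000, 0.045, 1.044]])
XYZ_TO_BT2020 = np.array([[ 1.717, -0.357, -0.254],
                          [-0.668,  1.617,  0.016],
                          [ 0.018, -0.043,  0.942]])
SDR_LUMINANCE = 100.0  # nits, as given in the text

def apply_gain(rgb_p3_linear, y_sdr, y_hdr):
    """Scale X, Y, Z by ratio = Y_hdr / Y_sdr, keeping chromaticity coordinates fixed,
    then convert to BT.2020 linear RGB."""
    xyz = rgb_p3_linear @ P3_TO_XYZ.T * SDR_LUMINANCE  # P3 RGB -> XYZ, absolute scale
    ratio = np.asarray(y_hdr / y_sdr)                  # target gain value
    xyz_scaled = xyz * ratio[..., None]                # equal gain on X, Y, Z keeps (x, y)
    return xyz_scaled @ XYZ_TO_BT2020.T                # second image, BT.2020 linear RGB
```

Because X, Y and Z are scaled by the same factor, the chromaticity (x, y) = (X/(X+Y+Z), Y/(X+Y+Z)) is unchanged, which is the stated goal of the gain step.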
Alternatively, in order to enable the second image to be displayed under the second standard, the electronic device may convert the second image into a format corresponding to the second standard. For example, in the case that the image of the second standard is an HDR image, the electronic device may apply an OETF to the RGB values of the second image and then multiply by 1023, thereby obtaining the HLG format of HDR, that is, non-linearly encoded RGB values whose color coordinates lie within the P3 color gamut.
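The encoding step might look like the sketch below. The text does not specify which OETF is meant; the sketch assumes the standard BT.2100 HLG OETF, whose constants come from that specification rather than from this document.

```python
import numpy as np

# BT.2100 HLG OETF constants (an assumption; the text only says "an OETF").
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_encode_10bit(rgb_linear: np.ndarray) -> np.ndarray:
    """Apply the HLG OETF to linear RGB in [0, 1], then scale to 10-bit code values."""
    e = np.clip(rgb_linear, 0.0, 1.0)
    low = np.sqrt(3.0 * e)                                  # segment for E <= 1/12
    high = A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C  # log segment; maximum() avoids log(<=0)
    encoded = np.where(e <= 1.0 / 12.0, low, high)
    return np.round(encoded * 1023.0).astype(np.uint16)     # "multiply by 1023" step
```

With these constants the OETF maps 0 to 0 and 1 to (approximately) 1, so the 10-bit codes span the full 0–1023 range.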
According to the scheme, the chromaticity value of the pixel point in the first image can be amplified according to the target gain value, so that the color coordinates of the pixel in the first image can be kept unchanged.
Optionally, before determining the second display brightness according to the first display brightness of the pixel point in the first image and the target display brightness, the electronic device may acquire a fourth display brightness of the third image under the first standard and a fifth display brightness of the first image under the first standard; determining an average value of the fourth display brightness and the fifth display brightness as the first display brightness; wherein the third image is a previous image frame of the first image.
Alternatively, the electronic device may identify the fifth display luminance of the first image and retrieve the fourth display luminance of the third image from memory. After determining the first display luminance, the electronic device may store the fifth display luminance, so that when it processes the image frame following the first image it can obtain the fifth display luminance of the first image from memory and only needs to identify the display luminance of that next frame under the first standard. This speeds up acquisition of the fourth display luminance and improves data-processing efficiency.
Illustratively, take the third image as a first image frame and the first image as a second image frame, the image frame immediately following the first image frame. After determining the display luminance Y_sdr of the first image frame under the first standard, the electronic device may not only determine the display luminance Y_hdr of the first image frame under the second standard through Y_sdr and the target mapping curve, but also store Y_sdr of the first image frame in a buffer register of size 1×1. Afterwards, when the electronic device determines the display luminance Y_sdr of the second image frame under the first standard, it may obtain Y_sdr of the first image frame from the buffer register and average it with Y_sdr of the second image frame, obtaining a new Y_sdr for the second image frame; this new Y_sdr is the first display luminance of the second image frame. Finally, the electronic device may determine the display luminance of the second image frame under the second standard, that is, the third display luminance of the second image frame, according to this first display luminance and the target mapping curve.
Based on the above scheme, since the average of the fourth display luminance and the fifth display luminance can be taken as the display luminance of the pixel points in the first image, the flicker that arises when adjacent frames of a video differ greatly in content can be avoided.
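The frame-to-frame averaging with a one-element buffer described above can be sketched as a small stateful helper; the class and method names are illustrative.

```python
class FrameLuminanceSmoother:
    """Average the current frame's SDR luminance with the previous frame's value,
    held in a one-element buffer, to suppress inter-frame flicker."""

    def __init__(self):
        self._prev = None  # plays the role of the 1x1 buffer register in the text

    def smooth(self, y_sdr: float) -> float:
        if self._prev is None:
            first = y_sdr                      # first frame: nothing to average with
        else:
            first = (y_sdr + self._prev) / 2.0  # mean of fourth and fifth display luminance
        self._prev = y_sdr                      # store this frame for the next one
        return first                            # the "first display luminance" of this frame
```

Note that the raw per-frame luminance, not the smoothed value, is stored in the buffer, matching the text's description of caching Y_sdr of each frame as it is determined.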
In the embodiment of the application, the image can be processed from the image of the first standard to the image of the second standard, and the display brightness range of the second standard is larger than the display brightness range of the first standard, so that the brightness and the saturation of the image can be improved, and the display effect of the image is improved.
According to the image processing method provided by the embodiments of the present application, the execution subject may be an image processing apparatus. In the embodiments of the present application, the image processing apparatus is described by taking the case in which it executes the image processing method as an example.
As shown in fig. 5, an embodiment of the present application further provides an image processing apparatus 500, including: an acquisition module 501 and a processing module 502. The acquiring module 501 is configured to acquire a first image, where the first image is an image of a first standard; the processing module 502 is configured to perform target processing on the first image to obtain a second image, where the second image is an image of a second standard; wherein the target processing includes at least one of luminance mapping processing and color gamut mapping processing, the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.
Optionally, the target process includes the gamut mapping process; the processing module 502 is configured to map a gamut range of the first image from a gamut range of the first standard to a gamut range of the second standard based on a target lookup table.
Optionally, the processing module 502 is further configured to determine a first luminance and a first chromaticity coordinate of a pixel point in the first image according to a first color coding of the pixel point under the first standard; determining a second chromaticity coordinate of the pixel point under the second standard according to the first brightness and the first chromaticity coordinate; determining a second color code of the pixel point under the second standard according to the first brightness and the second chromaticity coordinate; the target lookup table is determined based on the first color coding and the second color coding.
Optionally, the processing module 502 is configured to determine a gamut boundary of the first standard and a gamut boundary of the second standard corresponding to the first brightness; determining a first intersection point coordinate of a target line and a color gamut boundary of the first standard and a second intersection point coordinate of the target line and a color gamut boundary of the second standard, wherein the target line is an extension line of a connecting line of a white point coordinate and the first chromaticity coordinate; and determining the second chromaticity coordinate according to the white point coordinate, the first chromaticity coordinate, the first intersection point coordinate and the second intersection point coordinate.
Optionally, the processing module 502 is configured to determine a first distance between the white point coordinates and the first chromaticity coordinates; determining a first vector between the white point coordinates and the first intersection point coordinates, and determining a second distance between the white point coordinates and the first intersection point coordinates according to the first vector; substituting the first distance, the second distance and the first vector into a first mapping function to obtain a second vector between the white point coordinates and the second chromaticity coordinates; determining the second chromaticity coordinates from the second vector and the white point coordinates; wherein the second intersection point coordinate is used to limit the second chromaticity coordinate within a color gamut boundary of the second standard.
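The chromaticity-expansion geometry above can be sketched as follows. The patent does not disclose the concrete form of its "first mapping function", so the linear stretch below is a stand-in assumption; the second-intersection cap matches the stated role of limiting the result to the second standard's gamut boundary, and all names are illustrative.

```python
import numpy as np

def map_chromaticity(white, p, i1, i2):
    """Expand chromaticity p along the ray from the white point.

    white, p: white-point and first chromaticity (x, y) coordinates;
    i1, i2: intersections of that ray with the first-standard / second-standard
    gamut boundaries. A simple linear stretch stands in for the unspecified
    "first mapping function"; i2 caps the result at the target boundary.
    """
    white, p, i1, i2 = map(np.asarray, (white, p, i1, i2))
    v1 = i1 - white                        # first vector
    d1 = np.linalg.norm(p - white)         # first distance (white point to p)
    d2 = np.linalg.norm(v1)                # second distance (white point to i1)
    stretch = d1 / d2                      # relative position inside the source gamut
    d_target = np.linalg.norm(i2 - white)
    new_dist = min(stretch * d_target, d_target)  # limit within the second boundary
    direction = v1 / d2                    # unit vector along the target line
    return white + new_dist * direction    # second chromaticity coordinate
```

A point lying exactly on the first standard's boundary maps onto the second standard's boundary, and the white point itself is left unchanged.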
Optionally, the target process includes the luminance mapping process; the processing module 502 is configured to map, based on a target mapping curve, a display luminance of the first image from a display luminance of the first standard to a display luminance of the second standard.
Optionally, the processing module 502 is further configured to determine a second mapping curve from the first standard to the second standard according to a first mapping curve from the second standard to the first standard; counting the proportion of pixel points in the first image in different gray scale ranges; and determining curve parameters according to the proportion, substituting the curve parameters into a curve formula of the second mapping curve, and obtaining the target mapping curve.
Optionally, the processing module 502 is configured to determine a target display brightness of a pixel with the largest RGB value in the first image; determining second display brightness according to the first display brightness of the pixel points in the first image and the target display brightness; and determining a third display brightness corresponding to the second display brightness in the target mapping curve, wherein the third display brightness is the display brightness of the second standard.
Optionally, the processing module 502 is configured to determine a target gain value according to the second display brightness and the third display brightness; and amplifying the chromaticity value of the pixel point in the first image according to the target gain value.
Optionally, the acquiring module 501 is configured to acquire a fourth display brightness of the third image under the first standard and a fifth display brightness of the first image under the first standard;
the processing module 502 is configured to determine, as the first display luminance, a mean value of the fourth display luminance and the fifth display luminance; wherein the third image is a previous image frame of the first image.
In the embodiment of the application, the image can be processed from the image of the first standard to the image of the second standard, and the display brightness range of the second standard is larger than the display brightness range of the first standard, so that the brightness and the saturation of the image can be improved, and the display effect of the image is improved.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), etc., and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (television, TV), a teller machine or a self-service machine, etc.; the embodiments of the present application are not specifically limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to fig. 4, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 6, the embodiment of the present application further provides an electronic device 600, including a processor 601 and a memory 602, where the memory 602 stores a program or an instruction that can be executed on the processor 601, and the program or the instruction implements each step of the embodiment of the image processing method when executed by the processor 601, and can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 1010 is configured to acquire a first image, where the first image is an image of a first standard; and perform target processing on the first image to obtain a second image, where the second image is an image of a second standard; wherein the target processing includes at least one of luminance mapping processing and color gamut mapping processing, the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.
In the embodiment of the application, the image can be processed from the image of the first standard to the image of the second standard, and the display brightness range of the second standard is larger than the display brightness range of the first standard, so that the brightness and the saturation of the image can be improved, and the display effect of the image is improved.
Optionally, the target process includes the gamut mapping process; a processor 1010 for mapping a gamut range of the first image from a gamut range of the first standard to a gamut range of the second standard based on a target lookup table.
In the embodiment of the application, since the color gamut range of the first image can be mapped from the color gamut range of the first standard to the color gamut range of the second standard based on the target lookup table, the saturation of the image can be improved, and the display effect of the image can be improved.
Optionally, the processor 1010 is further configured to determine a first luminance and a first chromaticity coordinate of a pixel point in the first image according to a first color coding of the pixel point under the first standard; determining a second chromaticity coordinate of the pixel point under the second standard according to the first brightness and the first chromaticity coordinate; determining a second color code of the pixel point under the second standard according to the first brightness and the second chromaticity coordinate; the target lookup table is determined based on the first color coding and the second color coding.
In the embodiment of the application, since the target lookup table can be determined based on the first color code and the second color code, a basis can be provided for color gamut expansion of the first image.
Optionally, a processor 1010 is configured to determine a gamut boundary of the first standard and a gamut boundary of the second standard corresponding to the first luminance; determining a first intersection point coordinate of a target line and a color gamut boundary of the first standard and a second intersection point coordinate of the target line and a color gamut boundary of the second standard, wherein the target line is an extension line of a connecting line of a white point coordinate and the first chromaticity coordinate; and determining the second chromaticity coordinate according to the white point coordinate, the first chromaticity coordinate, the first intersection point coordinate and the second intersection point coordinate.
In the embodiment of the application, the second chromaticity coordinates can be determined according to the white point coordinates, the first chromaticity coordinates, the first intersection point coordinates and the second intersection point coordinates, so that a basis can be provided for determining the second color coding of the pixel point under the second standard.
Optionally, a processor 1010 for determining a first distance between the white point coordinates and the first chromaticity coordinates; determining a first vector between the white point coordinates and the first intersection point coordinates, and determining a second distance between the white point coordinates and the first intersection point coordinates according to the first vector; substituting the first distance, the second distance and the first vector into a first mapping function to obtain a second vector between the white point coordinates and the second chromaticity coordinates; determining the second chromaticity coordinates from the second vector and the white point coordinates; wherein the second intersection point coordinate is used to limit the second chromaticity coordinate within a color gamut boundary of the second standard.
In the embodiment of the application, since the second chromaticity coordinates can be determined according to the second vector and the white point coordinates, the basis for determining the target lookup table can be provided.
Optionally, the target process includes the luminance mapping process; a processor 1010 for mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on a target mapping curve.
In the embodiment of the application, since the display brightness of the first image can be mapped from the display brightness of the first standard to the display brightness of the second standard based on the target mapping curve, the brightness range of the image can be improved, and the display effect of the image can be improved.
Optionally, the processor 1010 is further configured to determine a second mapping curve from the first standard to the second standard according to a first mapping curve from the second standard to the first standard; counting the proportion of pixel points in the first image in different gray scale ranges; and determining curve parameters according to the proportion, substituting the curve parameters into a curve formula of the second mapping curve, and obtaining the target mapping curve.
In the embodiment of the present application, since the target mapping curve may be determined, a reference basis may be provided for mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard.
Optionally, a processor 1010 is configured to determine a target display brightness of a pixel with the largest RGB value in the first image; determining second display brightness according to the first display brightness of the pixel points in the first image and the target display brightness; and determining a third display brightness corresponding to the second display brightness in the target mapping curve, wherein the third display brightness is the display brightness of the second standard.
In the embodiment of the application, since the second display brightness can be determined and the third display brightness corresponding to the second display brightness in the target mapping curve can be determined, mapping from the display brightness of the first standard to the display brightness of the second standard can be realized.
Optionally, a processor 1010 is configured to determine a target gain value according to the second display luminance and the third display luminance; and amplifying the chromaticity value of the pixel point in the first image according to the target gain value.
In the embodiment of the application, the chromaticity value of the pixel point in the first image can be amplified according to the target gain value, so that the color coordinates of the pixel in the first image can be kept unchanged.
Optionally, the processor 1010 is configured to obtain a fourth display brightness of the third image under the first standard and a fifth display brightness of the first image under the first standard; determining an average value of the fourth display brightness and the fifth display brightness as the first display brightness; wherein the third image is a previous image frame of the first image.
In the embodiment of the present application, since the average value of the fourth display brightness and the fifth display brightness can be determined as the first display brightness of the pixel point in the first image, the flicker problem occurring in the video due to the large difference between the contents of the adjacent frames can be avoided.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct rambus RAM (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the image processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, a system-on-a-chip, or the like.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the image processing method described above, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, they may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the related art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring a first image, wherein the first image is an image of a first standard;
performing target processing on the first image to obtain a second image, wherein the second image is an image of a second standard;
wherein the target processing comprises at least one of luminance mapping processing and color gamut mapping processing; the display luminance range of the second standard is larger than the display luminance range of the first standard, and the color gamut range of the second standard is larger than the color gamut range of the first standard.
2. The image processing method according to claim 1, wherein the target process includes the color gamut mapping process; the performing target processing on the first image includes:
the gamut range of the first image is mapped from the gamut range of the first standard to the gamut range of the second standard based on a target lookup table.
3. The image processing method according to claim 2, characterized in that the method further comprises:
determining a first brightness and a first chromaticity coordinate of a pixel point in the first image according to a first color code of the pixel point under the first standard;
Determining a second chromaticity coordinate of the pixel point under the second standard according to the first brightness and the first chromaticity coordinate;
determining a second color code of the pixel point under the second standard according to the first brightness and the second chromaticity coordinate;
the target lookup table is determined based on the first color coding and the second color coding.
4. The image processing method according to claim 3, wherein the determining the second chromaticity coordinates of the pixel point under the second standard from the first luminance and the first chromaticity coordinates includes:
determining a gamut boundary of the first standard and a gamut boundary of the second standard corresponding to the first luminance;
determining a first intersection point coordinate of a target line and a color gamut boundary of the first standard and a second intersection point coordinate of the target line and a color gamut boundary of the second standard, wherein the target line is an extension line of a connecting line of a white point coordinate and the first chromaticity coordinate;
and determining the second chromaticity coordinate according to the white point coordinate, the first chromaticity coordinate, the first intersection point coordinate and the second intersection point coordinate.
5. The image processing method according to claim 4, wherein the determining the second chromaticity coordinate according to the white point coordinates, the first chromaticity coordinate, the first intersection point coordinate, and the second intersection point coordinate comprises:
determining a first distance between the white point coordinates and the first chromaticity coordinates;
determining a first vector between the white point coordinates and the first intersection point coordinates, and determining a second distance between the white point coordinates and the first intersection point coordinates according to the first vector;
substituting the first distance, the second distance and the first vector into a first mapping function to obtain a second vector between the white point coordinates and the second chromaticity coordinates;
determining the second chromaticity coordinates from the second vector and the white point coordinates;
wherein the second intersection point coordinate is used to limit the second chromaticity coordinate within a color gamut boundary of the second standard.
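The geometry of claims 4 and 5 can be sketched as follows: the source chromaticity is pushed outward along the ray from the white point, scaled by the ratio of the two gamut boundaries at the current luminance. The "first mapping function" is not specified in the claims, so a simple linear expansion clamped to the second boundary is assumed here; all function and variable names are illustrative.

```python
import math

# Hypothetical sketch of claims 4-5: map a chromaticity coordinate from the
# first standard's gamut into the second standard's along the extension line
# from the white point through the source chromaticity.

def map_chromaticity(white, c1, boundary1, boundary2):
    """white, c1: (x, y) chromaticity coordinates.
    boundary1 / boundary2: intersection of the white->c1 ray with the first /
    second standard's gamut boundary at this luminance (the first and second
    intersection point coordinates of claim 4)."""
    d1 = math.dist(white, c1)                                # first distance
    v1 = (boundary1[0] - white[0], boundary1[1] - white[1])  # first vector
    d2 = math.hypot(*v1)                                     # second distance
    d_out = math.dist(white, boundary2)  # reach of the wider (second) gamut
    if d2 == 0.0:
        return white
    # Assumed first mapping function: preserve relative saturation d1/d2 and
    # rescale it into the wider gamut, clamped so the result never passes the
    # second intersection point (the role claim 5 assigns to it).
    scale = min(d1 / d2, 1.0) * d_out / d2
    v2 = (v1[0] * scale, v1[1] * scale)                      # second vector
    return (white[0] + v2[0], white[1] + v2[1])              # second coordinate
```

A point sitting on the first gamut boundary lands exactly on the second boundary, and the white point maps to itself, which is the minimal behavior any such mapping function must have.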
6. The image processing method according to claim 1, wherein the target processing comprises the luminance mapping processing, and the performing of the target processing on the first image comprises:
the display luminance of the first image is mapped from the display luminance of the first standard to the display luminance of the second standard based on a target mapping curve.
7. The image processing method according to claim 6, characterized in that the method further comprises:
determining a second mapping curve from the first standard to the second standard according to a first mapping curve from the second standard to the first standard;
counting the proportion of pixel points in the first image in different gray scale ranges;
and determining curve parameters according to the proportion, substituting the curve parameters into a curve formula of the second mapping curve, and obtaining the target mapping curve.
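The curve derivation of claim 7 can be sketched as follows. The claims do not give the curve formula or the parameter rule, so this sketch assumes the first (second-standard to first-standard) curve is a gamma curve, making the second curve its inverse, and uses the proportion of dark pixels to bias the parameter; the threshold and the bias heuristic are assumptions.

```python
# Hypothetical sketch of claim 7: derive a target tone-mapping curve from the
# inverse of the first mapping curve, with its parameter set from the gray-
# scale distribution of the first image.

def build_target_curve(pixels_gray, base_gamma=2.0, dark_threshold=64):
    """pixels_gray: 8-bit gray levels of the first image.
    Returns a curve mapping normalized [0, 1] luminance to [0, 1]."""
    # Proportion of pixels in the dark gray-scale range (assumed statistic).
    dark_ratio = sum(1 for p in pixels_gray if p < dark_threshold) / len(pixels_gray)
    # Assumed parameter rule: more dark pixels -> gentler expansion.
    gamma = base_gamma * (1.0 - 0.5 * dark_ratio)
    inv = 1.0 / gamma  # second curve = inverse of the first (gamma) curve
    return lambda x: x ** inv
```

The curve fixes the endpoints (black stays black, peak stays peak) and lifts mid-tones, which is the qualitative behavior an SDR-to-HDR expansion curve needs.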
8. The image processing method according to claim 6, wherein before the mapping of the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on the target mapping curve, the method further comprises:
determining target display brightness of a pixel with the largest RGB value in the first image;
determining second display brightness according to the first display brightness of the pixel points in the first image and the target display brightness;
the mapping the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on the target mapping curve includes:
and determining a third display brightness corresponding to the second display brightness in the target mapping curve, wherein the third display brightness is the display brightness of the second standard.
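Claims 6 and 8 together can be sketched as a normalize-then-lookup step. The claims only say the second display brightness is determined "according to" the first display brightness and the target display brightness; the division-based normalization below is an assumption, as are all names.

```python
# Hypothetical sketch of claims 6 and 8: normalize each pixel's display
# luminance by the target display brightness of the brightest pixel (the one
# with the largest RGB value), then read the second-standard luminance (the
# third display brightness) off the target mapping curve.

def map_display_luminance(luminances, target_curve):
    """luminances: first display brightness per pixel, in nits or codes.
    target_curve: callable mapping normalized [0, 1] input to second-standard
    display luminance."""
    peak = max(luminances)  # target display brightness of the brightest pixel
    out = []
    for lum in luminances:
        second = lum / peak if peak else 0.0  # second display brightness
        out.append(target_curve(second))      # third display brightness
    return out
```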
9. The image processing method according to claim 8, wherein after the mapping of the display luminance of the first image from the display luminance of the first standard to the display luminance of the second standard based on the target mapping curve, the method further comprises:
determining a target gain value according to the second display brightness and the third display brightness;
and amplifying the chromaticity value of the pixel point in the first image according to the target gain value.
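The gain step of claim 9 can be sketched as follows. The claims do not define how the gain is derived from the two luminances; the ratio used here is an assumption, chosen so that chroma scales with the luminance boost and perceived saturation is preserved.

```python
# Hypothetical sketch of claim 9: after luminance mapping, amplify each
# pixel's chromaticity values by a target gain derived from the pre- and
# post-mapping luminances.

def amplify_chroma(second_lum, third_lum, chroma):
    """second_lum: display brightness before curve lookup.
    third_lum: display brightness after curve lookup.
    chroma: tuple of chromaticity values to amplify."""
    # Assumed gain rule: the same factor the luminance gained.
    gain = third_lum / second_lum if second_lum else 1.0
    return tuple(c * gain for c in chroma)
```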
10. The image processing method according to claim 8, wherein before the second display luminance is determined according to the first display luminance of the pixel point in the first image and the target display luminance, the method further comprises:
acquiring fourth display brightness of a third image under the first standard and fifth display brightness of the first image under the first standard;
determining an average value of the fourth display brightness and the fifth display brightness as the first display brightness;
wherein the third image is a previous image frame of the first image.
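The temporal step of claim 10 is a two-frame average, which damps frame-to-frame luminance jumps (and hence flicker) in video. A minimal sketch, with illustrative names:

```python
# Hypothetical sketch of claim 10: the first display brightness fed to the
# mapping of claim 8 is the mean of the previous frame's display brightness
# (fourth) and the current frame's (fifth) under the first standard.

def smoothed_luminance(prev_frame_lum, cur_frame_lum):
    """prev_frame_lum: fourth display brightness (third image, previous frame).
    cur_frame_lum: fifth display brightness (first image, current frame)."""
    return (prev_frame_lum + cur_frame_lum) / 2.0
```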
11. An image processing apparatus, comprising: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring a first image, wherein the first image is an image of a first standard;
the processing module is used for carrying out target processing on the first image to obtain a second image, wherein the second image is an image of a second standard;
wherein the target processing comprises at least one of: luminance mapping processing and color gamut mapping processing; a display luminance range of the second standard is larger than a display luminance range of the first standard, and a color gamut range of the second standard is larger than a color gamut range of the first standard.
12. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the image processing method of any one of claims 1 to 10.
CN202310422250.3A 2023-04-19 2023-04-19 Image processing method and electronic device Pending CN116489377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310422250.3A CN116489377A (en) 2023-04-19 2023-04-19 Image processing method and electronic device

Publications (1)

Publication Number Publication Date
CN116489377A 2023-07-25

Family

ID=87220732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310422250.3A Pending CN116489377A (en) 2023-04-19 2023-04-19 Image processing method and electronic device

Country Status (1)

Country Link
CN (1) CN116489377A (en)

Similar Documents

Publication Publication Date Title
KR20080045132A (en) Hardware-accelerated color data processing
JP5810803B2 (en) Method, apparatus and system for adjusting whiteboard image
CN113507598B (en) Video picture display method, device, terminal and storage medium
JP2018511193A (en) Method and apparatus for inverse tone mapping of pictures
WO2019101005A1 (en) Pixel compensation method and apparatus, and terminal device
CN109118436B (en) Image tone adaptation method, corresponding electronic device and storage medium
CN112750086A (en) Image processing method and device, electronic equipment and storage medium
WO2023056950A1 (en) Image processing method and electronic device
CN112071267A (en) Brightness adjusting method, brightness adjusting device, terminal equipment and storage medium
JP2012181261A (en) Image processing apparatus, image processing program, and image processing method
KR20110137629A (en) Method for providing texture effect and display apparatus applying the same
CN116489377A (en) Image processing method and electronic device
CN113393391B (en) Image enhancement method, image enhancement device, electronic apparatus, and storage medium
WO2022083081A1 (en) Image rendering method and apparatus, and device and storage medium
TWI327868B (en) Image processing method
US8630488B2 (en) Creating a duotone color effect using an ICC profile
US9807315B1 (en) Lookup table interpolation in a film emulation camera system
US20190130851A1 (en) Image processing method and device thereof
CN109685859B (en) Three-dimensional color automatic adjustment method based on 3D lookup table
CN110310232B (en) System and method for expanding and enhancing digital image color gamut
CN101685537B (en) Region gain is utilized to correct with the method strengthening image
KR20230010842A (en) Image processing device, image processing method, and recording medium recording image processing program
CN116485665A (en) Image processing method, device, acquisition card, electronic equipment and storage medium
CN117011124A (en) Gain map generation method and device, electronic equipment and medium
CN117952875A (en) Image contrast enhancement method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination