CN118138903A - Image processing method and device

Info

Publication number
CN118138903A
Authority
CN
China
Prior art keywords
brightness
image
pixel point
interval
luminance
Legal status
Pending
Application number
CN202410322240.7A
Other languages
Chinese (zh)
Inventor
王嗣舜
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202410322240.7A
Publication of CN118138903A

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an image processing method and a device thereof, belonging to the technical field of electronic equipment. The method comprises the following steps: dividing pixel points of a first image into N brightness intervals, wherein the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points comprise the pixel point with the largest brightness value in the first image, and the first image is a high dynamic range imaging (HDR) image; respectively carrying out brightness compression on the pixel points in the corresponding brightness intervals based on the brightness compression parameter corresponding to each brightness interval to obtain a second image, wherein each brightness interval corresponds to one brightness compression parameter, and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval; and performing color gamut compression processing on the second image to obtain a third image, wherein N is an integer greater than 1.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to an image processing method and an image processing device.
Background
With the widespread use of photographing on electronic equipment, users increasingly pursue visually pleasing images. In order to obtain an image with a better display effect, the electronic device can obtain a high dynamic range imaging (High Dynamic Range, HDR) image with a larger dynamic range by combining a plurality of images captured with different exposure values in the same scene.
However, when an HDR image is processed to obtain a standard dynamic range (Standard Dynamic Range, SDR) image, the processing typically needs to be performed manually. Since the brightness, contrast, and the like of the HDR image need to be manually adjusted, the processing procedure of the HDR image is complicated.
Disclosure of Invention
An object of an embodiment of the present application is to provide an image processing method and apparatus, which can simplify the processing procedure of an HDR image.
In a first aspect, an embodiment of the present application provides an image processing method, including: dividing pixel points of a first image into N brightness intervals, wherein the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points comprise the pixel point with the largest brightness value in the first image, and the first image is a high dynamic range imaging (HDR) image; respectively carrying out brightness compression on the pixel points in the corresponding brightness intervals based on the brightness compression parameter corresponding to each brightness interval to obtain a second image, wherein each brightness interval corresponds to one brightness compression parameter, and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval; and performing color gamut compression processing on the second image to obtain a third image, wherein N is an integer greater than 1.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a processing module; the processing module is used for dividing the pixel points of the first image into N brightness intervals, wherein the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points comprise the pixel point with the largest brightness value in the first image, and the first image is a high dynamic range imaging (HDR) image; the processing module is further used for respectively carrying out brightness compression on the pixel points in the corresponding brightness intervals based on the brightness compression parameter corresponding to each brightness interval to obtain a second image, wherein each brightness interval corresponds to one brightness compression parameter, and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval; and the processing module is further used for carrying out color gamut compression processing on the second image to obtain a third image, wherein N is an integer greater than 1.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the pixel points of the first image are divided into N brightness intervals, wherein the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points comprise the pixel point with the largest brightness value in the first image, and the first image is a high dynamic range imaging (HDR) image; brightness compression is respectively carried out on the pixel points in the corresponding brightness intervals based on the brightness compression parameter corresponding to each brightness interval to obtain a second image, wherein each brightness interval corresponds to one brightness compression parameter, and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval; and color gamut compression processing is performed on the second image to obtain a third image, wherein N is an integer greater than 1. By this scheme, when an SDR image is generated based on an HDR image, the HDR image can be partitioned for brightness compression, so that the obtained image better conforms to the nonlinear perception of brightness by human eyes. In addition, color gamut compression processing can be performed on the brightness-compressed image, so that the obtained image still keeps normal colors. Thus, the HDR image does not need to be manually processed, an SDR image with a better image effect can be obtained, and the processing process of the HDR image is simplified.
Drawings
FIG. 1 is one of the flowcharts of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of image brightness compression according to an embodiment of the present application;
FIG. 3 is a second flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is one of the mapping diagrams of a target brightness value of a pixel point according to an embodiment of the present application;
FIG. 5 is a second mapping diagram of a target brightness value of a pixel point according to an embodiment of the present application;
FIG. 6 is a schematic diagram of pixel points falling outside a color gamut boundary according to an embodiment of the present application;
FIG. 7 is a third flowchart of an image processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of pixel points being compressed back into a color gamut boundary according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 10 is one of the schematic diagrams of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 11 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first," "second," and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more. In addition, "and/or" in the specification denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The term "at least one" and the like in the description of the present application means any one, any two, or any combination of two or more of the listed objects. For example, at least one of a, b and c may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b and c", wherein a, b and c may each be single or plural. Similarly, "at least two" means two or more, and its meaning is similar to that of "at least one".
The image processing method and the device provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The image processing method and device provided by the embodiments of the application are applicable to scenarios in which an HDR image is processed into an SDR image, and in particular to scenarios in which sample pairs containing an HDR image and an SDR image are generated in batches for training a first neural network model.
With the widespread use of photographing on electronic equipment, users increasingly pursue visually pleasing images. However, how to define the "beauty" of an image has long been a difficult problem, and it is hard to express "beauty" objectively, so that images produced by most traditional algorithm-based computational photography methods tend to be dull in color. Therefore, more and more algorithms turn to AI to express beauty, and HDR is one of the most representative of such algorithms.
In the related art, the HDR algorithm may first capture multiple frames with different exposures in the same scene, synthesize the frames to obtain a high-bit-width image containing more image information, and then generate an SDR image through a tone mapping (Tone Mapping, TM) algorithm. That is, the TM algorithm ultimately determines the color effect of the image.
In general, the TM algorithm acts on the high-bit-width image to adaptively adjust the brightness and color of the image, ultimately producing an SDR image that can be displayed on the screen of an electronic device. It will be appreciated that when the brightness and color of an image are adjusted by an AI model, the AI model requires sample pairs for training and learning. Typically, a sample pair is generated by taking an HDR image as input and manually post-processing the HDR image to generate an optimal SDR image as the reference real value (ground truth, GT), so that the resulting HDR-GT sample pairs are used to train the AI model to simulate the mapping from the HDR image to the SDR image.
However, although the HDR-GT based algorithm simulates the operation of the conventional algorithm to the greatest extent and retains much scene information, the following two problems are encountered when making wide-gamut training data based on an HDR image: 1. The difficulty of performing related post-processing based on the HDR image is extremely high; for example, the HDR image of a high dynamic scene is basically in a full black state, a standard gamma operation on an ultra-high bit-width HDR image that can reach up to 22 bits is not feasible, and the correct brightness and color effect can only be obtained through a large amount of manual post-processing. 2. When the AI model performs training and learning, standard operations such as a color correction matrix (Color Correction Matrix, CCM) are required, and these standard operations are difficult to realize based on some ultra-high dynamic HDR images, so that the requirements of wide color gamut image generation are even more difficult to meet, and the obtained training sample pairs need a large amount of later debugging to match the standard color display of the screen of the electronic equipment.
Thus, making the training sample pairs requires manually processing the HDR image to obtain the SDR image needed for training the AI model, and the processing of the HDR image is therefore complicated.
According to the image processing method and the device provided by the embodiment of the application, when the SDR image is generated based on the HDR image, the HDR image can be partitioned to carry out brightness compression, so that the obtained image is more in line with the nonlinear perception of brightness by human eyes. In addition, the color gamut compression processing can be performed on the image subjected to the brightness compression, so that the obtained image still keeps normal colors, the HDR image is not required to be manually processed, the SDR image with better image effect can be obtained, and the processing process of the HDR image is simplified.
The execution subject of the image processing method provided by the embodiment of the application can be an image processing device. The image processing apparatus may be an electronic device or a component in the electronic device, such as an integrated circuit or a chip, for example. An image processing method provided by an embodiment of the present application will be exemplarily described below using an electronic device as an example.
An embodiment of the present application provides an image processing method, and fig. 1 shows a flowchart of the image processing method provided by the embodiment of the present application, where the method may be applied to an electronic device. As shown in fig. 1, the image processing method provided by the embodiment of the present application may include the following steps 101 to 103.
Step 101, the electronic device divides the pixel points of the first image into N brightness intervals.
The N luminance intervals are divided based on N key pixel points in the first image, the N key pixel points include the pixel point with the largest luminance value in the first image, the first image may be a high dynamic range imaging (HDR) image, and N is an integer greater than 1.
In some embodiments of the present application, the first image may be an HDR image to be processed.
In some embodiments of the present application, the first image may include a variety of image contents.
Illustratively, the first image may include one or more of image content of a person, animal, scenery, item, etc. The embodiment of the present application is not particularly limited.
In some embodiments of the present application, the electronic device may determine N key pixels from the first image, determine N luminance intervals according to the N key pixels, and divide the pixels of the first image into the N luminance intervals.
In some embodiments of the present application, the N key pixel points in the first image may be key points in the first image that conform to human eye gamma perception.
For example, the N key pixel points may be N uniformly distributed key pixel points determined from the pixel points of the first image according to a distribution interval of the brightness values of the pixel points in the first image.
For example, assuming that the luminance value distribution interval of the pixel points in the first image is 0 to 240, four pixel points having luminance values of 60, 120, 180, and 240 may be determined as key pixel points, respectively. The key pixel with the luminance value of 240 is the pixel with the maximum luminance value in the first image, and the key pixels with the luminance values of 60, 120 and 180 are the key pixels with more sensitive human eye perception and more uniform luminance value distribution in the first image.
The luminance value of a pixel point may be the brightness of the pixel point, and the unit of the luminance value is candela per square meter (cd/m², which may also be referred to as nit).
It can be understood that the N key pixel points include the pixel point with the largest brightness value in the first image, so that when the first image is brightness compressed, all the pixel points of the first image can be ensured to be compressed, and omission is avoided.
In some embodiments of the present application, the upper limit of each brightness interval may correspond to the brightness value of one key pixel.
For example, assuming that N=3, the N key pixel points are a key pixel point A, a key pixel point B, and a key pixel point C, respectively, the luminance values of the pixel points in the first image range from 0 to 180, the luminance value of the key pixel point A is 60, the luminance value of the key pixel point B is 120, and the luminance value of the key pixel point C is 180. Then the electronic device may determine that brightness interval A is 0-60, brightness interval B is 61-120, and brightness interval C is 121-180.
In some embodiments of the present application, after determining the N brightness intervals, the electronic device may divide, according to the brightness values of the pixels in the first image, the pixels in the first image that match the brightness range of one of the N brightness intervals into the brightness intervals.
Illustratively, it is assumed that N=4, i.e., the luminance intervals include a luminance interval A, a luminance interval B, a luminance interval C, and a luminance interval D, where the luminance interval A is 0-60, the luminance interval B is 61-120, the luminance interval C is 121-180, and the luminance interval D is 181-240. Then, the electronic device may divide the pixel points in the first image with luminance values between 0 and 60 into the luminance interval A, divide the pixel points in the first image with luminance values between 61 and 120 into the luminance interval B, divide the pixel points in the first image with luminance values between 121 and 180 into the luminance interval C, and divide the pixel points in the first image with luminance values between 181 and 240 into the luminance interval D.
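As a purely illustrative sketch of step 101, the partitioning may be expressed as follows (Python with NumPy is used; the function name, the use of np.digitize, and the example key luminance values taken from the description above are assumptions rather than part of the application):

import numpy as np

def divide_into_luminance_intervals(luminance, key_luminances):
    # luminance:      2-D array of per-pixel luminance values of the first image
    # key_luminances: ascending luminance values of the N key pixel points; the
    #                 last entry is the largest luminance value in the first image
    # Returns an interval index map with values 0..N-1.
    key_luminances = np.asarray(sorted(key_luminances), dtype=np.float64)
    # The key luminances act as right-closed upper bounds of the N intervals,
    # e.g. 0-60, 61-120, 121-180, 181-240 for key values 60, 120, 180, 240.
    return np.digitize(luminance, key_luminances[:-1], right=True)

# Example matching the description: four key pixel points at 60, 120, 180, 240.
luminance_map = np.random.uniform(0.0, 240.0, size=(4, 6))
interval_index = divide_into_luminance_intervals(luminance_map, [60, 120, 180, 240])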
Step 102, the electronic device performs luminance compression on the pixel points in the corresponding luminance intervals based on the luminance compression parameters corresponding to each luminance interval, so as to obtain a second image.
Each brightness interval may correspond to a brightness compression parameter, and each brightness compression parameter may be used to represent a brightness compression degree of the corresponding brightness interval.
In some embodiments of the present application, the above-described luminance compression may be understood as compressing the luminance range of an image. That is, after the electronic device performs luminance compression on the pixel points in the luminance section corresponding to each luminance section based on the luminance compression parameter corresponding to each luminance section, the luminance range of the obtained second image is smaller than the luminance range of the first image.
For example, assuming that the luminance range of the first image is 0 to 200, after the electronic device performs luminance compression on the N luminance intervals of the first image, a second image having a smaller luminance range may be obtained. For example, the brightness of the second image may range from 0 to 180.
In some embodiments of the present application, the luminance compression parameter corresponding to each luminance interval may be set by a user according to requirements for different luminance intervals.
For example, if the user desires to obtain an image with brighter dark areas, the user may set the brightness compression parameter corresponding to the brightness interval with smaller brightness value to a larger value, so that the brightness interval with smaller brightness value maintains higher brightness after brightness compression. In other words, the compression degree of the luminance section having a smaller luminance value is smaller.
In some embodiments of the present application, the luminance compression parameter corresponding to each luminance interval may also be generated by the electronic device. The embodiment of the present application is not particularly limited.
In some embodiments of the present application, the luminance compression parameters corresponding to different luminance intervals may be the same or different. The embodiment of the present application is not particularly limited.
When compressing an image, it is common to compress the image directly with respect to the bit width, that is, to compress the brightness and color of the image together. As shown in fig. 2, fig. 2 shows a mapping relationship before and after an image is compressed, where the abscissa in fig. 2 is the bit width value before image compression, the ordinate is the bit width value after image compression, the straight line 21 represents the image before compression, and the curve 22 represents the image after compression. It can be seen that the maximum value of the image bit width before compression is 22 bits, and the maximum value of the image bit width after compression becomes 14 bits. That is, compression of the image is achieved. However, since the curve 22 is too simple and human perception of brightness is nonlinear, the above compression method is not suitable for some ultra-high dynamic scenes.
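For contrast with the partitioned approach introduced below, the related-art direct bit-width compression of fig. 2 may be sketched as follows; the concrete shape of the curve 22 is not disclosed, so a simple power-law curve mapping a 22-bit range onto a 14-bit range is assumed here purely for illustration:

import numpy as np

def global_bitwidth_compression(image, in_bits=22, out_bits=14, exponent=0.5):
    # Related-art style compression: one fixed curve applied to brightness and
    # color together. The power-law exponent is an assumption; fig. 2 only shows
    # that the maximum bit width drops from 22 bits to 14 bits.
    in_max = 2.0 ** in_bits - 1.0
    out_max = 2.0 ** out_bits - 1.0
    normalized = np.clip(np.asarray(image, dtype=np.float64) / in_max, 0.0, 1.0)
    return (normalized ** exponent) * out_max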
In some embodiments of the present application, in order to obtain a compressed image with better quality, the image partition may be subjected to brightness compression, so that the obtained compressed image better accords with the perception of brightness by human eyes.
In some embodiments of the present application, as shown in fig. 3 in conjunction with fig. 1, the above step 102 may include steps 102a and 102b described below.
Step 102a, the electronic device obtains a target brightness value of a first key pixel point corresponding to an upper limit of a first brightness interval based on a brightness compression parameter corresponding to the first brightness interval.
Wherein the first brightness interval is one of N brightness intervals.
In some embodiments of the present application, the upper limit of each brightness interval may be the brightness value of one key pixel.
In some embodiments of the present application, the target luminance value of the first key pixel may be a luminance value after luminance compression of the first luminance section, where the luminance value corresponds to the first key pixel.
In some embodiments of the present application, the step 102a may include the following step 102a1 or step 102a2.
In step 102a1, the electronic device uses the luminance compression parameter corresponding to the first luminance section as the target luminance value of the first key pixel when the first luminance section is the target luminance section.
The key pixel point corresponding to the upper limit of the target brightness interval is the pixel point with the maximum brightness value in the first image.
It can be understood that, since the key pixel corresponding to the upper limit of the target brightness interval is the pixel with the largest brightness value in the first image, the brightness compression degree of the target brightness interval may represent the overall brightness compression degree of the first image.
In some embodiments of the present application, the luminance compression parameter corresponding to the target luminance interval may be a fixed value. That is, the user may directly set the target brightness value of the key pixel point corresponding to the upper limit of the target brightness interval after brightness compression, so as to better control the overall brightness compression degree of the first image.
In some embodiments of the present application, the user may further set the luminance compression parameter corresponding to the target luminance interval by setting the bit width of the key pixel corresponding to the upper limit of the target luminance interval after luminance compression, so as to achieve the effect of controlling the overall luminance compression degree of the first image. The embodiment of the present application is not particularly limited.
In step 102a2, the electronic device calculates, when the first luminance section is other than the target luminance section among the N luminance sections, the target luminance value of the first key pixel corresponding to the upper limit of the first luminance section based on the first value between the number of pixels in the first luminance section and the total number of pixels in the first image and the luminance compression parameter corresponding to the first luminance section.
In some embodiments of the present application, the first value is a ratio between a number of pixels in the first brightness interval and a total number of pixels in the first image.
For example, assuming that the number of pixels in the first luminance section is 300 and the total number of pixels in the first image is 1000, the first value is 300/1000=0.3.
In some embodiments of the present application, when the first luminance section is other than the target luminance section among the N luminance sections, the electronic device may count the histogram distribution of the first image according to the luminance values of the N key pixel points in the first image, and determine the histogram ratio of the luminance value in the first luminance section in the first image and the luminance compression parameter corresponding to the first luminance section, so as to calculate the target luminance value of the key pixel point corresponding to the upper limit of the first luminance section.
It should be noted that, the histogram ratio may represent a ratio of the number of pixels having luminance values within the first luminance interval to the total number of pixels of the first image in the histogram distribution.
In some embodiments of the present application, the electronic device may calculate the target luminance value of the first key pixel corresponding to the upper limit of the first luminance interval through the following formula (1).
PHDR_A = K * percentage_0_A + M    (1)
Where A may represent the luminance value of the key pixel point A, and 0_A may represent the luminance range of the first luminance interval; PHDR_A may be the target luminance value of the first key pixel point; percentage_0_A may be the first value; K and M may be the luminance compression parameters corresponding to the first luminance interval.
In some embodiments of the present application, K and M in equation (1) above may be tunable superparameters.
Illustratively, assuming that K is 30, M is 10, the number of pixel points in the luminance interval A is 40, and the total number of pixel points in the first image is 100, then PHDR_A = 30 * 0.4 + 10 = 22. That is, the target luminance value of the key pixel point A corresponding to the upper limit of the luminance interval A is 22.
It is understood that the corresponding luminance compression parameters K and M may take different values for different luminance intervals. The embodiment of the present application is not particularly limited.
It should be noted that, for the key pixel point corresponding to the upper limit of each brightness interval, the target brightness value corresponding to the key pixel point may be obtained through the above steps. To avoid repetition, no further description is provided here.
Illustratively, take N=4, where the N key pixel points include a key pixel point A having a luminance value of 60, a key pixel point B having a luminance value of 120, a key pixel point C having a luminance value of 180, and a key pixel point D having a luminance value of 240, and the first image includes 1000 pixel points. For the luminance interval A with the luminance range of 0-60, the electronic device determines that the number of pixel points in the luminance interval A is 300; for the luminance interval B with the luminance range of 61-120, the electronic device determines that the number of pixel points in the luminance interval B is 500; for the luminance interval C with the luminance range of 121-180, the electronic device determines that the number of pixel points in the luminance interval C is 300; for the luminance interval D with the luminance range of 181-240, the electronic device determines that the number of pixel points in the luminance interval D is 100. In this way, the electronic device may determine percentage_0_60 = 300/1000 = 0.3, percentage_61_120 = 500/1000 = 0.5, percentage_121_180 = 300/1000 = 0.3, and percentage_181_240 = 100/1000 = 0.1, so that the electronic device may calculate the target luminance value of the key pixel point corresponding to the interval upper limit of each luminance interval through the above formula (1), i.e.: PHDR_A = K_A * 0.3 + M_A, PHDR_B = K_B * 0.5 + M_B, PHDR_C = K_C * 0.3 + M_C, PHDR_D = K_D * 0.1 + M_D.
In this way, for the brightness interval in which the key pixel point corresponding to the upper limit of the interval is the pixel point with the largest brightness value in the first image, the electronic device can directly determine the target brightness value corresponding to the key pixel point according to the brightness compression parameter, so that the brightness compression degree of the first image can be better controlled; for the brightness interval that the key pixel point corresponding to the interval upper limit is not the pixel point with the maximum brightness value in the first image, the electronic device can calculate the target brightness value corresponding to the key pixel point according to the ratio between the number of the pixel points in the first brightness interval and the total number of the pixel points in the first image, so that the brightness compression degree of different brightness intervals can be controlled more flexibly. In this way, the effect of luminance compression of the image is improved.
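A minimal sketch of this step is given below, applying formula (1) per interval and handling the target luminance interval as in step 102a1; the per-interval parameters K and M and the fixed value for the target interval are illustrative inputs rather than values prescribed by the application:

import numpy as np

def key_pixel_target_luminances(interval_index, n_intervals, K, M, target_interval_value):
    # interval_index:        per-pixel interval map from step 101 (values 0..N-1,
    #                        ordered from darkest to brightest interval)
    # K, M:                  tunable luminance compression parameters per interval
    # target_interval_value: fixed target luminance for the key pixel point of the
    #                        target interval (the one containing the largest
    #                        luminance value), as in step 102a1
    total = interval_index.size
    targets = []
    for a in range(n_intervals):
        if a == n_intervals - 1:
            # Target luminance interval: its compression parameter is used directly.
            targets.append(target_interval_value)
        else:
            # Formula (1): PHDR_A = K * percentage_0_A + M.
            percentage = np.count_nonzero(interval_index == a) / total
            targets.append(K[a] * percentage + M[a])
    return targets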
Step 102b, the electronic device performs brightness compression on the pixel points in the first brightness interval based on the target brightness value of the first key pixel point, and obtains a second image after completing brightness compression on the pixel points in each brightness interval.
In some embodiments of the present application, the electronic device may perform luminance compression on other pixels in the first luminance interval based on the target luminance value of the first key pixel, so as to ensure that after luminance compression, the pixels in the first luminance interval may still retain color and brightness rules closer to the original first image.
Note that, for each luminance section, the luminance compression may be performed on the pixel points in each luminance section through the above steps. To avoid repetition, no further description is provided here.
Therefore, the target brightness value of the key pixel point can be determined first, and then brightness compression is carried out on other pixel points in the brightness interval, so that brightness compression can be carried out on the first image partition, and the obtained second image has a good brightness compression effect.
In some embodiments of the present application, the "performing luminance compression on the pixel points in the first luminance interval based on the target luminance value of the first key pixel point" in the step 102b may include the following steps 102b1 to 102b4.
Step 102b1, the electronic device obtains a second value.
The second value may be a difference between an original luminance value of the first key pixel and a target luminance value of the first key pixel.
In some embodiments of the present application, the original luminance value of the first key pixel may be a luminance value of the first key pixel in the first image.
In some embodiments of the present application, the target luminance value of the first key pixel may represent a luminance value of the first key pixel in the second image. I.e. the desired luminance value in the luminance compressed image for the first key pixel.
In some embodiments of the present application, the difference between the original luminance value of the first key pixel and the target luminance value of the first key pixel may represent a difference between changes in luminance values of the first key pixel before and after luminance compression.
Step 102b2, the electronic device reduces the brightness value of each pixel point in the first brightness interval by the second value, and then calculates the average brightness value of the reduced pixel points in the first brightness interval.
In some embodiments of the present application, the electronic device may change the luminance value of each pixel point in the first luminance interval based on the difference value of the change in luminance value of the first key pixel point, so that the average luminance value of the changed pixel points in the first luminance interval can then be calculated.
Illustratively, assume that the original luminance of the first key pixel is 35 and the target luminance value of the first key pixel is 30. I.e. the second value is 35-30=5. The electronic device may decrease the luminance value of each pixel point in the first luminance interval having a luminance range of 0-35 by 5.
Illustratively, assume that the original luminance of the first key pixel is 35 and the target luminance value of the first key pixel is 38, i.e. the second value is 35 - 38 = -3. The electronic device may decrease the luminance value of each pixel point in the first luminance interval having a luminance range of 0-35 by -3, i.e., increase it by 3.
In the process of decreasing the luminance value of each pixel in the first luminance section, if the original luminance value of one pixel is less than or equal to the second value, the luminance value of the one pixel may be directly adjusted to 0.
In some embodiments of the present application, after the brightness value of each pixel in the first brightness interval is reduced, the electronic device may calculate the reduced average brightness value of the pixel in the first brightness interval to calculate the target brightness value of the first pixel.
Step 102b3, the electronic device calculates the target luminance value of the first pixel point based on the average luminance value, the original luminance value of the first pixel point, and the luminance distribution factor corresponding to the first luminance interval.
The first pixel point may be a pixel point in the first brightness interval except for the first key pixel point.
In some embodiments of the present application, the original luminance value of the first pixel point may be a luminance value of the first pixel point in the first image.
In some embodiments of the present application, the luminance distribution factor corresponding to the first luminance interval may represent a uniformity degree of distribution of pixels with different luminance values in the first luminance interval.
In some embodiments of the present application, the luminance distribution factor corresponding to the first luminance interval is represented by a normalized variance of a histogram distribution of pixel points in the first luminance interval after the luminance value is reduced.
In some embodiments of the present application, the electronic device may calculate the target luminance value of the first pixel point through the following formula (2).
PHDR_ij = HDR_ij + Beta_0_PHDR_A * (HDR_ij - Average_0_PHDR_A)    (2)
Where i and j represent the position coordinates of the first pixel point in the first image; PHDR_ij may represent the target luminance value of the first pixel point; HDR_ij may represent the original luminance value of the first pixel point; Beta_0_PHDR_A may represent the normalized variance of the histogram distribution of the pixel points in the first luminance interval after the luminance values are reduced; and Average_0_PHDR_A may represent the average luminance value.
It can be understood that the more uniform the distribution of pixel points with different brightness values in the first brightness interval of the first image, the larger the normalized variance of the histogram distribution of the pixel points in the first brightness interval, and the smaller the contrast of the first image. In general, after processing the first image, the user expects to obtain an image with a larger contrast. Therefore, when the contrast of the first brightness interval in the first image is small, adding the normalized variance of the histogram distribution in formula (2) makes the contrast of the first brightness interval in the second image calculated by formula (2) larger, which can improve the brightness compression effect on the first image.
It should be noted that, for each pixel point in other brightness intervals in the first image, the target brightness value corresponding to the pixel point may be calculated through the above steps. To avoid repetition, no further description is provided here.
Step 102b4, the electronic device adjusts the original brightness value of the second pixel point in the first brightness interval to the target brightness value corresponding to the second pixel point.
The second pixel point is one pixel point in the first brightness interval.
In some embodiments of the present application, after calculating the target luminance values of all the pixels in the first interval, the electronic device may adjust the original luminance value of each pixel except the first key pixel in the first luminance interval to its corresponding target luminance value, so as to obtain a first luminance interval in which luminance compression is completed.
For example, assume that N=5, and the N key pixel points include a key pixel point A, a key pixel point B, a key pixel point C, a key pixel point D, and a key pixel point E, where the luminance interval A corresponding to the key pixel point A includes 150 pixel points, the luminance interval B corresponding to the key pixel point B includes 350 pixel points, the luminance interval C corresponding to the key pixel point C includes 200 pixel points, the luminance interval D corresponding to the key pixel point D includes 100 pixel points, and the luminance interval E corresponding to the key pixel point E includes 200 pixel points.
Firstly, the electronic equipment can calculate a target brightness value of a key pixel point A based on a brightness compression parameter corresponding to the brightness interval A and a ratio of 0.15 between the number of pixel points in the brightness interval A and the total number of pixel points in the first image; calculating a target brightness value of the key pixel point B based on a brightness compression parameter corresponding to the brightness interval B and a ratio of 0.35 between the number of the pixel points in the brightness interval B and the total number of the pixel points in the first image; calculating a target brightness value of the key pixel point C based on a brightness compression parameter corresponding to the brightness interval C and a ratio of 0.2 between the number of the pixel points in the brightness interval C and the total number of the pixel points in the first image; calculating a target brightness value of the key pixel point D based on a brightness compression parameter corresponding to the brightness interval D and a ratio of 0.1 between the number of the pixel points in the brightness interval D and the total number of the pixel points in the first image; and taking the brightness compression parameter corresponding to the brightness interval E as a target brightness value of the key pixel point E.
Then, the electronic device may determine the difference between the original luminance value of each of the four key pixel points and its target luminance value, and reduce the luminance value of each pixel point in the luminance interval corresponding to each key pixel point by the corresponding difference; as shown in fig. 4, a broken line 41 is obtained from the reduced luminance values of the pixel points. Then, the electronic device may calculate the average luminance value of the reduced pixel points in each luminance interval, and calculate the target luminance value of each pixel point in each luminance interval by the above formula (2); as shown in fig. 5, a curve 51 drawn from the luminance values of the pixel points in the second image may be obtained. It can be seen that the luminance compression curve shown by the curve 51 is more consistent with the nonlinear perception curve of luminance by human eyes, so that the luminance compression effect of the obtained second image is better.
Therefore, the electronic equipment can calculate the target brightness value of the pixel point based on the average brightness value of the pixel point in the brightness interval in the image after the reduction, the original brightness value of the pixel point in the brightness interval and the brightness distribution factor corresponding to the brightness interval, so that an image with normal overall brightness and more details is reserved, the target brightness value after brightness compression is more in line with the nonlinear perception of human eyes on the brightness, and the brightness compression effect of the obtained second image is better.
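A sketch of steps 102b1 to 102b4 for a single luminance interval follows; the histogram bin count, the exact normalization of its variance, and the decision to apply formula (2) to the reduced luminance values are assumptions made where the description leaves the choice open:

import numpy as np

def compress_interval_luminance(hdr_luminance, interval_mask, key_original, key_target, n_bins=64):
    # hdr_luminance: per-pixel original luminance values of the first image
    # interval_mask: boolean map selecting the pixel points of this luminance interval
    # key_original:  original luminance value of the interval's key pixel point
    # key_target:    target luminance value of that key pixel point (formula (1))
    out = np.asarray(hdr_luminance, dtype=np.float64).copy()
    values = out[interval_mask]

    # Steps 102b1/102b2: second value, reduced luminance values (clamped at 0),
    # and the average of the reduced values.
    second_value = key_original - key_target
    reduced = np.maximum(values - second_value, 0.0)
    average = reduced.mean()

    # Luminance distribution factor Beta: normalized variance of the histogram
    # distribution of the reduced luminance values (normalization is assumed here).
    hist, _ = np.histogram(reduced, bins=n_bins)
    hist = hist / max(hist.sum(), 1)
    beta = hist.var() / (hist.mean() ** 2 + 1e-12)

    # Step 102b3: formula (2), PHDR_ij = HDR_ij + Beta * (HDR_ij - Average); it is
    # applied here to the reduced luminance values, which is one possible reading.
    target = reduced + beta * (reduced - average)

    # Step 102b4: write the target luminance values back for this interval.
    out[interval_mask] = target
    return out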
And 103, the electronic equipment performs color gamut compression processing on the second image to obtain a third image.
In some embodiments of the present application, after the first image is subjected to brightness compression to obtain the second image, a color overflow phenomenon may exist in the second image. As shown in fig. 6, the boundary 61 may represent the color gamut boundary corresponding to the second image, and after the first image is subjected to brightness compression, some pixel points in the second image fall outside the color gamut boundary 61.
Therefore, the electronic device can perform color gamut compression processing on the second image, compress the pixel points falling outside the color gamut boundary back into the color gamut boundary, and obtain a processed third image.
In some embodiments of the present application, the third image may be an image similar to the SDR image, which has the advantages of easy post-processing of the SDR image, standardized color operation, and the like, but is obtained without complicated manual processing and adaptive adjustment operations.
The embodiment of the application provides an image processing method, which can perform rapid brightness compression and color gamut compression processing on an HDR image to obtain a required third image when generating the SDR image based on the HDR image, thereby saving a great deal of labor cost and material resource cost and simplifying the processing process of the HDR image.
In some embodiments of the present application, as shown in fig. 7 in conjunction with fig. 1, the step 103 may include the following steps 103a and 103b.
Step 103a, the electronic device obtains a brightness compression ratio between the second image and the first image.
Step 103b, the electronic device performs color gamut compression processing on the third pixel point in the second image based on the brightness compression ratio, so as to obtain a third image.
The third pixel point may be a pixel point in the second image that is outside the color gamut corresponding to the second image.
In some embodiments of the present application, the electronic device may use a ratio of a bit width value of the second image to a bit width value of the first image as a luminance compression ratio between the second image and the first image.
In some embodiments of the present application, the electronic device may compress, according to the brightness compression ratio, pixels in the second image that are outside the color gamut corresponding to the second image back into the color gamut corresponding to the second image, so as to obtain the third image.
Therefore, the electronic equipment can compress the pixel points in the second image, which are outside the color gamut corresponding to the second image, back into the color gamut according to the brightness compression ratio between the second image and the first image, so that the problems of color fault, brightness inversion, color overflow and the like of the obtained image can be avoided.
In some embodiments of the present application, "performing the color gamut compression process on the third pixel point in the second image based on the luminance compression ratio" in the above step 103b may include the following steps 103b1 and 103b2.
Step 103b1, the electronic device calculates a target color value of the third pixel point based on the luminance compression ratio and the distance between the third pixel point in the second image and the gamut boundary of the color gamut corresponding to the second image.
Step 103b2, the electronic device adjusts the original color value of the third pixel point to the target color value of the third pixel point.
In some embodiments of the present application, the electronic device may calculate the target color value of the third pixel point through the following formula (3).
Wherein PCHDR_ij may represent the target color value of the third pixel point; CB is the standard color gamut boundary (Color gamut Boundary, CB); Cratio may represent the luminance compression ratio between the second image and the first image; and PHDR_ij^D may represent the vertical distance from the third pixel point to CB.
In some embodiments of the present application, the electronic device may traverse all the pixel points in the second image through the above formula (3) to obtain the third image after color gamut compression.
In some embodiments of the present application, after the electronic device adjusts the original color value of the third pixel to the target color value of the third pixel, as shown in fig. 8, the obtained third image does not have any more pixels outside the corresponding color gamut. That is, pixels that would otherwise fall outside the color gamut are compressed back into the gamut.
Therefore, the electronic device can compress the pixel points in the second image, which are outside the color gamut corresponding to the second image, back into the color gamut according to the brightness compression ratio between the second image and the first image, so that the obtained third image has better color effect.
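Formula (3) itself is not reproduced in this text, so the following is only an illustrative stand-in consistent with the qualitative description: a pixel point outside the color gamut is moved back relative to the gamut boundary CB by an amount that depends on the luminance compression ratio Cratio and the pixel's distance to CB. Both the direction and the scaling chosen here are assumptions:

import numpy as np

def gamut_compress_pixel(color, boundary_point, c_ratio):
    # color:          color value of a third pixel point lying outside the gamut (PHDR_ij)
    # boundary_point: the corresponding point on the standard color gamut boundary CB
    # c_ratio:        luminance compression ratio between the second and first images,
    #                 e.g. the ratio of their bit-width values
    color = np.asarray(color, dtype=np.float64)
    boundary_point = np.asarray(boundary_point, dtype=np.float64)
    excess = color - boundary_point  # component of the distance beyond CB (PHDR_ij^D)
    # Assumed form of the compression: the pixel is placed back inside CB by a
    # margin proportional to c_ratio and the excess distance.
    return boundary_point - c_ratio * excess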
In some embodiments of the present application, after the step 103, the image processing method provided in the embodiment of the present application may further include the following steps 104 to 106.
Step 104, the electronic device composes the first image and the third image into a group of sample pairs.
Step 105, the electronic device trains a first neural network model based on a set of sample pairs.
Step 106, the electronic device inputs the first HDR image into the trained first neural network model, and outputs an SDR image corresponding to the first HDR image.
It will be appreciated that since standard dynamic range (Standard Dynamic Range, SDR) displays are installed in some electronic devices and an SDR display cannot normally display an HDR image, an SDR image suitable for an SDR display may be generated by tone mapping the HDR image. In particular, when tone mapping an HDR image, the mapping may typically be performed by an artificial intelligence (Artificial Intelligence, AI) model. The AI model needs to be trained on a large number of sample pairs containing an HDR image and an SDR image, so that tone mapping can be well performed on the HDR image and the SDR image can be output.
However, the training sample pair described above requires manual processing of the HDR image to obtain the SDR image required to train the AI model. Thus, the processing procedure of the HDR image is complicated.
In some embodiments of the application, the electronic device may use the first image and the resulting third image as a set of sample pairs for training the neural network model, so that the trained neural network model can output SDR images with good processing effects.
In some embodiments of the present application, after obtaining the third image, the electronic device may further perform various training-sample post-processing operations, such as adaptive contrast enhancement, on the third image to obtain sample pairs with different processing effects, so that the neural network model may learn multiple different processing effects.
In this way, the electronic device can obtain the third image by performing fast luminance compression and color gamut compression processing on the original HDR image, and then post-process the third image to obtain HDR:PCHDR-GT training sample pairs. This greatly improves the efficiency and effect of producing wide color gamut training sample pairs, and ensures that the image output by the algorithm is consistent with the GT when displayed in a wide color gamut.
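As a final illustrative sketch covering steps 104 to 106, the sample pairs may be fed to any supervised model; the dataset class, the generic model interface, and the L1 loss below are assumptions, since the application does not fix the first neural network model or its training objective:

import torch
from torch.utils.data import Dataset

class HdrGtPairDataset(Dataset):
    # Each item is one sample pair: (first image = HDR input, third image = GT).
    def __init__(self, hdr_images, gt_images):
        self.hdr_images = hdr_images
        self.gt_images = gt_images

    def __len__(self):
        return len(self.hdr_images)

    def __getitem__(self, index):
        return self.hdr_images[index], self.gt_images[index]

def train_step(model, batch, optimizer):
    # One supervised update of the first neural network model on a batch of sample pairs.
    hdr, gt = batch
    optimizer.zero_grad()
    loss = torch.nn.functional.l1_loss(model(hdr), gt)
    loss.backward()
    optimizer.step()
    return loss.item()

# Inference (step 106): sdr = trained_model(first_hdr_image)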
The above embodiments of the method, or various possible implementation manners in the embodiments of the method, may be executed separately, or may be executed in any two or more combinations with each other, and may specifically be determined according to actual use requirements, which is not limited by the embodiments of the present application.
According to the image processing method provided by the embodiment of the application, the execution subject can be an image processing device. In the embodiment of the present application, an image processing apparatus is described by taking an example of an image processing method performed by the image processing apparatus.
Fig. 9 shows a schematic diagram of one possible configuration of an image processing apparatus involved in an embodiment of the present application. As shown in fig. 9, the image processing apparatus 90 may include: a processing module 91.
The processing module 91 is configured to divide pixel points of the first image into N luminance intervals, where the N luminance intervals are divided based on N key pixel points in the first image, the N key pixel points include the pixel point with the largest luminance value in the first image, and the first image is a high dynamic range imaging (HDR) image; the processing module 91 is further configured to perform luminance compression on the pixel points in the corresponding luminance intervals based on the luminance compression parameter corresponding to each luminance interval to obtain a second image, where each luminance interval corresponds to one luminance compression parameter, and each luminance compression parameter is used for representing the luminance compression degree of the corresponding luminance interval; and the processing module 91 is further configured to perform color gamut compression processing on the second image to obtain a third image, where N is an integer greater than 1.
In a possible implementation manner, the processing module 91 is further configured to perform a color gamut compression process on the second image to obtain a third image, and then combine the first image and the third image into a set of sample pairs; and training a first neural network model based on a set of sample pairs; and the method is also used for inputting the first HDR image into the trained first neural network model and outputting an SDR image corresponding to the first HDR image.
In one possible implementation manner, an upper limit of each brightness interval corresponds to a brightness value of one key pixel point;
the processing module 91 is specifically configured to:
Acquiring a target brightness value of a first key pixel point corresponding to the upper limit of a first brightness interval based on a brightness compression parameter corresponding to the first brightness interval;
performing brightness compression on the pixel points in the first brightness interval based on the target brightness value of the first key pixel points, and obtaining a second image after completing brightness compression on the pixel points in each brightness interval;
the first brightness interval is one of N brightness intervals.
In one possible implementation manner, the processing module 91 is specifically configured to:
Taking the brightness compression parameter corresponding to the first brightness interval as the target brightness value of the first key pixel point under the condition that the first brightness interval is the target brightness interval;
when the first luminance section is other than the target luminance section among the N luminance sections, calculating a target luminance value of a first key pixel point corresponding to an upper limit of the first luminance section based on a first value between the number of pixel points in the first luminance section and the total number of pixel points in the first image and a luminance compression parameter corresponding to the first luminance section;
the key pixel point corresponding to the upper limit of the target brightness interval is the pixel point with the maximum brightness value in the first image.
In one possible implementation manner, the processing module 91 is specifically configured to:
Acquiring a second value, wherein the second value is a difference value between an original brightness value of the first key pixel point and a target brightness value of the first key pixel point;
After the brightness value of each pixel point in the first brightness interval is reduced by a second value, calculating the average brightness value of the reduced pixel points in the first brightness interval;
Calculating a target brightness value of a first pixel point based on the average brightness value, an original brightness value of the first pixel point and a brightness distribution factor corresponding to a first brightness interval, wherein the first pixel point is one pixel point except for a first key pixel point in the first brightness interval;
and adjusting the original brightness value of the second pixel point in the first brightness interval to be a target brightness value corresponding to the second pixel point, wherein the second pixel point is one pixel point in the first brightness interval.
In one possible implementation manner, the processing module 91 is specifically configured to:
acquiring the brightness compression ratio between the second image and the first image;
Performing color gamut compression processing on a third pixel point in the second image based on the brightness compression ratio to obtain a third image;
The third pixel point is a pixel point in the second image, which is outside the color gamut corresponding to the second image.
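For illustration, the Python sketch below shows one plausible reading of these two steps: the brightness compression ratio is taken as the ratio of mean luminance after compression to mean luminance before it, and a pixel counts as "outside the color gamut corresponding to the second image" when any normalized channel leaves [0, 1]. Both definitions are assumptions made for the sketch.

    import numpy as np

    def luminance_compression_ratio(first_luma, second_luma):
        """Assumed definition: mean luminance of the second image divided by the
        mean luminance of the first image."""
        return float(second_luma.mean() / first_luma.mean())

    def out_of_gamut_mask(rgb):
        """Assumed gamut test on an HxWx3 array of normalized color values."""
        return np.any((rgb < 0.0) | (rgb > 1.0), axis=-1)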
In one possible implementation manner, the processing module 91 is specifically configured to:
Calculating a target color value of the third pixel point in the second image based on the brightness compression ratio and the distance between the third pixel point in the second image and the gamut boundary of the color gamut corresponding to the second image;
And adjusting the original color value of the third pixel point to be the target color value of the third pixel point.
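A minimal sketch of the target-color calculation follows, under the assumption that the out-of-gamut color is pulled toward its nearest point on the gamut boundary and that the brightness compression ratio controls how much of the overshoot is kept; the exact weighting is not given here, so this is illustrative only.

    import numpy as np

    def target_color_value(color, boundary_point, ratio):
        """color: out-of-gamut color of the third pixel point. boundary_point:
        nearest point on the gamut boundary of the color gamut corresponding to
        the second image (assumed known). ratio: brightness compression ratio
        between the second image and the first image."""
        color = np.asarray(color, dtype=np.float64)
        boundary_point = np.asarray(boundary_point, dtype=np.float64)
        excess = color - boundary_point                # vector from boundary to pixel
        if np.linalg.norm(excess) == 0.0:
            return color                               # already on the boundary
        # Assumed rule: the distance to the boundary shrinks by the ratio.
        return boundary_point + excess * ratio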
The embodiment of the application provides an image processing device, which can perform rapid brightness compression and color gamut compression processing on an HDR image to obtain a required third image when generating the SDR image based on the HDR image, thereby saving a great deal of labor cost and material resource cost and simplifying the processing process of the HDR image.
The image processing device in the embodiments of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook, or a personal digital assistant (Personal Digital Assistant, PDA), and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (Personal Computer, PC), a television (Television, TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing device provided in the embodiments of the present application can implement each process implemented by the foregoing image processing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
Optionally, as shown in fig. 10, an embodiment of the present application further provides an electronic device 1000, including a processor 1001 and a memory 1002, where the memory 1002 stores a program or an instruction that can be run on the processor 1001. When executed by the processor 1001, the program or instruction implements each step of the foregoing image processing method embodiments and achieves the same technical effects; to avoid repetition, details are not described here again.
The electronic devices in the embodiments of the present application include the above-described mobile electronic devices and non-mobile electronic devices.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, and processor 1110.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (such as a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than illustrated, may combine certain components, or may have a different arrangement of components, which are not described in detail here.
The processor 1110 is configured to divide the pixel points of the first image into N brightness intervals, where the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points include the pixel point with the largest brightness value in the first image, and the first image is a high dynamic range imaging HDR image; is further configured to respectively perform brightness compression on the pixel points in the corresponding brightness interval based on the brightness compression parameter corresponding to each brightness interval to obtain a second image, where each brightness interval corresponds to one brightness compression parameter and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval; and is further configured to perform color gamut compression processing on the second image to obtain a third image, where N is an integer greater than 1.
In a possible implementation manner, the processor 1110 is further configured to, after performing color gamut compression processing on the second image to obtain the third image, form the first image and the third image into a set of sample pairs; train a first neural network model based on the set of sample pairs; and input a first HDR image into the trained first neural network model and output an SDR image corresponding to the first HDR image.
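For completeness, a short Python sketch of the inference step described above (input the first HDR image into the trained model and output the corresponding SDR image); the tensor layout and the final clamp to [0, 1] are assumptions of the sketch, not details from the application.

    import torch

    def hdr_to_sdr(model, hdr_image):
        """hdr_image: (3, H, W) float tensor holding the first HDR image."""
        model.eval()
        with torch.no_grad():
            sdr = model(hdr_image.unsqueeze(0)).squeeze(0)
        return sdr.clamp(0.0, 1.0)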
In one possible implementation manner, an upper limit of each brightness interval corresponds to a brightness value of one key pixel point;
The processor 1110 is specifically configured to:
Acquiring a target brightness value of a first key pixel point corresponding to the upper limit of a first brightness interval based on a brightness compression parameter corresponding to the first brightness interval;
performing brightness compression on the pixel points in the first brightness interval based on the target brightness value of the first key pixel point, and obtaining the second image after completing brightness compression on the pixel points in each brightness interval;
the first brightness interval is one of N brightness intervals.
In one possible implementation, the processor 1110 is specifically configured to:
Taking the brightness compression parameter corresponding to the first brightness interval as the target brightness value of the first key pixel point under the condition that the first brightness interval is the target brightness interval;
when the first brightness interval is a brightness interval other than the target brightness interval among the N brightness intervals, calculating the target brightness value of the first key pixel point corresponding to the upper limit of the first brightness interval based on a first value determined from the number of pixel points in the first brightness interval and the total number of pixel points in the first image, and on the brightness compression parameter corresponding to the first brightness interval;
the key pixel point corresponding to the upper limit of the target brightness interval is the pixel point with the maximum brightness value in the first image.
In one possible implementation, the processor 1110 is specifically configured to:
Acquiring a second value, wherein the second value is a difference value between an original brightness value of the first key pixel point and a target brightness value of the first key pixel point;
After the brightness value of each pixel point in the first brightness interval is reduced by the second value, calculating the average brightness value of the reduced pixel points in the first brightness interval;
Calculating a target brightness value of a first pixel point based on the average brightness value, an original brightness value of the first pixel point and a brightness distribution factor corresponding to a first brightness interval, wherein the first pixel point is one pixel point except for a first key pixel point in the first brightness interval;
and adjusting the original brightness value of the second pixel point in the first brightness interval to be a target brightness value corresponding to the second pixel point, wherein the second pixel point is one pixel point in the first brightness interval.
In one possible implementation, the processor 1110 is specifically configured to:
acquiring the brightness compression ratio between the second image and the first image;
Performing color gamut compression processing on a third pixel point in the second image based on the brightness compression ratio to obtain a third image;
The third pixel point is a pixel point in the second image, which is outside the color gamut corresponding to the second image.
In one possible implementation, the processor 1110 is specifically configured to:
Calculating a target color value of the third pixel point in the second image based on the brightness compression ratio and the distance between the third pixel point in the second image and the gamut boundary of the color gamut corresponding to the second image;
And adjusting the original color value of the third pixel point to be the target color value of the third pixel point.
The embodiment of the application provides electronic equipment, which can perform rapid brightness compression and color gamut compression processing on an HDR image to obtain a required third image when generating the SDR image based on the HDR image, so that a great amount of labor cost and material resource cost can be saved, and the processing process of the HDR image is simplified.
It should be appreciated that, in embodiments of the present application, the input unit 1104 may include a graphics processor (Graphics Processing Unit, GPU) 11041 and a microphone 11042, and the graphics processor 11041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. The touch panel 11071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. The other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1109 may include a volatile memory or a nonvolatile memory, or the memory 1109 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchronous link dynamic random access memory (Synch Link DRAM, SLDRAM), or a direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1110 may include one or more processing units; optionally, the processor 1110 integrates an application processor and a modem processor, where the application processor mainly processes operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, mainly processes wireless communication signals. It will be appreciated that the modem processor may also not be integrated into the processor 1110.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory ROM, a random access memory RAM, a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described image processing method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and may certainly also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the part of the technical solution of the present application that in essence contributes to the prior art may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), which includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive; those of ordinary skill in the art can make many other forms without departing from the spirit of the present application and the scope of protection of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image processing method, the method comprising:
Dividing a pixel point of a first image into N brightness intervals, wherein the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points comprise pixel points with the largest brightness values in the first image, and the first image is a high dynamic range imaging HDR image;
Respectively carrying out brightness compression on pixel points in the corresponding brightness intervals based on the brightness compression parameters corresponding to each brightness interval to obtain a second image, wherein each brightness interval corresponds to one brightness compression parameter, and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval;
and performing color gamut compression processing on the second image to obtain a third image, wherein N is an integer greater than 1.
2. The method of claim 1, wherein after performing the color gamut compression process on the second image to obtain a third image, the method further comprises:
forming a set of pairs of samples from the first image and the third image;
Training a first neural network model based on the set of sample pairs;
and inputting a first HDR image into the trained first neural network model, and outputting an SDR image corresponding to the first HDR image.
3. The method of claim 1, wherein an upper limit of each of the brightness intervals corresponds to a brightness value of one key pixel;
And the respectively carrying out brightness compression on pixel points in the corresponding brightness intervals based on the brightness compression parameters corresponding to each brightness interval to obtain a second image comprises the following steps:
Acquiring a target brightness value of a first key pixel point corresponding to the upper limit of the first brightness interval based on the brightness compression parameter corresponding to the first brightness interval;
performing brightness compression on the pixel points in the first brightness interval based on the target brightness value of the first key pixel point, and obtaining the second image after completing brightness compression on the pixel points in each brightness interval;
Wherein the first brightness interval is one of the N brightness intervals.
4. A method according to claim 2 or 3, wherein said performing luminance compression on pixels within said first luminance interval based on the target luminance value of said first key pixel comprises:
acquiring a second value, wherein the second value is a difference value between an original brightness value of the first key pixel point and a target brightness value of the first key pixel point;
reducing the brightness value of each pixel point in the first brightness interval by the second value, and then calculating the average brightness value of the reduced pixel points in the first brightness interval;
calculating a target brightness value of a first pixel point based on the average brightness value, an original brightness value of the first pixel point and a brightness distribution factor corresponding to the first brightness interval, wherein the first pixel point is one pixel point except for the first key pixel point in the first brightness interval;
And adjusting the original brightness value of a second pixel point in the first brightness interval to be a target brightness value corresponding to the second pixel point, wherein the second pixel point is one pixel point in the first brightness interval.
5. The method of claim 1, wherein performing the color gamut compression on the second image to obtain a third image comprises:
acquiring a brightness compression ratio between the second image and the first image;
performing color gamut compression processing on a third pixel point in the second image based on the brightness compression ratio to obtain the third image;
The third pixel point is a pixel point in the second image, which is outside the color gamut corresponding to the second image.
6. An image processing apparatus, characterized in that the apparatus comprises: a processing module;
The processing module is configured to divide pixel points of a first image into N brightness intervals, wherein the N brightness intervals are divided based on N key pixel points in the first image, the N key pixel points comprise the pixel point with the largest brightness value in the first image, and the first image is a high dynamic range imaging HDR image;
the processing module is further configured to respectively perform brightness compression on the pixel points in the corresponding brightness interval based on the brightness compression parameter corresponding to each brightness interval to obtain a second image, wherein each brightness interval corresponds to one brightness compression parameter, and each brightness compression parameter is used for representing the brightness compression degree of the corresponding brightness interval;
The processing module is further configured to perform color gamut compression processing on the second image to obtain a third image, where N is an integer greater than 1.
7. The apparatus of claim 6, wherein the processing module is further configured to, after performing a color gamut compression process on the second image to obtain a third image, combine the first image and the third image into a set of sample pairs;
The processing module is further configured to train a first neural network model based on the set of sample pairs;
The processing module is further configured to input a first HDR image into the trained first neural network model, and output an SDR image corresponding to the first HDR image.
8. The apparatus of claim 6, wherein an upper limit of each of the brightness intervals corresponds to a brightness value of one key pixel;
the processing module is specifically configured to:
Acquiring a target brightness value of a first key pixel point corresponding to the upper limit of the first brightness interval based on the brightness compression parameter corresponding to the first brightness interval;
performing brightness compression on the pixel points in the first brightness interval based on the target brightness value of the first key pixel point, and obtaining the second image after completing brightness compression on the pixel points in each brightness interval;
Wherein the first brightness interval is one of the N brightness intervals.
9. The apparatus according to claim 7 or 8, wherein the processing module is specifically configured to:
acquiring a second value, wherein the second value is a difference value between an original brightness value of the first key pixel point and a target brightness value of the first key pixel point;
reducing the brightness value of each pixel point in the first brightness interval by the second value, and then calculating the average brightness value of the reduced pixel points in the first brightness interval;
calculating a target brightness value of a first pixel point based on the average brightness value, an original brightness value of the first pixel point and a brightness distribution factor corresponding to the first brightness interval, wherein the first pixel point is one pixel point except for the first key pixel point in the first brightness interval;
And adjusting the original brightness value of a second pixel point in the first brightness interval to be a target brightness value corresponding to the second pixel point, wherein the second pixel point is one pixel point in the first brightness interval.
10. The apparatus of claim 6, wherein the processing module is specifically configured to:
acquiring a brightness compression ratio between the second image and the first image;
performing color gamut compression processing on a third pixel point in the second image based on the brightness compression ratio to obtain the third image;
The third pixel point is a pixel point in the second image, which is outside the color gamut corresponding to the second image.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410322240.7A CN118138903A (en) 2024-03-20 2024-03-20 Image processing method and device

Publications (1)

Publication Number Publication Date
CN118138903A (en) 2024-06-04

Family

ID=91240370

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination