Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides an OLED screen sub-pixel brightness extraction method based on an imaging luminance meter. By applying an iterative method in the shooting, drawing, and modeling processes, the method improves the accuracy of shooting and drawing, avoids the generation of moire fringes, and provides a foundation for the final DeMura implementation.
The invention provides an OLED screen sub-pixel brightness extraction method based on an imaging brightness meter, which comprises the following steps of:
S1: adjusting the focal length, position, and exposure time of an imaging luminance meter, and shooting a picture output by a display screen to be tested, so that the maximum brightness value obtained by the imaging luminance meter is within a preset brightness range;
S2: obtaining a spatial sampling magnification K from the image shot in S1;
S3: according to the spatial sampling magnification K calculated in S2, dividing the image shot in S1 into a plurality of sub-pixel clusters, each sub-pixel cluster corresponding to one sub-pixel of the display screen to be tested, and performing surface fitting on one or several sub-pixel clusters with a two-dimensional diffusion model to obtain a diffusion coefficient or its average value;
the sub-pixels are optical pattern units generated by the light-emitting units of the display screen to be tested; because of chromatic aberration and other phenomena, each forms a circle of confusion in the shooting equipment;
the sub-pixel clusters represent a sampling point set obtained by a shooting device collecting an optical pattern unit, and each sub-pixel cluster comprises a plurality of sampling points;
the two-dimensional diffusion model is a preset two-dimensional distribution model of circle-of-confusion brightness; it can be a two-dimensional normal distribution model or another two-dimensional distribution model, and can be chosen by a person skilled in the art as needed;
the diffusion coefficient is a parameter of the two-dimensional diffusion model.
S4: according to the diffusion coefficient or its average value obtained in S3, simulating with the two-dimensional diffusion model to obtain a simulated circle of confusion, and calculating the corresponding simulated sampling clusters for different sampling positions, wherein a simulated sampling cluster consists of the relative brightness values of the simulated circle of confusion at each sampling point of a sub-pixel cluster, and a sampling position is the position of the sampling points of a sub-pixel cluster relative to the center of the simulated circle of confusion;
S5: matching the simulated sampling clusters calculated in S4 with the actual data of the sub-pixel clusters to obtain the sampling position with the highest degree of fitting for each sub-pixel cluster, and fitting, according to that sampling position, the brightness value at the center of the simulated circle of confusion corresponding to each sub-pixel cluster, which serves as the sub-pixel brightness value of the display screen to be tested.
Further, the two-dimensional diffusion model in S3 and S4 is a two-dimensional normal distribution model; after normalization, the simulated circle of confusion conforms to the two-dimensional normal distribution of formula (1):

f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right]\right\}  (1)

where f(x, y) is the normalized luminance value at coordinates (x, y), \sigma_1 and \sigma_2 are the transverse and longitudinal dispersion coefficients respectively, \rho is a correlation parameter, and \mu_1 and \mu_2 are center position parameters;

the unit of the coordinates (x, y) is the sub-pixel pitch of the display screen to be tested;

if the imaging system is isotropic, we can set \sigma_1 = \sigma_2 = \sigma and \rho = 0;

when the center of the simulated circle of confusion is taken as the origin, \mu_1 = \mu_2 = 0, and formula (1) reduces to formula (2):

f(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)  (2)

In practical computing applications, the normalization constant can be replaced by a pre-exponential factor A, further reducing the formula to formula (3):

f(x, y) = A \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)  (3)

where A is the pre-exponential factor: when A = 1/(2\pi\sigma^2), the function is normalized; when A takes other values, A represents the luminance value or relative luminance value at the center of the circle of confusion, which simplifies the calculation.
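As a minimal sketch (Python, with an illustrative function name not taken from the patent), the simplified isotropic model of formulas (2) and (3) can be evaluated as:

```python
import math

def circle_of_confusion(x, y, sigma, A=None):
    # Simplified isotropic circle-of-confusion brightness, formula (3):
    #   f(x, y) = A * exp(-(x^2 + y^2) / (2 * sigma^2))
    # With A = 1/(2*pi*sigma^2) this is the normalized formula (2);
    # with other A, A is the (relative) brightness at the center.
    if A is None:
        A = 1.0 / (2.0 * math.pi * sigma ** 2)  # normalized by default
    return A * math.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
```

With A = 1, the value at the origin is exactly the relative center brightness, which is the convention used in the tables of Example one.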
Further, in S4, the sampling position is obtained as follows:
according to the spatial sampling magnification K, a plurality of sampling phase values in the range 0 to 1/K are taken to form a set {Φ}; the horizontal sampling phase and the vertical sampling phase each take one value from the set {Φ} to form a phase combination (\Phi_x, \Phi_y) as a sampling position. The phase combination (\Phi_x, \Phi_y) denotes the coordinate position (x, y), in formula (2), of the sampling point in the sub-pixel cluster that is closest to the center of the simulated circle of confusion, at the lower right of that center.
Further, the sampling phase values are taken to form the set {Φ}, specifically:
taking 0, 1/(nK), 2/(nK), ..., (n-1)/(nK) to form the set {Φ}, where n is a positive integer.
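A short sketch of constructing the phase set {Φ} and the phase combinations (Python; the function names are illustrative, not from the patent):

```python
def sampling_phases(K, n):
    # {Phi} = {0, 1/(nK), 2/(nK), ..., (n-1)/(nK)}, all within [0, 1/K)
    return [i / (n * K) for i in range(n)]

def phase_combinations(K, n):
    # All (Phi_x, Phi_y) pairs used as candidate sampling positions
    phases = sampling_phases(K, n)
    return [(px, py) for px in phases for py in phases]
```

With n = 5 this yields the 25 phase combinations used in Example two.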
Further, in S5, matching the simulated sampling clusters obtained in S4 with the actual data of the sub-pixel clusters to obtain the sampling position with the highest degree of fitting for each sub-pixel cluster is specifically:
matching the simulated sampling clusters calculated in S4 with the actual data of a first sub-pixel cluster to obtain the sampling position with the highest degree of fitting of the first sub-pixel cluster; comparing the periodic spatial distribution rule of the sub-pixel clusters with the periodic spatial distribution rule of the sampling points to obtain a recurrence formula; and obtaining the sampling positions with the highest degree of fitting of the other sub-pixel clusters by recursion.
The periodic distribution rule of the spatial positions of the sub-pixel clusters can be obtained from the physical structure information of the display screen to be detected, and the periodic distribution rule of the spatial positions of the sampling points can be obtained from the physical structure information of the imaging luminance meter. The above two are compared to obtain a recurrence formula, which is a common technical means of those skilled in the art and will not be described in detail herein.
Further, the method specifically comprises the following steps:
matching the simulated sampling clusters calculated in S4 with the actual data of a first sub-pixel cluster to obtain the sampling position (\Phi_{x1}, \Phi_{y1}) with the highest degree of fitting of the first sub-pixel cluster;
the sampling position (\Phi_{xs}, \Phi_{yt}) with the highest degree of fitting of the other sub-pixel clusters satisfies formula (4) and formula (5):

\Phi_{xs} = \Phi_{x1} + (s-1) - p/K  (4)
\Phi_{yt} = \Phi_{y1} + (t-1) - q/K  (5)

where K is the spatial sampling magnification, s and t are integers, (s-1) and (t-1) represent the transverse and longitudinal relative positions between the other sub-pixel clusters and the first sub-pixel cluster, p is the largest integer satisfying \Phi_{x1} + (s-1) - p/K \geq 0, and q is the largest integer satisfying \Phi_{y1} + (t-1) - q/K \geq 0.
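The recursion of formulas (4) and (5) can be sketched as follows; this is a hypothetical reading in Python, where the largest admissible integer p is obtained with a floor operation:

```python
import math

def recurse_phase(phi_1, offset, K):
    # Phi_s = Phi_1 + (s - 1) - p / K, formulas (4)/(5) (hypothetical
    # reading): p is the largest integer keeping the phase >= 0, i.e. the
    # total displacement is reduced modulo the sample pitch 1/K.
    p = math.floor((phi_1 + offset) * K)
    return phi_1 + offset - p / K
```

Because the sub-pixel pitch is 1 (in sub-pixel units) and the sample pitch is 1/K, the recursion keeps every phase inside the interval [0, 1/K), which is why the best-fit phase repeats periodically across the screen.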
Further, step S2 is specifically:
finding the edges of the display screen in the image data shot in S1, and calculating the numbers of sampling points occupied by the length and the width of the display screen in the image;
dividing these numbers respectively by the corresponding numbers of sub-pixels given by the resolution of the display screen to be tested, and taking one of the two quotients or their average value as the spatial sampling magnification K.
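A sketch of step S2 under these assumptions (Python, illustrative name; here both directions are averaged):

```python
def spatial_sampling_magnification(img_w, img_h, cols, rows):
    # Number of sampling points spanned by the screen in the image,
    # divided by the screen's sub-pixel counts, averaged over the two
    # directions (step S2, illustrative sketch).
    return (img_w / cols + img_h / rows) / 2.0
```

For the figures in Example two (a 10000 x 5000 sampling-point image of a 2160 x 1080 screen) this gives K of roughly 4.63.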
Further, before S1, the method further includes:
s0: and calibrating the linearity of the imaging brightness meter, and after the imaging brightness meter shoots a picture of a certain brightness gray scale, calibrating the obtained brightness gray scale by using a calibration curve.
Further, before S0, the method further includes:
adjusting the exposure time of the imaging luminance meter by using a uniform integrating sphere, shooting images at different exposure times, and obtaining a calibration curve from the relationship between the luminance statistics of the shot images and the exposure time.
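One possible sketch of the exposure-time linearity fit (the patent does not specify the fitting method; this assumes the simplest proportional least-squares model, since the integrating sphere provides constant uniform luminance and an ideal sensor responds in proportion to exposure time):

```python
def fit_linear_slope(exposures, readings):
    # Least-squares slope a for reading ~ a * exposure_time. The deviation
    # of each measured reading from a * exposure then characterizes the
    # sensor's linearity error and feeds the calibration curve.
    # Hypothetical sketch; not the patent's exact procedure.
    num = sum(e * r for e, r in zip(exposures, readings))
    den = sum(e * e for e in exposures)
    return num / den
```

A perfectly linear sensor would have all readings on the fitted line; residuals at each reading level give the correction applied in S0.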
Further, in S1, adjusting the position of the imaging luminance meter specifically includes:
making the fractional part of the spatial sampling magnification K calculated in S2 smaller than 0.2 or larger than 0.8.
Further, in S1, the upper limit of the preset brightness range is not more than 90% of the upper limit of the luminance range of the imaging luminance meter.
The invention has the following beneficial effects:
moire generation is caused by different phases of sampling, and different phases correspond to different imaging modes, so when the same statistical calculation formula is used in drawing, the obtained result generates periodic deviation, which causes the generation of Moire, and the linearity of the imaging brightness also influences the final result.
The invention performs pattern matching on the different sampling positions and applies a different fit for each sampling position to restore the brightness at the sub-pixel centers of the display screen to be tested. This avoids the periodic deviation, solves the moire problem at its root, prevents moire from arising, and improves measurement precision.
On the other hand, fitting the two-dimensional diffusion model to every sub-pixel cluster would involve many fitting parameters and a large amount of computation. By selecting only one or several sub-pixel clusters for fitting, and only one sub-pixel cluster for matching, the brightness at the center of every sub-pixel of the display screen to be tested can still be restored, which greatly reduces the computational load.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The existing DeMura protocol includes the following steps:
1. shooting
The sub-pixel display of the target display is photographed using a high-resolution Mono (monochrome) camera; here a strictly optically calibrated Mono camera, called an imaging luminance meter, is used. Typically, several single-color gray-scale pictures are taken, such as red at gray levels 32, 64, 96, 160, 196, and 224. The same gray levels are also captured for green and blue, giving around 20 pictures in total.
Each sub-pixel of the display must be photographed clearly. Typically, 4 to 25 sampling points of the imaging luminance meter are used to capture one sub-pixel of the display.
2. Drawing
After shooting, the captured image is resolved at the sub-pixel level: the sub-pixel cluster formed on the imaging luminance meter by each display sub-pixel is located, and the true brightness value of that sub-pixel is obtained from the cluster by calculation. This yields the actually displayed brightness value of each sub-pixel at the different gray levels.
3. Modeling
The displayed brightness value of each sub-pixel at each measured gray level is modeled to obtain a model of the whole display without DeMura calibration. Comparison against a standard display model then gives the offsets that need calibration, yielding the DeMura calibration data.
4. Compression
The amount of calibration data is typically around 500 MB (for 1080 × 2160 resolution), but the storage capacity of the IC is limited, so the calibration data must be compressed to within 2 MB.
5. Display driver chip (driverIC) algorithmic processing
Inside the display driving chip, DeMura real-time processing is carried out on each sub-pixel through data compression and decompression.
From a microscopic perspective, moire is actually caused by periodic errors. Misalignment, errors in drawing, and quantization errors can all contribute to moire, and if left untreated they are reflected in the final DeMura effect. The simplest and most direct way to remove moire is to apply a filter tuned to the moire's spatial frequency; this common prior-art approach is simple and effective, but it sacrifices measurement precision.
Moire arises from differences in sampling phase: different phases correspond to different imaging patterns, so when the same statistical formula is used during drawing, the results exhibit a periodic deviation, which produces moire; the linearity of the imaging luminance meter also influences the final result.
Example one
This example specifically illustrates the process of moire generation, which is implemented as follows:
First, consider a sub-pixel (simplified to a point light source) on the display screen to be tested. It is imaged onto the sensor of the imaging luminance meter through the meter's optical system (lens group, etc.), and the focus is adjusted so that the image is as sharp as possible. No optical system is ideal, however, so in general the image is not a point like the light source but is dispersed into a bright spot, usually a circle of confusion that follows a two-dimensional normal distribution, as shown in fig. 1. The circle of confusion falls on the sampling points of the sensor of the imaging luminance meter, and its position can vary. As shown in fig. 2, imaging luminance meter sampling area 1 is the area actually sampled by one sampling point of the imaging luminance meter; the relative position between the sampling areas and the circle of confusion is the sampling position, or sampling phase.
As can be seen from figs. 2 to 4, at different sampling positions (sampling phases) the sampling points of the sensor cover different areas of the circle of confusion and therefore receive different brightness and luminous flux. For example, in fig. 2, if the relative brightness value at the center of the circle of confusion is 1, the relative brightness values of all 25 imaging luminance meter sampling areas 1 in the figure are as shown in table 1:
TABLE 1
In fig. 3, if the relative brightness value of the center of the circle of confusion is 1, the relative brightness values of all 25 imaging brightness meter sampling areas 1 in the graph are shown in table 2 (the sampling point corresponding to the data in the third row and the third column in table 2 is not at the exact center of the circle of confusion, so its relative brightness value is less than 1):
TABLE 2
In fig. 4, if the relative brightness value of the center of the circle of confusion is 1, the relative brightness values of all 25 imaging brightness meter sampling areas 1 in the graph are shown in table 3 (the sampling points corresponding to the data in the third row, the third column, the fourth row and the fourth column in table 3 are not at the center of the circle of confusion, so their relative brightness values are all less than 1):
TABLE 3
Summing the data in the three tables above gives 8.958069, 8.880799, and 8.673776 respectively. Evidently, different sampling positions (sampling phases) produce different patterns, and the results obtained by the usual statistical method differ. Since the sampling position (sampling phase) varies periodically across the image, this difference also varies periodically, i.e. moire is generated.
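The phase dependence of the summed brightness can be reproduced numerically. The sketch below (Python; illustrative parameters, with σ chosen small enough relative to the sampling pitch to make the effect clearly visible) sums a unit-pitch sampling grid over a Gaussian circle of confusion at different phases:

```python
import math

def sampled_sum(phase_x, phase_y, sigma, half=2):
    # Sum of relative brightness (center brightness = 1) over a
    # (2*half+1)^2 grid of unit-pitch sampling points, offset from the
    # circle-of-confusion center by (phase_x, phase_y).
    total = 0.0
    for i in range(-half, half + 1):
        for j in range(-half, half + 1):
            x, y = i + phase_x, j + phase_y
            total += math.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
    return total

# Different sampling phases give different sums, and the phase varies
# periodically across the image: that periodic deviation is the moire.
s_aligned = sampled_sum(0.0, 0.0, 0.5)
s_offset = sampled_sum(0.5, 0.5, 0.5)
```

Evaluating the two sums shows a clear gap between the aligned and half-pitch-offset phases, mirroring the 8.958 versus 8.674 difference reported in the tables above.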
Factors that influence the actual resulting pattern include the following parameters:
1. spatial sampling magnification
For example, when a 101M Mono camera is used to capture an OLED screen with a resolution of 2160 × 1080, the spatial sampling magnification in one dimension can be up to 5, that is, 25 sampling points sample one screen sub-pixel. This magnification is difficult to adjust to exactly the integer 5, however, so the spatial sampling magnification is usually not an integer. In addition, to avoid edge distortion and MTF roll-off, the image is usually not allowed to reach the edge of the sensor.
2. Linearity of sensor
Generally, high-quality imaging luminance meters are linearity-calibrated, but a considerable portion of the shooting equipment actually used on current production lines is not. These linearity errors must then be measured and calibrated out before testing.
3. Exposure time
The exposure time is usually determined from the picture being shot and the displayed frame rate; it is typically an integer multiple of one frame period, while also being long enough that low-gray-scale pictures are well sampled.
4. Shooting position
If the distribution of the sub-pixels of the screen is strictly parallel to the distribution of the camera sensor, then direct calculation is possible; otherwise, a rotation transformation is needed to be performed, and the spatial sampling multiplying power in the x direction and the y direction is adjusted.
Example two
The embodiment of the invention provides a method for extracting the brightness of OLED screen sub-pixels based on an imaging luminance meter, as shown in fig. 5, comprising the following steps:
S1: adjusting the focal length, position, and exposure time of the imaging luminance meter, and shooting a picture output by the display screen to be tested, so that the maximum brightness value obtained by the imaging luminance meter is within a preset brightness range, wherein the upper limit of the preset brightness range is not more than 90% of the upper limit of the luminance range of the imaging luminance meter.
In this embodiment, a 101M Mono camera is used to photograph a 2160 × 1080 resolution OLED panel. First, a pure-color picture of green gray level G224 is displayed on the display panel to be tested; the focus, position, and exposure time of the imaging luminance meter are adjusted and the picture is shot, giving an image 10000 pixels (sampling points) long and 5000 pixels (sampling points) wide; at an exposure time of 320 ms, the maximum brightness value is 153.
S2: obtaining a spatial sampling magnification K from the image shot in S1, specifically: finding the edges of the display screen in the image data shot in S1, and calculating the numbers of sampling points occupied by the length and the width of the display screen in the image; then dividing these numbers respectively by the corresponding numbers of sub-pixels given by the resolution of the display screen to be tested, and taking one of the two quotients or their average value as the spatial sampling magnification K.
In this embodiment, the spatial sampling magnification is calculated as K = 10000/2160 ≈ 4.63.
S3: according to the spatial sampling magnification K calculated in S2, dividing the image shot in S1 into a plurality of sub-pixel clusters, each sub-pixel cluster corresponding to one sub-pixel of the display screen to be tested, and performing surface fitting on one or several sub-pixel clusters with the two-dimensional diffusion model to obtain the diffusion coefficient or its average value.
The two-dimensional diffusion model is a two-dimensional normal distribution model; after normalization, the simulated circle of confusion conforms to the two-dimensional normal distribution of formula (1):

f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right]\right\}  (1)

where f(x, y) is the normalized luminance value at coordinates (x, y), \sigma_1 and \sigma_2 are the transverse and longitudinal dispersion coefficients respectively, \rho is a correlation parameter, and \mu_1 and \mu_2 are center position parameters;

the unit of the coordinates (x, y) is the sub-pixel pitch of the display screen to be tested;

if the imaging system is isotropic, we can set \sigma_1 = \sigma_2 = \sigma and \rho = 0;

when the center of the simulated circle of confusion is taken as the origin, \mu_1 = \mu_2 = 0, and formula (1) reduces to formula (2):

f(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)  (2)
in this embodiment, images of each gray scale are captured, the images are divided into sub-pixel clusters, and gaussian surface fitting is performed on one or more sub-pixel clusters.
The image data of one sub-pixel cluster of this embodiment is shown in table 4:
TABLE 4
Each value is the brightness relative to the sampling point of maximum brightness.
Fitting these data with the two-dimensional normal distribution model gives σ = 0.88; calculating several sub-pixel clusters in the same way and averaging yields a more accurate and concentrated result.
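A simplified sketch of estimating σ from one normalized sub-pixel cluster (Python; this log-based per-sample estimator is an illustrative stand-in for the full two-dimensional least-squares surface fit, and the function name is hypothetical):

```python
import math

def estimate_sigma(samples):
    # samples maps (x, y) offsets from the brightest sampling point to
    # relative brightness values (peak = 1). For f = exp(-r^2/(2*sigma^2)),
    # every sample with 0 < f < 1 gives sigma = sqrt(-r^2 / (2*ln f));
    # averaging the per-sample estimates stands in for a Gauss-Newton
    # surface fit over the whole cluster.
    estimates = []
    for (x, y), f in samples.items():
        if 0.0 < f < 1.0:
            r2 = x * x + y * y
            estimates.append(math.sqrt(-r2 / (2.0 * math.log(f))))
    return sum(estimates) / len(estimates)
```

On noise-free synthetic data this recovers the generating σ exactly; on measured data, averaging over several clusters (as the embodiment does) concentrates the estimate.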
S4: according to the diffusion coefficient or its average value obtained in S3, simulating with the two-dimensional diffusion model to obtain a simulated circle of confusion, and calculating the corresponding simulated sampling clusters for different sampling positions, wherein a simulated sampling cluster consists of the relative brightness values of the simulated circle of confusion at each sampling point of a sub-pixel cluster, and a sampling position is the position of the sampling points of a sub-pixel cluster relative to the center of the simulated circle of confusion.
The sampling position is obtained as follows:
according to the spatial sampling magnification K, a plurality of sampling phase values in the range 0 to 1/K are taken to form a set {Φ}; the horizontal sampling phase and the vertical sampling phase each take one value from the set {Φ} to form a phase combination (\Phi_x, \Phi_y) as a sampling position. The phase combination (\Phi_x, \Phi_y) denotes the coordinate position (x, y) of the sampling point in the sub-pixel cluster that is closest to the center of the simulated circle of confusion, at the lower right of that center.
The sampling phase values are taken to form the set {Φ}, specifically:
taking 0, 1/(nK), 2/(nK), ..., (n-1)/(nK) to form the set {Φ}, where n is a positive integer.
In this embodiment, σ = 0.88 is substituted into formula (2) to obtain the relative brightness distribution function of the simulated circle of confusion. Taking n = 5 and setting 5 different sampling phases in each of the horizontal and vertical directions yields 25 different combinations in total, from which 25 simulated sampling clusters are calculated.
S5: matching the simulated sampling clusters calculated in S4 with the actual data of the sub-pixel clusters to obtain the sampling position with the highest degree of fitting for each sub-pixel cluster, and fitting, according to that sampling position, the brightness value at the center of the simulated circle of confusion corresponding to each sub-pixel cluster, which serves as the sub-pixel brightness value of the display screen to be tested.
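Steps S4 and S5 can be sketched together as follows (Python; hypothetical function names, and the fitted scale A plays the role of the center brightness value described in S5):

```python
import math

def simulated_cluster(phi_x, phi_y, sigma, half=2):
    # Simulated sampling cluster (step S4): relative brightness of the
    # simulated circle of confusion (center brightness 1, i.e. A = 1 in
    # formula (3)) on a (2*half+1)^2 grid of unit-pitch sampling points;
    # (phi_x, phi_y) is the offset of the sampling point nearest the
    # center on its lower right. Illustrative sketch, not the patent code.
    values = []
    for i in range(-half, half + 1):
        for j in range(-half, half + 1):
            x, y = i + phi_x, j + phi_y
            values.append(math.exp(-(x * x + y * y) / (2.0 * sigma ** 2)))
    return values

def best_phase(actual, K, sigma, n=5, half=2):
    # Step S5 sketch: match a measured sub-pixel cluster (flat list) to
    # the simulated cluster of every candidate phase combination. For each
    # candidate, the least-squares optimal scale A is the fitted center
    # brightness; the phase with the smallest residual wins.
    best = None
    phases = [i / (n * K) for i in range(n)]
    for px in phases:
        for py in phases:
            sim = simulated_cluster(px, py, sigma, half)
            A = sum(a * s for a, s in zip(actual, sim)) / sum(s * s for s in sim)
            err = sum((a - A * s) ** 2 for a, s in zip(actual, sim))
            if best is None or err < best[0]:
                best = (err, (px, py), A)
    _, phase, center_brightness = best
    return phase, center_brightness
```

Applied to a cluster synthesized at a known phase, the matcher recovers that phase and the center brightness exactly; on measured data it returns the best-fitting candidates.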
The following table shows the relative brightness values of 4 × 6 blue sub-pixels of a screen at gray level 224, extracted by the above method (the resolution of this screen is 1125 × 2436; owing to the diamond arrangement, the blue sub-pixels have only half this resolution).
From the extracted relative brightness of all sub-pixels, a statistical histogram of the brightness values can be drawn, as shown by the pre-calibration curve in fig. 6.
The extracted sub-pixel brightness values are calibrated by the prior art (without the filtering used to eliminate moire), and the statistical histogram of the brightness values displayed by the display screen to be tested after calibration is shown as the post-calibration curve in fig. 6.
As can be seen from fig. 6, the second embodiment of the present invention fundamentally avoids the generation of moire.
Matching the simulated sampling clusters calculated in S4 with the actual data of the sub-pixel clusters to obtain the sampling position with the highest degree of fitting for each sub-pixel cluster is specifically:
matching the simulated sampling clusters calculated in S4 with the actual data of a first sub-pixel cluster to obtain the sampling position (\Phi_{x1}, \Phi_{y1}) with the highest degree of fitting of the first sub-pixel cluster;
the sampling position (\Phi_{xs}, \Phi_{yt}) with the highest degree of fitting of the other sub-pixel clusters satisfies formula (4) and formula (5):

\Phi_{xs} = \Phi_{x1} + (s-1) - p/K  (4)
\Phi_{yt} = \Phi_{y1} + (t-1) - q/K  (5)

where K is the spatial sampling magnification, s and t are integers, (s-1) and (t-1) represent the transverse and longitudinal relative positions between the other sub-pixel clusters and the first sub-pixel cluster, p is the largest integer satisfying \Phi_{x1} + (s-1) - p/K \geq 0, and q is the largest integer satisfying \Phi_{y1} + (t-1) - q/K \geq 0.
Replacing the number 1 and the subscript 1 in formula (4) and formula (5) with other integers yields recurrence relations for the sampling positions between the other sub-pixel clusters; in the actual calculation, these recurrence relations can be used for recursive computation.
In practical application scenarios there is a small deviation between the shooting angle of the imaging luminance meter and the placement angle of the display screen to be tested; formula (4), formula (5), and the recurrence relations derived from them then need to be corrected by a rotation transformation.
The sampling position (\Phi_{xs}, \Phi_{yt}) thus changes periodically with s and t; that is, different simulated sampling clusters are used periodically to fit the actual data of each sub-pixel cluster and extract the corresponding sub-pixel brightness value of the display screen to be tested.
In a preferred embodiment, before S1, the method further includes:
using a uniform integrating sphere to adjust the exposure time of the imaging luminance meter, respectively shooting images at different exposure times, and obtaining a calibration curve according to the relationship between the luminance statistic value of the shot images and the exposure time;
and calibrating the linearity of the imaging brightness meter, and after the imaging brightness meter shoots a picture of a certain brightness gray scale, calibrating the obtained brightness gray scale by using a calibration curve. The method for obtaining the calibration curve and the method for calibrating are both in the prior art.
In a preferred embodiment, in S1, the adjusting the position of the imaging luminance meter includes:
making the fractional part of the spatial sampling magnification K calculated in S2 smaller than 0.2 or larger than 0.8; preferably, the closer the spatial sampling magnification K is to an integer, the better.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.