CN110728362B - Module Gamma adjusting method based on LSTM neural network - Google Patents


Info

Publication number
CN110728362B
CN110728362B (application CN201911314100.0A)
Authority
CN
China
Prior art keywords
neural network
lstm neural
gamma adjustment
binding point
binding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911314100.0A
Other languages
Chinese (zh)
Other versions
CN110728362A (en)
Inventor
詹东旭
王安妮
张胜森
郑增强
Current Assignee
Wuhan Jingce Electronic Group Co Ltd
Wuhan Jingli Electronic Technology Co Ltd
Original Assignee
Wuhan Jingce Electronic Group Co Ltd
Wuhan Jingli Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Jingce Electronic Group Co Ltd, Wuhan Jingli Electronic Technology Co Ltd filed Critical Wuhan Jingce Electronic Group Co Ltd
Priority to CN201911314100.0A priority Critical patent/CN110728362B/en
Publication of CN110728362A publication Critical patent/CN110728362A/en
Application granted granted Critical
Publication of CN110728362B publication Critical patent/CN110728362B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06N3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G09G3/3208 — Control arrangements for matrix displays using controlled electroluminescent light sources, semiconductive, organic, e.g. organic light-emitting diodes [OLED]
    • G09G2320/0276 — Control of display operating conditions; improving display quality; adjustment of gradation levels for adaptation to the characteristics of a display device, i.e. gamma correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Picture Signal Circuits (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

The invention discloses a module Gamma adjustment method based on an LSTM neural network, comprising the following steps: obtaining a trained LSTM neural network, whose input binding point queue comprises a plurality of binding point initial vectors (each comprising an input binding point and its Gamma adjustment value) and whose output binding point queue comprises one or more output binding point prediction vectors (each comprising an output binding point and its predicted initial Gamma adjustment value); obtaining the current input binding point queue of the module to be modulated; using the trained LSTM neural network to obtain the current output binding point queue of the module to be modulated; and performing Gamma adjustment on each current output binding point starting from its predicted initial Gamma adjustment value. Because the predicted initial value is produced by the LSTM neural network, its precision is improved.

Description

Module Gamma adjusting method based on LSTM neural network
Technical Field
The invention belongs to the field of module adjustment, and particularly relates to a module Gamma adjustment method based on an LSTM neural network.
Background
An Organic Light-Emitting Diode (OLED) display is also called an organic electroluminescent display. As the OLED manufacturing process matures, mass-production volumes keep growing, and the total number of screen bodies in a single batch increases. Compared with the traditional thin-film-transistor liquid-crystal display (TFT-LCD), the OLED offers self-emission, a wide viewing angle, high contrast, low power consumption, fast response, full color, a simple manufacturing process and other advantages. The basic structure of an OLED is a sandwich: a thin, transparent, semiconducting indium tin oxide (ITO) layer connected to the power anode, organic hole and electron materials, and a metal cathode. When an appropriate voltage is applied, the hole material and the electron material release holes and electrons respectively; these recombine, and the resulting quantum transitions emit photon groups of particular wavelengths, producing light. By selecting hole and electron materials whose emission wavelengths correspond to the RGB primaries, the basic RGB primaries can be constructed. The color gamut of OLEDs is larger than that of LCDs.
After an OLED panel is manufactured, it generally passes through several inspection processes, which together form an inspection line. The first process is Gamma tuning, and its precision and speed directly affect the subsequent processes such as IR-drop, AOI and Demura, so a high-precision, fast-converging Gamma tuning algorithm is of great importance. Gamma tuning has two goals: first, to ensure that the brightness-versus-gray-level curve at the center of the OLED (abscissa gray level, ordinate luminance lv) follows an exponential curve with exponent 2.2; second, to ensure that the color coordinates x and y at the center of the OLED satisfy white balance, preventing color cast of the display. Gamma tuning typically covers multiple Bands, and during adjustment every binding point under every Band must meet the display requirements specified by the customer. To allow lv, x and y to be calibrated at each binding point, the IC manufacturer reserves a Gamma calibration module on the screen body's IC chip, so that screen bodies with manufacturing differences can be tuned. For each binding point needing correction, the manufacturer reserves three registers (r, g, b) on the IC; adjusting these three registers changes the brightness and relative proportion of the red, green and blue light, completing the correction of the screen body's lv, x and y.
The current initial-value prediction algorithm is based on log-log linear interpolation: it predicts a binding point from the two most recently adjusted binding points. This predicts the initial value very stably, and the average prediction precision is respectable, but its upper limit is not high. Although the 7-8 normal-gray-level binding points counted down from the highest gray level (gray level 255) in the same mode can be predicted stably, the prediction precision at the next, lower gray levels cannot be improved further, for the following reason:
For OLEDs, extensive engineering practice shows that as the gray level of the adjusted binding point decreases from high to low, the gray-level-versus-register curve changes, in logarithmic space, from a smooth, nearly straight line into a zigzag line. Fig. 1 and Fig. 2 are schematic diagrams of the register Gamma adjustment values of each binding point of the module, in normal coordinate space and in logarithmic space respectively. As Fig. 2 shows, the curve at high gray levels (upper right of the curve) is very smooth and locally close to a straight line, so a simple interpolation method works well there. As the gray level decreases, however, the curve begins to exhibit nonlinear characteristics, which lowers the prediction accuracy of simple interpolation at these low-gray-level binding points; a more precise way to predict initial values at low gray levels is therefore needed. Experiments show that when a production line handles a moderate number of screen bodies (fewer than 500), the traditional method is an effective initial-value predictor, but as the number of screen bodies grows further, its precision does not improve appreciably. In other words, the traditional initial-value algorithm does not exploit the historical register information of already-adjusted screens to improve precision further.
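The log-log linear-interpolation predictor described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the function name and the power-law test curve are assumptions. Given the two most recently adjusted binding points, the next register value is extrapolated linearly in log-log space:

```python
import math

def predict_register_loglog(g1, v1, g2, v2, g_next):
    """Extrapolate the register value at gray level g_next by linear
    interpolation in log-log space from two adjusted binding points
    (g1, v1) and (g2, v2)."""
    lg1, lg2, lgn = math.log(g1), math.log(g2), math.log(g_next)
    lv1, lv2 = math.log(v1), math.log(v2)
    slope = (lv2 - lv1) / (lg2 - lg1)           # local slope in log space
    return math.exp(lv1 + slope * (lgn - lg1))  # back to register units

# On a pure power-law curve v = c * g**k the prediction is exact, which
# is why the method works well at high gray levels, where the log-space
# curve is nearly straight; at low gray levels the curve bends and the
# extrapolation error grows.
```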
Disclosure of Invention
Aiming at the defects or the improvement requirements of the prior art, the invention provides a module Gamma adjustment method based on an LSTM neural network, which improves the precision of a Gamma adjustment prediction initial value by obtaining the LSTM neural network and obtaining the Gamma adjustment prediction initial value by utilizing the LSTM neural network.
To achieve the above object, according to an aspect of the present invention, there is provided a module Gamma adjustment method based on an LSTM neural network, including the steps of:
s1, training an LSTM neural network by using a sample module to obtain a trained LSTM neural network; the input of the LSTM neural network is an input binding point queue consisting of a plurality of binding point initial vectors, and the binding point initial vectors comprise input binding points and Gamma adjustment values thereof; the output binding point queue of the LSTM neural network comprises one or more output binding point prediction vectors, and the output binding point prediction vectors comprise output binding points and Gamma adjustment prediction initial values thereof;
s2, acquiring a current input binding point queue of the module to be modulated, acquiring a current output binding point queue of the module to be modulated by using a trained LSTM neural network, and performing Gamma adjustment on the current output binding point of the module to be modulated by using a Gamma adjustment prediction initial value of the current output binding point to obtain a Gamma adjustment value;
S3, updating the current input binding point queue with the current output binding point queue to generate the next input binding point queue, and repeating step S2 until Gamma adjustment of all binding points of the module to be modulated is completed.
As a further improvement of the invention, for each adjustment mode, the LSTM neural network is trained with the sample modules of that mode to obtain a mode-specific trained LSTM neural network, which is then used to predict initial values for the binding points of the module to be modulated in that mode.
As a further refinement of the present invention, the updating of the current input tie queue with the current output tie queue to generate the next input tie queue in step S3 is replaced with: the current input binding queue is updated with the output bindings and their Gamma adjustment values to generate a next input binding queue.
As a further improvement of the invention, a loss function of the current output binding point is generated by using the Gamma adjustment value and the Gamma adjustment prediction initial value of the current output binding point, and the weight coefficient of the LSTM neural network is optimized by using the loss function of the current output binding point, so as to obtain the trained LSTM neural network.
As a further improvement of the invention, a plurality of sample modules are used as a sample group to train the LSTM neural network, and the weighting coefficients of the LSTM neural network are adjusted by using the overall loss function of the sample group, wherein the overall loss function is the sum of the loss functions of all the sample modules of the sample group.
As a further improvement of the present invention, the loss function for the current output binding point is: the Euclidean distance between the Gamma adjustment value of the currently output binding point and its predicted initial Gamma adjustment value.
As a further improvement of the present invention, the loss function for the current output binding point is: the sum, over the registers, of the absolute differences between the Gamma adjustment value and the predicted initial Gamma adjustment value of each register.
As a further improvement of the invention, the binding points are adjusted in order of gray-scale value from high to low, wherein the first N binding points are adjusted using a conventional Gamma initial-value prediction method, and the remaining binding points are adjusted using the module Gamma adjustment method based on the LSTM neural network.
As a further improvement of the invention, the module to be modulated after Gamma adjustment is used as a new sample module, and the weight coefficient of the LSTM neural network is updated.
To achieve the above object, according to another aspect of the present invention, there is provided a terminal device comprising at least one processing unit, and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the above method.
Generally, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
the invention relates to a module Gamma adjusting method based on an LSTM neural network, which obtains a Gamma adjusting and predicting initial value by obtaining a corresponding LSTM neural network and utilizing a single LSTM network, fully utilizes the long-term and short-term selection memory characteristics of the LSTM, inputs recently adjusted binding point historical information into the network, and outputs an accurate predicted value of the current binding point, thereby improving the precision of the Gamma adjusting and predicting initial value, and overcoming the defects that the current initial value predicting scheme is stable, but the upper limit is not high, and the historical information is not fully utilized.
The invention also relates to a module Gamma adjustment method based on LSTM neural networks in one-to-one correspondence with the adjustment modes, which obtains the predicted initial Gamma adjustment value from the plurality of mode-specific LSTM neural networks. It likewise makes full use of the LSTM's long- and short-term selective memory, feeding recently adjusted binding point history into the network and outputting an accurate prediction for the current binding point, thereby improving the precision of the predicted initial value and overcoming the same shortcomings of the current scheme: stable, but with a low upper limit and insufficient use of historical information.
By fully exploiting historical screen-adjustment information through the LSTM, the disclosed method supports online self-learning of the LSTM network. For a large batch (experiments indicate a threshold of roughly 1000 screens), the training data are sufficient to supervise an LSTM network model with high-precision prediction capability, giving more efficient information extraction than the current initial-value prediction algorithm based on log-log linear interpolation. Meanwhile, the adjustment data of each module to be modulated, once Gamma-adjusted, are used as a new sample module and the weight coefficients of the LSTM neural network are updated, further improving the precision of the predicted initial Gamma adjustment values.
According to the module Gamma adjustment method based on the LSTM neural network, the first N high-gray-level binding points are preferably adjusted with a conventional Gamma initial-value prediction method, and the remaining binding points with the LSTM-based method. This improves the prediction accuracy at the low-gray-level binding points while fully exploiting the speed advantage of the conventional Gamma initial-value prediction method; combining the two achieves fast and accurate initial-value prediction.
Drawings
FIG. 1 is a schematic diagram of the register Gamma adjustment value of each binding point of the module, as a function of gray scale, in normal coordinate space;
FIG. 2 is a schematic diagram of the same register Gamma adjustment values in logarithmic space;
FIG. 3 is a schematic diagram of a module Gamma adjustment method based on an LSTM neural network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other. The present invention will be described in further detail with reference to specific embodiments.
LSTM (Long Short-Term Memory network) is an improvement over the recurrent neural network (RNN). Traditional RNNs suffer mainly from vanishing gradients, which cut off the relevance between distant elements of a time-sequence queue. LSTM is an RNN variant proposed to solve this loss of long-range relevance; it is now widely used in natural language processing and speech processing, where it achieves better results than conventional algorithms.
Fig. 3 is a schematic diagram of a module Gamma adjustment method based on an LSTM neural network according to an embodiment of the present invention. As shown in Fig. 3, a module Gamma adjustment method based on an LSTM neural network includes the following steps:
s1, training an LSTM neural network by using a sample module to obtain a trained LSTM neural network; an input binding point queue of the LSTM neural network comprises a plurality of binding point initial vectors, and the binding point initial vectors comprise input binding points and Gamma adjustment values thereof; the output binding point queue of the LSTM neural network comprises one or more output binding point prediction vectors, and the output binding point prediction vectors comprise output binding points and Gamma adjustment prediction initial values thereof; wherein, the Gamma adjustment value of the input binding point is the final value after the Gamma adjustment of the input binding point is carried out by utilizing a common Gamma adjustment mode;
s2, acquiring a current input binding point queue of the module to be modulated, acquiring a current output binding point queue of the module to be modulated by using a trained LSTM neural network, and performing Gamma adjustment on the current output binding point of the module to be modulated by using a Gamma adjustment prediction initial value of the current output binding point to obtain a Gamma adjustment value; as an example, the initial current input tie point queue of the module to be modulated can be obtained by using a conventional Gamma adjustment method;
S3, updating the current input binding point queue with the current output binding point queue to generate the next input binding point queue, and repeating step S2 until Gamma adjustment of all binding points of the module to be modulated is completed. Updating the current input binding point queue with the current output binding point queue works as follows: the binding points in the adjustment mode are arranged in a preset order; if the current output binding point queue contains m binding points, the first m binding point entries are removed from the current input binding point queue, and the entries of the current output binding point queue are appended, in the preset order, to obtain the next input binding point queue.
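The queue update in step S3 can be sketched minimally as follows. The data layout is an illustrative assumption (each binding point vector is shown as a hypothetical (gray_level, r, g, b) tuple), not the patent's actual representation:

```python
def update_queue(input_queue, output_queue):
    """Slide the input binding point queue forward: drop the first m
    entries and append the m newly adjusted binding point vectors,
    keeping the preset binding point order and a fixed queue length."""
    m = len(output_queue)
    return input_queue[m:] + list(output_queue)

# Illustrative binding point vectors: (gray_level, r, g, b).
queue = [(255, 120, 118, 121), (224, 96, 95, 97), (192, 80, 79, 81)]
new_points = [(160, 66, 65, 67)]   # current output binding point queue
queue = update_queue(queue, new_points)
```

The queue length stays constant, so the LSTM always sees the same sequence length P while the window slides toward lower gray levels.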
The LSTM neural network is trained with conventional neural-network training methods; usable optimization algorithms include Stochastic Gradient Descent (SGD), mini-batch SGD, RMSprop, momentum and Adam. The training data of the LSTM network are simply the adjusted register values from the historical-information container, sorted into a format matching the network's input and output; the optimization algorithm trains the network so that its loss value gradually decreases. Among these algorithms, plain SGD fluctuates strongly and struggles to reach a good global optimum, whereas Adam combines the advantages of RMSprop and momentum: it adapts the learning rate dynamically during training and escapes local optima well, achieving a global optimization effect and increasing the accuracy of the model.
The method above is the single-LSTM-model variant: all adjustment modes share one trained LSTM neural network. However, the curve relations of the same OLED screen body differ across its 4 Bands (representing 4 adjustment modes). If only one LSTM model is used for prediction, all modes share its parameters during training, so the finally trained model predicts the average trend of the curves and cannot reflect the characteristics of each mode. As a preferred scheme, therefore, a multi-LSTM-model prediction mode may be adopted: in each adjustment mode, the LSTM neural network is trained with sample modules of that mode to obtain a mode-specific trained network, which then performs initial-value prediction for the binding points of the module to be modulated in that mode. "Adjustment mode" here refers to a luminance mode of the module that needs adjustment; a module has several luminance modes, and those requiring Gamma adjustment include the NORMAL mode (normal display), the AOD mode (always-on display) and the HBM mode (high-brightness mode), among others.
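Structurally, the multi-LSTM scheme amounts to keeping one independently trained model per adjustment mode and dispatching on the screen's current mode. A minimal sketch of that dispatch (the ModePredictor class is a hypothetical stand-in for a trained per-mode LSTM; the mode names are taken from the text):

```python
class ModePredictor:
    """Stand-in for one LSTM model trained only on screens adjusted in
    a single luminance mode, so mode-specific curve shapes are kept."""
    def __init__(self, mode):
        self.mode = mode

def build_mode_registry(modes=("NORMAL", "AOD", "HBM")):
    # One independently trained predictor per adjustment mode (Band).
    return {mode: ModePredictor(mode) for mode in modes}

registry = build_mode_registry()
# At adjustment time, the model is looked up by the screen's current mode:
predictor = registry["HBM"]
```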
Specifically, in a given adjustment mode, a certain number of binding points are selected as input binding points and the remaining binding points are treated as binding points to be predicted. For a sample set, the input binding point queue serves as the network input. As an example, let the number of binding points in the adjustment mode be N; the input is then a queue formed by P consecutive binding points of an already-adjusted screen body, taken from the historical screen-adjustment information under that adjustment mode, that is:
first binding queue = [ binding 1 vector, … …, binding P vector ]
The number of entries in the output binding point queue of the LSTM neural network can be chosen as needed, and may be one or more.
As an alternative, the updating in step S3 of the current input binding point queue with the current output binding point queue is replaced with: updating the current input binding point queue with the output binding points and their Gamma adjustment values (the final values after adjustment) to generate the next input binding point queue. Specifically, the binding points in the adjustment mode are arranged in a preset order; if there are m output binding points, the first m binding point entries are removed from the current input binding point queue, and the output binding points with their Gamma adjustment values are appended in the preset order, yielding the next input binding point queue.
As a preferred embodiment, the loss function of the current output binding point can be generated by using the Gamma adjustment value and the Gamma adjustment prediction initial value of the current output binding point, and the weight coefficient of the LSTM neural network is optimized by using the loss function of the current output binding point to obtain the trained LSTM neural network.
For Gamma initial-value prediction, the P most recently adjusted binding points form a time-sequence queue of length P. These P binding points are input to the LSTM network, and the output is the r, g, b register values of the binding point whose initial value is currently to be predicted.
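To make the data flow concrete, the following self-contained sketch runs a single LSTM cell over a queue of P binding point vectors and maps the final hidden state through a linear head to the three predicted register values. All dimensions and the random weights are illustrative assumptions; the patent does not specify the network architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(seq, W, U, b, W_out, b_out):
    """Run one LSTM cell over seq of shape (P, D) and map the final
    hidden state to 3 outputs: the predicted r, g, b register values."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = W @ x + U @ h + b              # all four gate pre-activations
        i, f, o, g = np.split(z, 4)        # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
        h = sigmoid(o) * np.tanh(c)                    # hidden state
    return W_out @ h + b_out               # linear head -> (r, g, b)

# Illustrative sizes: P = 5 past binding points, D = 4 features per point
# (gray level plus r, g, b register values), hidden size H = 8.
rng = np.random.default_rng(0)
P, D, H = 5, 4, 8
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.normal(size=(3, H)) * 0.1
b_out = np.zeros(3)
pred = lstm_forward(rng.normal(size=(P, D)), W, U, b, W_out, b_out)
```

In practice the weights would be learned with one of the optimizers listed above rather than drawn at random.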
As a preferred scheme, the binding points are adjusted in order of gray level from high to low. The first N binding points are adjusted with a conventional Gamma initial-value prediction method, and the remaining binding points with the LSTM-based module Gamma adjustment method, which improves the accuracy of the predicted values at the low-gray-level binding points. The conventional Gamma initial-value prediction method may be the initial-value predictor based on log-log linear interpolation, or any other existing initial-value prediction method.
As an example, the loss function for the current output binding point is the Euclidean distance between the Gamma adjustment value of the currently output binding point and its predicted initial Gamma adjustment value, i.e.

loss_i = √[(r_i-predict − r_i-real)² + (g_i-predict − g_i-real)² + (b_i-predict − b_i-real)²]

where r_i-predict, g_i-predict and b_i-predict are the predicted initial Gamma adjustment values of the R, G, B registers of the i-th binding point, and r_i-real, g_i-real and b_i-real are the Gamma adjustment values of the R, G, B registers of the i-th binding point.
As an example, the loss function for the current output binding point is the sum, over the registers, of the absolute differences between the Gamma adjustment value and the predicted initial value, i.e.

loss_i = |r_i-predict − r_i-real| + |g_i-predict − g_i-real| + |b_i-predict − b_i-real|

where r_i-predict, g_i-predict and b_i-predict are the predicted initial Gamma adjustment values of the R, G, B registers of the i-th binding point, and r_i-real, g_i-real and b_i-real are the Gamma adjustment values of the R, G, B registers of the i-th binding point.
The above representation of the loss function is only an example, and the loss function can be adjusted accordingly according to the requirement of the optimization algorithm.
As an example, a plurality of sample modules are used as a sample group to train the LSTM neural network, and the weight coefficients of the LSTM neural network are adjusted using the overall loss function of the sample group, where the overall loss function is the sum of the loss functions of all sample modules in the group. The overall loss function reflects the distance between the Gamma adjustment values and the predicted initial Gamma adjustment values over all sample modules of the group, specifically:

Loss = \sum_{j} loss_j

where loss_j is the loss function of the j-th sample module.
As a preferred embodiment, a module that has completed Gamma adjustment is used as a new sample module, and the weight coefficients of the LSTM neural network are updated before predicting the initial Gamma adjustment values of the next module to be modulated. As an example, the sample modules may be fully updated with a whole batch of modules adjusted by the above method, or partially updated with one or more such modules.
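The online self-learning loop described above might be sketched as follows; `on_module_adjusted`, `train_fn`, and the pool handling are illustrative assumptions, not the patent's actual implementation:

```python
sample_pool = []   # accumulated sample modules

def on_module_adjusted(weights, new_sample, train_fn, batch_size=1):
    """After a module is fully Gamma-adjusted, add it to the sample
    pool; retrain (full or partial update) once `batch_size` new
    samples have accumulated. `train_fn` stands in for the actual
    LSTM weight update."""
    sample_pool.append(new_sample)
    if len(sample_pool) % batch_size == 0:
        weights = train_fn(weights, list(sample_pool))
    return weights

# Toy train_fn: record how many updates ran and how many samples it saw.
train = lambda w, pool: {"updates": w["updates"] + 1, "seen": len(pool)}
w = {"updates": 0, "seen": 0}
for s in ("m1", "m2", "m3", "m4"):
    w = on_module_adjusted(w, s, train, batch_size=2)
print(w)  # {'updates': 2, 'seen': 4}
```

With `batch_size=1` every adjusted module triggers a weight update, matching the one-module-at-a-time variant in the text.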
A terminal device comprising at least one processing unit and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to carry out the steps of the above-mentioned method.
For each mode of a screen body, the curve relations of the same OLED screen body under 4 different bands (representing 4 modes) are different. If only one LSTM model is used for prediction, all modes share the parameters of that model during training, so the finally trained model in fact predicts the average trend of the sets of curves and cannot reflect the characteristic differences of each mode. For Gamma adjustment, multiple modes generally need to be adjusted, but the screen characteristics of each mode can differ; in order to predict the register values of each mode accurately, the idea of training one LSTM model per band is proposed. The experimental results and the actual screen-tuning results both show that the average prediction accuracy of the multi-mode (multi-LSTM) scheme is higher than that of the single-mode (single-LSTM) scheme.
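The one-model-per-band idea can be sketched as a small model bank; the class and the callables below are hypothetical stand-ins for trained LSTMs:

```python
class MultiModeLSTM:
    """One independent model per tuning mode (band), so each mode's
    screen characteristics get their own weights instead of being
    averaged into a single shared model (illustrative sketch)."""

    def __init__(self, modes, make_model):
        # make_model(mode) builds (or loads) the model for one band.
        self.models = {m: make_model(m) for m in modes}

    def predict(self, mode, queue):
        # Route the module's binding-point queue to its band's model.
        return self.models[mode](queue)

# Toy per-mode "models" standing in for trained LSTMs.
make = lambda mode: (lambda queue: f"{mode}:{len(queue)}")
bank = MultiModeLSTM(["band0", "band1", "band2", "band3"], make)
print(bank.predict("band2", [1, 2, 3]))  # band2:3
```

Training then proceeds independently per band: each model only ever sees sample modules tuned in its own mode.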
Error curves were calculated for the traditional initial value prediction method, the single-LSTM model, and the multi-LSTM model of the embodiment of the invention, each over no more than 600 screen bodies. For the traditional initial value prediction method (based on log-log linear interpolation), the average RMSE of the error curve is 9.98 and the standard deviation of the RMSE is 1.39; for the single-LSTM model, the average RMSE is 8.91 and the standard deviation is 1.88; for the multi-LSTM model, the average RMSE is 7.78 and the standard deviation is 2.13. It can be seen that the traditional initial value prediction method is more stable than LSTM prediction: stability is reflected by the standard deviation of the RMSE, and the higher the stability, the smaller the standard deviation. The fluctuation of the curves shows the same thing: the curve of the traditional algorithm is flat and its stability is strong, while the LSTM algorithm fluctuates more when the data volume is small but has the characteristic of gradually increasing prediction accuracy. The multi-LSTM model has higher prediction accuracy than the single LSTM network (prediction accuracy is reflected by the average RMSE, and at 7.78 the multi-LSTM model is the best of the three). Meanwhile, the curve trends show that the single-LSTM and multi-LSTM curves decline gradually and the RMSE becomes smaller and smaller, which means that the prediction accuracy of the network is continuously improved through online self-learning.
Error curves were also calculated for the traditional initial value prediction method and the Gamma adjustment method of the embodiment of the invention, with each method performing Gamma adjustment on 1000 screen bodies of a production line. The results are as follows: for the traditional initial value prediction method (based on log-log linear interpolation), the average RMSE is 18.60 and the standard deviation of the RMSE is 3.96; for the Gamma adjustment method of the embodiment of the invention, the average RMSE is 9.75 and the standard deviation of the RMSE is 3.37. Meanwhile, as can be seen from the error curve when counted in intervals of 200 screen bodies, the average RMSE is higher during the initial training, but after the first 200 screen bodies the RMSE decreases steadily.
Table 1 compares the predicted values with the final values when the Gamma adjustment method of the embodiment of the invention is used to predict the last 4 low-gray-scale binding points (with 1000 screen bodies adjusted at this point). The predictions are very close to the final true values, which fully shows that the LSTM prediction accuracy improves once the data volume reaches a certain scale.
TABLE 1. Prediction results of the Gamma adjustment method of the embodiment of the invention for the low-gray-scale binding points

True value        248         203         179         169
Predicted value   246.79733   200.84691   178.85611   168.42386
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A module Gamma adjusting method based on an LSTM neural network is characterized by comprising the following steps:
s1, training an LSTM neural network by using a sample module to obtain a trained LSTM neural network; the input of the LSTM neural network is an input binding point queue consisting of a plurality of binding point initial vectors, wherein the binding point initial vectors comprise input binding points and Gamma adjustment values thereof; an output binding point queue of the LSTM neural network comprises one or more output binding point prediction vectors, and the output binding point prediction vectors comprise output binding points and Gamma adjustment prediction initial values thereof;
s2, acquiring a current input binding point queue of the module to be modulated, acquiring a current output binding point queue of the module to be modulated by using a trained LSTM neural network, and performing Gamma adjustment on the current output binding point of the module to be modulated by using a Gamma adjustment prediction initial value of the current output binding point to obtain a Gamma adjustment value;
and S3, updating the current input binding point queue by using the current output binding point queue to generate a next input binding point queue, and repeating the step S2 until Gamma adjustment of all binding points of the module to be modulated is completed.
2. The method as claimed in claim 1, wherein, in any tuning mode, the LSTM neural network is trained using the sample modules to obtain the LSTM neural network trained in that tuning mode, and the LSTM neural network trained in that tuning mode is used to perform initial value prediction on the binding points of the module to be modulated.
3. The module Gamma adjustment method based on the LSTM neural network of claim 1 or 2, wherein, in step S3, updating the current input binding point queue with the current output binding point queue to generate the next input binding point queue is replaced with: updating the current input binding point queue with the output binding points and their Gamma adjustment values to generate the next input binding point queue.
4. The module Gamma adjustment method based on the LSTM neural network as claimed in claim 1 or 2, wherein the loss function of the currently output binding point is generated using the Gamma adjustment value and the predicted initial Gamma adjustment value of the currently output binding point, and the weight coefficients of the LSTM neural network are optimized using the loss function of the currently output binding point to obtain the trained LSTM neural network.
5. The method of claim 3, wherein a plurality of sample modules are used as a sample group to train the LSTM neural network, and the weight coefficients of the LSTM neural network are adjusted using the overall loss function of the sample group, wherein the overall loss function is the sum of the loss functions of all sample modules in the sample group.
6. The method of claim 4, wherein the loss function of the currently output binding point is: the Euclidean distance between the Gamma adjustment value of the currently output binding point and its predicted initial Gamma adjustment value.
7. The method of claim 4, wherein the loss function of the currently output binding point is: the sum of the absolute values of the differences between the Gamma adjustment value and the predicted initial Gamma adjustment value of each register.
8. The module Gamma adjustment method based on the LSTM neural network as claimed in claim 1 or 2, wherein the binding points are modulated in order of gray-scale value from high to low, wherein the first N binding points are adjusted using a conventional Gamma initial value prediction method, and the remaining binding points are adjusted using the module Gamma adjustment method based on the LSTM neural network.
9. The module Gamma adjustment method based on the LSTM neural network as claimed in claim 1 or 2, wherein a module that has completed Gamma adjustment is used as a new sample module to update the weight coefficients of the LSTM neural network.
10. A terminal device, comprising at least one processing unit and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to carry out the steps of the method according to any one of claims 1 to 9.
CN201911314100.0A 2019-12-19 2019-12-19 Module Gamma adjusting method based on LSTM neural network Active CN110728362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911314100.0A CN110728362B (en) 2019-12-19 2019-12-19 Module Gamma adjusting method based on LSTM neural network


Publications (2)

Publication Number Publication Date
CN110728362A CN110728362A (en) 2020-01-24
CN110728362B CN110728362B (en) 2020-05-22

Family

ID=69226454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911314100.0A Active CN110728362B (en) 2019-12-19 2019-12-19 Module Gamma adjusting method based on LSTM neural network

Country Status (1)

Country Link
CN (1) CN110728362B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487577B (en) * 2021-07-15 2023-12-26 哈尔滨工业大学(深圳) Quick Gamma adjustment method, system and application based on GRU-CNN combined model
CN116994515B (en) * 2023-09-26 2023-12-12 昇显微电子(苏州)股份有限公司 Quick gamma correction method based on gradient descent

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201322241A (en) * 2011-11-25 2013-06-01 Jae-Yeol Park Calibration system of display device using transfer functions and calibration method thereof
CN107578755A (en) * 2017-09-30 2018-01-12 晶晨半导体(上海)股份有限公司 A kind of bearing calibration of screen intensity and colour temperature
CN109191386A (en) * 2018-07-18 2019-01-11 武汉精测电子集团股份有限公司 A kind of quick Gamma bearing calibration and device based on BPNN
CN110310596A (en) * 2019-06-17 2019-10-08 武汉精立电子技术有限公司 A kind of the GAMMA adjusting initial value prediction technique and system of OLED mould group
CN110459170A (en) * 2019-10-11 2019-11-15 武汉精立电子技术有限公司 A kind of mould group Gamma bearing calibration, terminal device and computer-readable medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101861795B1 (en) * 2011-03-24 2018-05-29 삼성디스플레이 주식회사 Luminance Correction System for Organic Light Emitting Display Device
KR102370280B1 (en) * 2014-10-24 2022-03-07 삼성디스플레이 주식회사 Adaptive black clipping circuit, display device including the same and adaptive black clipping method


Also Published As

Publication number Publication date
CN110728362A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN111223437B (en) Gamma register calibration method, gamma register calibration device and display device
CN110675818B (en) Curve matching-based module Gamma correction method and system
CN110728362B (en) Module Gamma adjusting method based on LSTM neural network
CN110459170B (en) Module Gamma correction method, terminal equipment and computer readable medium
CN109767722B (en) OLED module Gamma adjusting method and device
CN108305585B (en) Display driver with gamma correction
CN105206239A (en) Mura phenomenon compensation method
CN110910847B (en) Gamma correction method and device for display module
CN109637441B (en) Module Gamma correction method based on Kalman filtering
CN110767170B (en) Picture display method and picture display device
US20150356929A1 (en) Display device for correcting display non-uniformity
CN104835438A (en) Display device, display panel driver, image processing apparatus and image processing method
CN205282055U (en) Display panel and colour control equipment thereof
CN107633808A (en) The brightness adjusting method and brightness regulating apparatus of display panel
CN109584818B (en) Gamma voltage division circuit, voltage regulation method and liquid crystal display device
CN105632407A (en) Display regulation method of AMPLED display screen and mobile terminal
CN110534058B (en) Method and system for rapidly converging Gamma tuning
TWI796865B (en) Gamma debugging method and gamma debugging device for display panel
CN105788518B (en) The uneven method and device compensated of display, display to display
TWI745062B (en) Timing controller applicable to performing dynamic peak brightness control in display module
CN116052591A (en) Compensation method and compensation device for display panel and computer readable storage medium
CN111524484B (en) Rapid Gamma adjustment method and one-time burning OTP system
CN112908257A (en) Compensation method, device and system for display panel
Zhan et al. P‐2.1: A LSTM‐based Deep Learning Model for the Prediction of Initial Register Values in IC Modules in the Process of Gamma Tuning for OLED Panels
US11804189B2 (en) Display device, method for generating offset current values and current offsetting system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant