CN117192821A - Backlight module detection method, electronic equipment and storage medium - Google Patents
- Publication number
- CN117192821A (application number CN202311169272.XA)
- Authority
- CN
- China
- Prior art keywords
- lamp panel
- optical data
- backlight module
- data
- optical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The present application provides a backlight module detection method, an electronic device, and a storage medium, relating to the technical field of display. The method comprises: controlling a lamp panel to be detected to be in a lighted state and acquiring optical data of the lamp panel; inputting the optical data of the lamp panel into a prediction model, which processes it to obtain post-film-coating optical data; performing quality detection on the optical data of the lamp panel to obtain a lamp panel detection result; performing quality detection on the post-film-coating optical data to obtain a post-film-coating detection result; and obtaining an overall detection result of the backlight module (BLU) from the lamp panel detection result and the post-film-coating detection result. This scheme improves the speed and accuracy of BLU detection and reduces detection cost.
Description
Technical Field
The present application relates to the field of display technologies, and in particular, to a backlight module detection method, an electronic device, and a storage medium.
Background
A backlight module (backlight unit, BLU) is one of the key components of a liquid crystal display (LCD) panel. It generally consists of a lamp panel and a plurality of optical films, and provides a light source with sufficient brightness and uniform distribution so that the display panel can show images normally. The most important indicator for the BLU as a whole is its uniformity when lit.
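As a minimal sketch of what "uniformity when lit" can mean in practice, one common metric is the ratio of the minimum to the maximum luminance over sampled measurement points. The 9-point layout, the readings, and the 80% pass threshold below are illustrative assumptions, not values specified by this application.

```python
def luminance_uniformity(samples):
    """Min/max luminance ratio over measurement points, in (0, 1]."""
    return min(samples) / max(samples)

def is_uniform(samples, threshold=0.8):
    # Threshold is a hypothetical acceptance level for illustration.
    return luminance_uniformity(samples) >= threshold

readings = [812, 798, 805, 790, 820, 801, 795, 808, 799]  # cd/m^2, hypothetical
print(round(luminance_uniformity(readings), 3))  # 0.963
```

A higher ratio means the panel emits more evenly; a dead or dim region pulls the minimum down and fails the check.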
Currently, there are two general ways to check BLU uniformity or yield: human-eye observation or device-based detection. Human-eye observation is strongly affected by subjective judgment, gives inaccurate results, and is inefficient. Device-based detection, however, generally requires a lighting test after each layer of film is applied to the lamp panel, to determine whether lighting defects exist or whether uniformity meets requirements.
Therefore, there is a need for a BLU detection method that is efficient, accurate, and free of manual participation.
Disclosure of Invention
The present application provides a backlight module detection method, an electronic device, and a storage medium, which address the complicated procedures and low efficiency of prior-art approaches to inspecting BLU uniformity or yield.
In order to achieve the above purpose, the application adopts the following technical scheme:
In a first aspect, the present application provides a backlight module detection method, where the backlight module includes a lamp panel and a first optical film layer covering the light emitting surface of the lamp panel. The method includes: performing image acquisition on the lamp panel in a lighted state to obtain optical data of the lamp panel; inputting the optical data of the lamp panel into a first prediction model corresponding to the first optical film layer to obtain optical data of the backlight module in a lighted state; and performing quality detection on the backlight module according to the optical data of the backlight module to obtain a quality detection result of the backlight module.
According to the backlight module detection method provided by the present application, with only the optical data of the lamp panel collected, the post-film-coating optical data is obtained directly through the prediction model, and quality detection is performed separately on the optical data of the lamp panel and the post-film-coating optical data to obtain the overall detection result of the backlight module (BLU). This improves the speed and accuracy of BLU detection and reduces detection cost.
In some possible implementations, a first sample data set is obtained before the lamp panel optical data is input into the first prediction model corresponding to the first optical film layer. The first sample data set comprises N first sample data subsets, each comprising a plurality of sets of first sample data; each set of first sample data comprises optical data of a lamp panel sample in a lighted state and optical data of the same sample in a lighted state after its light emitting surface is covered with the first optical film layer. Lamp panel samples corresponding to different sample data subsets are of different types, while those within the same subset are of the same type. Model training is performed on the first sample data set to obtain a first generation network model; the first generation network model is then iteratively trained with a discrimination network model, and the second generation network model obtained by training is used as the first prediction model. By establishing the first prediction model, the post-film-coating optical data can be generated directly from the lamp panel's optical data, without separately capturing images of the film-coated lamp panel.
In some possible implementations, iteratively training the first generation network model with the discrimination network model includes: training the first generation network model on the sets of first sample data in each first sample data subset so that it generates sets of backlight module optical data, each set comprising optical data of a lamp panel sample in a lighted state and post-film-coating optical data of the lamp panel; and training the discrimination network model on the sets of first sample data and the sets of backlight module optical data generated by the first generation network model, so that the generated data approaches the first sample data. Iteratively training the first generation network model with the discrimination network model makes it generate data that closely approaches the acquired backlight module sample data.
In some possible implementations, acquiring the first sample data set includes: and respectively acquiring multiple images of the lamp panel samples of each of the N types to obtain multiple groups of first sample data.
In some possible implementations, performing quality detection on the backlight module according to its optical data to obtain the quality detection result includes: performing abnormal-lamp-point detection and inter-bead uniformity detection on the optical data of the lamp panel to obtain a lamp panel detection result; performing uniformity detection and defect detection on the post-film-coating optical data to obtain a post-film-coating detection result; and obtaining the quality detection result of the backlight module from the lamp panel detection result and the post-film-coating detection result. Quality detection is thus performed separately on the lamp panel's optical data and on the post-film-coating optical data.
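The lamp panel stage of the check above can be sketched as follows: flag abnormal lamp points (dead or dim beads), then check brightness uniformity among the remaining beads. All thresholds and brightness values are assumptions for illustration, not parameters from this application.

```python
def detect_lamp_panel(bead_brightness, dead_threshold=10, uniformity_threshold=0.85):
    """Hypothetical lamp panel check on per-bead brightness readings."""
    # Abnormal lamp points: beads whose brightness falls below a dead/dim cutoff.
    abnormal = [i for i, b in enumerate(bead_brightness) if b < dead_threshold]
    # Uniformity among the remaining (lit) beads: min/max brightness ratio.
    lit = [b for b in bead_brightness if b >= dead_threshold]
    uniform = bool(lit) and min(lit) / max(lit) >= uniformity_threshold
    return {"abnormal_beads": abnormal, "uniform": uniform,
            "panel_pass": not abnormal and uniform}

result = detect_lamp_panel([200, 198, 3, 205, 201])  # bead 2 is dead
print(result["abnormal_beads"], result["panel_pass"])  # [2] False
```

The overall BLU result would then combine this lamp panel verdict with the corresponding checks on the post-film-coating optical data.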
In a second aspect, the present application provides a backlight module detection method, where the backlight module includes a lamp panel and M optical film layers. The method includes: capturing images of the lamp panel in a lighted state to obtain optical data of the lamp panel; inputting the optical data of the lamp panel into a second prediction model, which outputs M sets of post-film-coating optical data, each being the image the lamp panel would display in a lighted state when covered with one of the M different optical film layers; and performing quality detection separately on the M sets of post-film-coating optical data to obtain detection results for the backlight module under each of the M film coatings.
According to this method, the final BLU detection result for each candidate optical film layer covering the existing lamp panel can be obtained, so that the optimal optical film layer can be selected. There is no need to cover the lamp panel with each of the different optical film layers and test them in turn with equipment, which improves the speed and accuracy of BLU detection and reduces detection cost.
In some possible implementations, after the detection results for the M film coatings are obtained, the method further includes: determining, from the M sets of post-film-coating optical data and according to the M detection results, the optimal set of post-film-coating optical data; and determining the optical film layer corresponding to that optimal set as the second optical film layer of the backlight module.
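The selection step above reduces to picking the best of M predicted results. In the sketch below the detection results are represented as hypothetical 0–1 quality scores (higher is better); this scoring scheme is an assumption for illustration, not something specified by this application.

```python
def select_best_film(detection_scores):
    """Return the index of the film layer with the best predicted result."""
    return max(range(len(detection_scores)), key=lambda m: detection_scores[m])

scores = [0.91, 0.97, 0.88]  # hypothetical results for M = 3 film layers
print(select_best_film(scores))  # 1
```

In a real pipeline the score could itself combine uniformity and defect metrics; the argmax structure stays the same.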
In some possible implementations, before the lamp panel optical data is input into the second prediction model, the method further includes: acquiring a second sample data set comprising M second sample data subsets, each comprising a plurality of sets of second sample data; each set of second sample data comprises optical data of a lamp panel sample in a lighted state and optical data of the same sample in a lighted state after its light emitting surface is covered with an optical film layer. Optical film layers corresponding to different sample data subsets are of different types, while those within the same subset are of the same type. Model training is performed on the second sample data set to obtain a third generation network model, which generates M sets of post-film-coating optical data from one set of lamp panel optical data; the third generation network model is iteratively trained with the discrimination network model, and the fourth generation network model obtained by training is used as the second prediction model.
In some possible implementations, iteratively training the third generation network model with the discrimination network model includes: training the third generation network model on the sets of second sample data in each second sample data subset so that it generates sets of backlight module optical data, comprising optical data of lamp panel samples in a lighted state and post-film-coating optical data; and training the discrimination network model on the sets of second sample data and the sets of backlight module optical data generated by the third generation network model, so that the generated data approaches the second sample data.
In some possible implementations, acquiring the second sample data set includes: and acquiring multiple images of the backlight module corresponding to each optical film layer in the M types to obtain multiple groups of second sample data.
In some possible implementations, performing quality detection on the M sets of post-film-coating optical data to obtain the detection results for the M film coatings comprises: performing uniformity detection and defect detection separately on each of the M sets of post-film-coating optical data.
In a third aspect, the application provides an electronic device comprising a processor, a memory and a computer program stored on the memory, the processor being for executing the computer program to cause the electronic device to implement the method as in the first aspect.
In a fourth aspect, the application provides a computer readable storage medium storing a computer program which, when run on an electronic device, causes the electronic device to perform the method as in the first aspect.
The method described in the first aspect may be implemented by hardware, or may be implemented by executing corresponding software by hardware. The hardware or software includes one or more modules or units corresponding to the functions described above. Such as a processing module or unit, a display module or unit, etc.
It will be appreciated that the advantages of the third and fourth aspects may be found in the relevant description of the first aspect and are not described in detail herein.
Drawings
FIG. 1 is a flow chart of a method for detecting a backlight module disclosed in the prior art;
fig. 2 is a flowchart of a method for detecting a backlight module according to an embodiment of the present application;
fig. 3 is a schematic diagram of a model application flow and a defect detection flow corresponding to a backlight module detection method according to an embodiment of the present application;
fig. 4 is a flowchart of a method for detecting a backlight module according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a lamp panel image captured by an image capturing device according to an embodiment of the present application;
FIG. 6 is a flowchart of a second generating network model according to an embodiment of the present application;
FIG. 7 is a schematic diagram of acquiring a first sample data set according to an embodiment of the present application;
FIG. 8 is a schematic diagram of iterative training of a first generated network model through a discriminant network model according to an embodiment of the present application;
fig. 9 is a schematic diagram of quality detection of a backlight module according to an embodiment of the present application;
fig. 10 is a schematic diagram of optical data obtained by obtaining a film from optical data of a lamp panel according to an embodiment of the present application through a prediction model;
FIG. 11 is a schematic diagram of another embodiment of a quality detection method for a backlight module according to the present application;
FIG. 12 is a flowchart of another method for detecting a backlight module according to an embodiment of the present application;
fig. 13 is a schematic diagram of M backlight modules formed by coating M optical film layers on a lamp panel according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The term "and/or" herein is an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. The symbol "/" herein indicates that the associated object is or is a relationship, e.g., A/B indicates A or B.
The terms "first" and "second" and the like in the description and in the claims are used for distinguishing between different objects and not for describing a particular sequential order of objects. In the description of the embodiments of the present application, unless otherwise specified, the meaning of "plurality" means two or more, for example, the meaning of a plurality of processing units means two or more, or the like; the plurality of elements means two or more elements and the like.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In order to facilitate understanding of the embodiments of the present application, some terms of the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
A backlight module, or backlight unit, is the light source behind a liquid crystal display (LCD); its light emitting performance directly affects the visual quality of the liquid crystal display module (LCM). The liquid crystal panel does not emit light itself and must be illuminated by a backlight. The backlight mainly comprises the following components: the light source (lamp panel), a light guide plate, optical film layers, and structural members.
Generative adversarial network (GAN): a deep learning model, and in recent years one of the most promising approaches to unsupervised learning over complex distributions. The framework contains (at least) two modules: a generative model (G) and a discriminative model (D), whose mutual game learning produces remarkably good output. The original GAN theory does not require G and D to be neural networks, only functions capable of fitting the corresponding generation and discrimination tasks; in practice, however, deep neural networks are generally used as G and D. Their roles are: G acts like a sample generator, taking noise as input and outputting a realistic sample; D is a classifier that judges whether an input sample is real or generated. A good GAN application requires a good training method; otherwise, given the freedom of neural network models, the output may be far from ideal.
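The game between G and D can be made concrete with the standard (textbook) GAN objective, which is an assumed general formulation rather than this application's specific networks: D maximizes log D(x) + log(1 − D(G(z))), while G minimizes log(1 − D(G(z))).

```python
import math

def discriminator_loss(d_real, d_fake):
    """Negative of D's objective for one real and one generated sample."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """G's objective for one generated sample (lower when D is fooled)."""
    return math.log(1.0 - d_fake)

# At the theoretical equilibrium D outputs 0.5 for both real and generated
# data, giving a discriminator loss of 2*ln(2):
print(round(discriminator_loss(0.5, 0.5), 4))  # 1.3863
```

When D classifies confidently and correctly, its loss drops well below this equilibrium value; training G pushes D back toward it.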
Currently, there are two general ways to check BLU uniformity or yield: human-eye observation or device-based detection. Human-eye observation is strongly affected by subjective judgment, gives inaccurate results, and is inefficient. Device-based detection generally lights up the lamp panel to judge whether lighting defects exist or whether uniformity meets requirements, or tests the whole BLU to judge its uniformity.
Referring to fig. 1, in the prior art, detection of BLU lighting uniformity generally falls into two types of processes. In the first, before the lamp panel is covered with film, an image of the lighted lamp panel is captured at the lamp bead station by a visual device such as a camera, and the optical data of the emitted light is analyzed to detect the lamp panel's lighting uniformity or defects. In the second, after the lamp panel is covered with film, an image of the lighted, film-coated lamp panel is captured by a visual device at the panel station, and the optical data of the emitted light is analyzed to detect the film-coated lamp panel's lighting uniformity or defects. Because a BLU generally consists of a lamp panel and a plurality of optical film layers, the panel station must test multiple times, once per optical film layer applied, which makes the procedure complicated and inefficient.
Based on this, an embodiment of the present application provides a backlight module detection method, as shown in fig. 2, in which the post-film-coating optical data is obtained through a prediction model with only the lamp panel's optical data collected, and defect detection is performed on the backlight module's optical data to obtain the overall BLU detection result. This scheme improves the speed and accuracy of BLU detection and reduces detection cost.
For a better understanding of embodiments of the present application, the following is a brief description of the embodiments of the present application:
The method provided by the embodiment of the present application is applied to a backlight module comprising a lamp panel and an optical film layer covering its light emitting surface, and includes: performing image acquisition on the lamp panel in a lighted state to obtain its optical data; inputting the optical data into a prediction model corresponding to the optical film layer to obtain post-film-coating optical data; performing quality detection separately on the lamp panel's optical data and the post-film-coating optical data to obtain a lamp panel detection result and a post-film-coating detection result; and combining the two to obtain the quality detection result of the backlight module.
Referring to fig. 3, a backlight module detection method provided by an embodiment of the present application includes a model application process and a defect detection process. Through the two processes, under the condition that only the optical data of the lamp panel are collected, the optical data of the lamp panel are input into a prediction model, and the optical data after film coating is obtained through the prediction model; respectively carrying out quality detection processing on the optical data of the lamp panel and the optical data after film coating to obtain a lamp panel detection result and a detection result after film coating; and summarizing the lamp panel detection result and the detection result after film coating to obtain the overall detection result of the backlight module BLU. By the scheme, the BLU detection speed and accuracy can be improved; under the condition that a plurality of optical film layers exist, the optimal optical film layer can be selected according to the detection result, so that the detection cost is reduced.
The execution body of the backlight module detection method provided by the embodiment of the present application may be an electronic device, or a functional module and/or functional entity within an electronic device capable of implementing the method. The scheme may be implemented in hardware and/or software, as determined by actual use requirements, which is not limited by the embodiments of the present application. The method is described below by way of example for an electronic device, with reference to the accompanying drawings.
The following describes a method and a system for detecting a backlight module according to the present application with reference to specific embodiments.
Embodiment one: backlight module detection method for given optical film layer
Fig. 4 is a flowchart of a method for detecting a backlight module according to an embodiment of the application. Referring to fig. 4, the method includes steps S101 to S103 described below.
S101, image acquisition is carried out on the lamp panel in the lighted state, and optical data of the lamp panel are obtained.
Referring to fig. 5, an image of the lamp panel in the lighted state is captured by an image acquisition device, for example a camera, scanner, radar, or laser scanner. These devices convert real-world scenes into digital signals for processing and analysis by a computer. The embodiments of the present application are not limited in this respect.
It can be understood that the acquired image should have high contrast, uniform overall gray level, moderate brightness, and so on, so that quality detection can be conveniently performed on the data.
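An acquisition gate of this kind might be sketched as below, over a grayscale image flattened to a list of 0-255 pixel values: contrast via the Michelson ratio and mean brightness within a moderate range. The thresholds are illustrative assumptions, not values from this application.

```python
def image_acceptable(pixels, min_contrast=0.3, brightness_range=(60, 200)):
    """Hypothetical acceptance check on a captured grayscale image."""
    lo, hi = min(pixels), max(pixels)
    # Michelson contrast: (max - min) / (max + min), guarding an all-black image.
    contrast = (hi - lo) / (hi + lo) if (hi + lo) else 0.0
    mean = sum(pixels) / len(pixels)
    return contrast >= min_contrast and brightness_range[0] <= mean <= brightness_range[1]

print(image_acceptable([50, 200, 120, 130]))   # True
print(image_acceptable([120, 121, 119, 120]))  # False: almost no contrast
```

Images failing such a gate would be recaptured rather than passed on to quality detection.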
S102, inputting the optical data of the lamp panel into a first prediction model corresponding to the first optical film layer to obtain the optical data of the backlight module when the backlight module is in a lighting state.
Illustratively, the obtained optical data of the backlight module includes optical data of the lamp panel and optical data of the lamp panel after film coating predicted by the first prediction model.
The first prediction model is a generative adversarial network, a deep learning model whose framework comprises at least two models: a generation network model and a discrimination network model, whose mutual game learning produces remarkably good output.
In an exemplary embodiment of the present application, the first prediction model generates post-film-coating optical data from a lamp panel's optical data; its output is generated data that closely approaches the acquired post-film-coating optical data.
In the embodiment of the present application, as shown in fig. 6, before the lamp panel's optical data is input into the first prediction model corresponding to the first optical film layer, a first generation network model must be established and trained to obtain a second generation network model.
Illustratively, a first sample data set is acquired. The first sample data set comprises N first sample data subsets, each comprising a plurality of sets of first sample data; each set comprises optical data of a lamp panel sample in a lighted state and optical data of the same sample in a lighted state after its light emitting surface is covered with the first optical film layer. The first optical film layer may be a single-layer film or a multi-layer composite film; the specific film depends on the actual situation, which is not limited by the embodiments of the present application.
Referring to fig. 7, images are acquired for each of the N types of lamp panel samples to obtain N first sample data subsets, represented in the figure as BLU data sets 1-N; each first sample data subset includes a plurality of sets of first sample data.
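The data set just described might be organized as sketched below: N subsets, one per lamp panel type, each holding paired (bare panel image, film-coated panel image) captures. The structure and the file names are assumptions for illustration only.

```python
def build_first_sample_dataset(n_types, captures_per_type):
    """Sketch of the N-subset, paired-sample layout of the first sample data set."""
    dataset = []
    for panel_type in range(n_types):
        subset = [{"panel_type": panel_type,
                   "bare": f"panel{panel_type}_shot{k}.png",          # hypothetical path
                   "filmed": f"panel{panel_type}_film_shot{k}.png"}   # hypothetical path
                  for k in range(captures_per_type)]
        dataset.append(subset)
    return dataset

ds = build_first_sample_dataset(n_types=3, captures_per_type=4)
print(len(ds), len(ds[0]))  # 3 4
```

Keeping the bare and film-coated images paired per capture is what lets the generation network learn the mapping from one to the other.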
Specifically, model training is performed on the first sample data set to obtain a first generation network model, which generates a lamp panel's post-film-coating optical data from the lamp panel's optical data.
In the embodiment of the present application, G acts like a sample generator: given one set of lamp panel optical data as input, it generates a set of backlight module optical data. D is a classifier that judges whether input backlight module optical data is real or generated.
In the embodiment of the present application, as shown in fig. 8, the first generation network model is further iteratively trained through the discrimination network model, and the second generation network model obtained by training is used as the first prediction model. By establishing the first prediction model, the post-film-coating optical data can be generated directly from the lamp panel's optical data, without separately capturing images of the film-coated lamp panel.
Illustratively, training a first generation network model according to a plurality of groups of first sample data in each first sample data subset, so that the first generation network model generates a plurality of groups of backlight module optical data; the optical data of each group of backlight module comprises optical data of a lamp panel sample in a lighting state and optical data of the lamp panel after film coating.
Further, training the discrimination network model according to the plurality of groups of first sample data and the plurality of groups of optical data of the backlight module generated by the first generation network model.
Illustratively, the plurality of groups of backlight module optical data generated by the first generation network model are recorded as generated data, and the plurality of groups of acquired backlight module sample data are recorded as real data. The generated data and the real data are input into the discrimination network model, which judges whether each input is real or generated and outputs a probability value that the input data is real. A loss function of the generative adversarial network is calculated according to this probability value, and the parameters of the generation network and the discrimination network are updated using a back-propagation algorithm according to that loss function, so that the generated data approaches the real data.
Illustratively, training is stopped when the discrimination network can no longer distinguish real data from generated data, i.e., the discrimination network outputs a probability of 0.5 regardless of whether generated data or real data is input.
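The discriminator loss and stopping criterion described above can be sketched as follows. This is a minimal illustration under stated assumptions: the patent specifies only that a loss is computed from the discriminator's probability outputs and that training stops when that probability reaches 0.5; the binary cross-entropy form, the function names, and the tolerance below are assumptions of this sketch.

```python
import math

def discriminator_loss(p_real, p_fake):
    """Binary cross-entropy loss for the discrimination network.

    p_real: probabilities D assigns to acquired (real) backlight module data.
    p_fake: probabilities D assigns to generated data.
    D is trained to push p_real toward 1 and p_fake toward 0.
    """
    eps = 1e-12  # avoid log(0)
    loss = 0.0
    for p in p_real:
        loss += -math.log(p + eps)
    for p in p_fake:
        loss += -math.log(1.0 - p + eps)
    return loss / (len(p_real) + len(p_fake))

def training_converged(probs, tol=0.05):
    """Stop when D outputs ~0.5 for every input, i.e. it can no
    longer tell generated data from real data."""
    return all(abs(p - 0.5) <= tol for p in probs)
```

In practice the probabilities would come from the discrimination network's forward pass, and the loss would drive the back-propagation update of both networks' parameters.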
The generation network and the discrimination network may each be any constructed neural network model, such as a CNN (convolutional neural network), an RNN (recurrent neural network), or a fully connected neural network, as long as the task can be completed; this is not limited in the embodiment of the present application.
Iterative training is performed on the first generation network model using the discrimination network model to obtain a second generation network model, so that the generated data produced by the second generation network model closely approaches the acquired backlight module sample data.
And taking the second generated network model as a first prediction model, and inputting the optical data of the lamp panel into the first prediction model corresponding to the first optical film layer to obtain the optical data of the backlight module when the backlight module is in a lighting state.
And S103, performing quality detection on the backlight module according to the optical data of the backlight module to obtain a quality detection result of the backlight module.
Illustratively, referring to fig. 9 in detail, the quality detection of the backlight module includes detecting the optical data of the lamp panel and detecting the optical data of the lamp panel after the lamp panel is coated.
Abnormal lamp point detection and uniformity detection among lamp beads are carried out according to the optical data of the lamp panel, so that a lamp panel detection result is obtained; performing uniformity detection and defect detection according to the optical data of the lamp panel after film coating to obtain a detection result of the lamp panel after film coating; and summarizing the detection result of the lamp panel and the detection result of the lamp panel after film coating to obtain the quality detection result of the backlight module. And quality detection is carried out on the optical data of the lamp panel and the optical data of the lamp panel after film coating respectively, so that a quality detection result of the backlight module is obtained.
In some embodiments, abnormal lamp point detection proceeds as follows: an abnormality index is computed for the target lamp points in a preset area, and the abnormal lamp points are then determined quickly and accurately using connected-domain detection. Uniformity among the lamp beads can be checked by judging whether the illumination uniformity of the lamp beads meets the requirement.
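A minimal sketch of the connected-domain step might look like the following. The brightness-threshold rule and 4-connectivity are assumptions made for illustration, since the patent does not define the abnormality index.

```python
def abnormal_lamp_regions(brightness, lo, hi):
    """Flag lamp points whose brightness falls outside [lo, hi],
    then group the flagged points into 4-connected regions
    (connected-domain detection).
    Returns a list of regions, each a list of (row, col) points."""
    rows, cols = len(brightness), len(brightness[0])
    flagged = [[not (lo <= v <= hi) for v in row] for row in brightness]
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if flagged[r][c] and not seen[r][c]:
                # flood fill over one connected domain
                region, stack = [], [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and flagged[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions
```

Each returned region marks a cluster of out-of-range lamp points; region size and position can then feed the downstream detection result.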
Illustratively, uniformity detection and defect detection are performed according to optical data after the lamp panel is coated, and possible implementation manners are as follows: acquiring an acquired image of a display screen to be detected; generating a defect detection map according to the acquired image; and displaying the generated defect detection diagram in detection equipment for detection so as to facilitate the visual inspection of the defects by a user.
Specifically, in the embodiment of the application, the display uniformity judgment and/or the product classification can be performed on the display screen to be detected according to at least one of the number of the defect areas, the average value of the defect scores of all the defect areas and the positions of the defect areas.
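As a hedged sketch, the product classification could combine the three criteria named above (number of defect regions, average defect score, defect positions). All keys, grade labels, and thresholds here are hypothetical, not taken from the patent:

```python
def classify_panel(defect_regions, max_regions=3, max_avg_score=0.5):
    """Grade a panel from its defect-detection result.

    defect_regions: list of dicts with hypothetical keys
    'score' (0..1 severity) and optional 'central' (whether the
    defect lies in the panel's central viewing area).
    Thresholds are illustrative; real limits would come from the
    screen factory's quality specification.
    """
    if not defect_regions:
        return "pass"
    avg = sum(d["score"] for d in defect_regions) / len(defect_regions)
    # central defects are more visible, so fail immediately (example rule)
    central = any(d.get("central", False) for d in defect_regions)
    if central or len(defect_regions) > max_regions or avg > max_avg_score:
        return "fail"
    return "rework"
```

The three-way grade is one possible design; the patent only requires that uniformity judgment and/or product classification be derived from at least one of the three criteria.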
In an application scenario of the embodiment of the present application, referring to fig. 10, in the case where only lamp panel optical data is collected, the lamp panel optical data is input to the first prediction model, and the post-film optical data of the lamp panel is obtained through the first prediction model. Referring to fig. 11, quality detection is performed on the lamp panel optical data and the post-film lamp panel optical data, so as to obtain a lamp panel detection result and a post-film detection result; the two results are summarized to obtain the overall detection result of the backlight module (BLU). This scheme can improve the speed and accuracy of BLU detection and reduce detection cost.
The first embodiment can also be extended to another application scenario. When a backlight module is developed, or manufactured by an actual screen factory, different lamp bead counts, lamp bead spacings, lamp bead arrangements, and matching optical film layers must be designed so that the backlight module as a whole emits light uniformly. If the backlight module is first produced and only then detected, discovering unsatisfactory uniformity at that point wastes time and material. Based on this, the embodiment of the application provides a backlight module detection method: when a plurality of candidate optical film layers exist, optical data of the lamp panel is collected, the post-film optical data of the lamp panel under each different optical film layer is predicted through the second prediction model corresponding to that film layer, and the predicted post-film optical data is analyzed to determine the optimal film covering scheme. This scheme is described in detail in embodiment two below.
Embodiment two: backlight module detection method when M optical film layers exist
Fig. 12 is a flowchart of a method for detecting a backlight module according to an embodiment of the application. Referring to fig. 12, the method includes steps S201 to S203 described below.
S201, collecting images of the lamp panel in the lighted state, and obtaining optical data of the lamp panel.
Image acquisition is carried out on the lamp panel in the lit state using image acquisition equipment, and an image with high contrast, uniform overall gray level, and moderate brightness is selected to obtain the lamp panel optical data. The image acquisition method is the same as that in the first embodiment.
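The image-selection criterion above (high contrast, uniform gray level, moderate brightness) could be scored as follows. The scoring formula and weights are illustrative assumptions, not taken from the patent:

```python
def image_quality_score(pixels, target_mean=128.0):
    """Score one captured gray-scale image (flat list of pixel values).

    Rewards contrast (standard deviation of pixel values) and
    penalizes deviation from a moderate mean brightness.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    contrast = var ** 0.5              # std-dev as a contrast proxy
    brightness_penalty = abs(mean - target_mean)
    return contrast - brightness_penalty

def select_best_image(images):
    """Pick the acquisition with the highest quality score."""
    return max(images, key=image_quality_score)
```

A real system might add a gray-level uniformity term or use a perceptual metric; the point is only that one image is ranked best among several acquisitions.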
S202, inputting the optical data of the lamp panels into a second prediction model, and outputting the optical data of M lamp panels after film coating by the second prediction model.
The second prediction model is a generative adversarial network, a deep learning model whose framework consists of at least two sub-models: a generation network model and a discrimination network model, which produce a fairly good output through mutual adversarial (game) learning.
In the embodiment of the application, the second prediction model is a model for generating post-film optical data of a lamp panel according to the optical data of that lamp panel; the output of the second prediction model is generated data that closely approaches the acquired post-film optical data of the lamp panel. It can be understood that the M optical film layers in this embodiment correspond to different second prediction models, so M second prediction models are used in this embodiment.
In the embodiment of the application, before the optical data of the lamp panel is input into the second prediction model, a third generation network model is also required to be established, and the third generation network model is trained to obtain a fourth generation network model.
For example, referring to fig. 13, M optical film layers are respectively coated on a given lamp panel sample to form M backlight modules as samples, and each backlight module includes one optical film layer and a given lamp panel.
Further, a second sample data set is acquired. The second sample data set includes M second sample data subsets, each second sample data subset including a plurality of groups of second sample data. Each group of second sample data comprises optical data of the lamp panel sample in the lit state, and optical data of the lamp panel sample in the lit state after its light emitting surface is covered with an optical film layer. The types of the optical film layers corresponding to different sample data subsets are different, and the types of the optical film layers corresponding to the same sample data subset are the same. The M optical film layers may each be a single-layer film or a multi-layer composite film; which specific film is used depends on the actual situation, and this is not limited in the embodiment of the present application.
Further, multiple images are acquired for the backlight module corresponding to each optical film layer in the M types, so that multiple groups of second sample data are obtained.
Specifically, model training is carried out according to the second sample data set to obtain a third generation network model; the third generation network model is a model for generating M groups of post-film lamp panel optical data according to the optical data of one lamp panel.
In the embodiment of the application, G is analogous to a sample generator: given one set of lamp panel optical data as input, it generates a group of backlight module optical data. D is analogous to a classifier, used for judging whether the input backlight module optical data is real or generated.
In the embodiment of the application, the third generation network model also needs to be iteratively trained through the discrimination network model. By establishing the second prediction model, the post-film optical data of the lamp panel can be generated directly from the lamp panel optical data, without separately acquiring multiple images of the coated lamp panel.
Illustratively, training a third generating network model according to a plurality of groups of second sample data in each second sample data subset, so that the third generating network model generates a plurality of groups of backlight module optical data; the optical data of the backlight module comprises optical data of a plurality of lamp panel samples in a lighting state and optical data of a plurality of lamp panels after being coated with films;
Further, the discrimination network model is trained according to the plurality of groups of second sample data and the plurality of groups of backlight module optical data generated by the third generation network model.
Illustratively, the plurality of groups of backlight module optical data generated by the third generation network model are recorded as generated data, and the plurality of groups of acquired backlight module sample data are recorded as real data. The generated data and the real data are input into the discrimination network model, which judges whether each input is real or generated and outputs a probability value that the input data is real. A loss function of the generative adversarial network is calculated according to this probability value, and the parameters of the generation network and the discrimination network are updated using a back-propagation algorithm according to that loss function, so that the generated data approaches the real data.
Illustratively, training is stopped when the discrimination network can no longer distinguish real data from generated data, i.e., the discrimination network outputs a probability of 0.5 regardless of whether generated data or real data is input.
The generation network and the discrimination network may each be any constructed neural network model, such as a CNN, an RNN, or a fully connected neural network, as long as the task can be completed.
And performing iterative training on the third generation network model by using the discrimination network model to obtain a fourth generation network model, so that a plurality of groups of backlight module optical data generated by the fourth generation network model approach to a plurality of groups of acquired backlight module sample data.
And taking the fourth generated network model obtained through training as a second prediction model. And inputting the optical data of the lamp panels into a second prediction model, and outputting the optical data of M lamp panels after film coating by the second prediction model.
And S203, respectively performing quality detection on the optical data after the M lamp panels are coated, and obtaining detection results after the M lamp panels of the backlight module are coated.
For example, uniformity detection and defect detection are performed on the optical data after the M lamp panels are coated, so as to obtain detection results after the M lamp panels of the backlight module are coated.
In the embodiment of the application, uniformity detection and defect detection are respectively carried out on optical data of M lamp panels after film coating, and possible implementation modes are as follows: acquiring an acquired image of a display screen to be detected; generating a defect detection map according to the acquired image; and displaying the generated defect detection diagram in detection equipment for detection so as to facilitate the visual inspection of the defects by a user.
Specifically, in the embodiment of the application, the display uniformity judgment and/or the product classification can be performed on the display screen to be detected according to at least one of the number of the defect areas, the average value of the defect scores of all the defect areas and the positions of the defect areas.
According to the backlight module detection method provided by the embodiment of the application, when a plurality of optical film layers exist, optical data of the lamp panel is collected, the post-film optical data under each different optical film layer is predicted through the second prediction model corresponding to that film layer, and the predicted post-film optical data is analyzed to determine the optimal film covering scheme. This scheme can improve the speed and accuracy of BLU detection and reduce detection cost.
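Choosing the optimal film covering scheme from the M post-film detection results (as later claimed in claim 7) can be sketched as follows. The result keys and the ranking rule are hypothetical assumptions; the patent only requires that the optimal post-film optical data be determined from the M detection results.

```python
def best_film_layer(detection_results):
    """Choose the optimal optical film layer from the M post-film
    detection results.

    detection_results: dict mapping a film-layer id to a result dict
    with hypothetical keys 'uniformity' (higher is better) and
    'defect_count' (lower is better). Rank by uniformity first,
    breaking ties by fewer defects.
    """
    return max(
        detection_results,
        key=lambda film: (detection_results[film]["uniformity"],
                          -detection_results[film]["defect_count"]),
    )
```

The film layer returned here would then be designated the optimal optical film layer of the backlight module.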
The foregoing describes the solution provided by the embodiments of the present application primarily from the perspective of method steps. It will be appreciated that, in order to implement the above-described functions, an electronic device implementing the method includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the electronic device according to the method example, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other possible division manners may be implemented in practice.
It should also be noted that, in the embodiment of the present application, "greater than" may be replaced with "greater than or equal to", "less than or equal to" may be replaced with "less than", or "greater than or equal to" may be replaced with "greater than", "less than" may be replaced with "less than or equal to".
The embodiment of the application also provides a chip, which is coupled with the memory and is used for reading and executing the computer program or the instructions stored in the memory to execute the method in each embodiment.
The embodiments of the present application also provide an electronic device including a chip for reading and executing a computer program or instructions stored in a memory, so that the method in each embodiment is performed.
The embodiment of the application also provides a computer readable storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to execute the related method steps above to implement the backlight module detection method in the foregoing embodiments.
The embodiment of the application also provides a computer program product storing program code; when the computer program product runs on a computer, the computer is caused to execute the related steps above, so as to implement the backlight module detection method in the foregoing embodiments.
In addition, an embodiment of the present application also provides an apparatus, which may specifically be a chip, component, or module, and may include a processor and a memory coupled to each other; the memory is configured to store computer-executable instructions, and when the apparatus operates, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the backlight module detection method in the foregoing method embodiments.
The electronic device, the computer readable storage medium, the computer program product or the chip provided by the embodiments of the present application are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (13)
1. The backlight module detection method is characterized in that the backlight module comprises a lamp panel and a first optical film layer covered on a light emitting surface of the lamp panel, and the method comprises the following steps:
image acquisition is carried out on the lamp panel in the lighted state, so that optical data of the lamp panel are obtained;
inputting the optical data of the lamp panel into a first prediction model corresponding to the first optical film layer to obtain the optical data of the backlight module when the backlight module is in a lighting state;
and performing quality detection on the backlight module according to the optical data of the backlight module to obtain a quality detection result of the backlight module.
2. The method of claim 1, wherein prior to said inputting the lamp panel optical data into the first predictive model corresponding to the first optical film, the method further comprises:
Acquiring a first sample data set; the first sample data set includes N first sample data subsets, each first sample data subset including a plurality of groups of first sample data; each group of first sample data comprises optical data of a lamp panel sample in a lit state, and optical data of the lamp panel sample in the lit state after the light emitting surface of the lamp panel sample is covered with the first optical film layer; the types of the lamp panel samples corresponding to different sample data subsets are different, and the types of the lamp panel samples corresponding to the same sample data subset are the same;
model training is carried out according to the first sample data set, and a first generation network model is obtained;
and carrying out iterative training on the first generated network model through the judging network model, and taking a second generated network model obtained through training as the first prediction model.
3. The method of claim 2, wherein the iteratively training the first generated network model by discriminating the network model comprises:
training the first generation network model according to a plurality of groups of first sample data in each first sample data subset, so that the first generation network model generates a plurality of groups of backlight module optical data; each group of optical data of the backlight module comprises optical data of the lamp panel sample in a lighting state and optical data of the lamp panel after film coating;
And training the judging network model according to the multiple groups of first sample data and the multiple groups of backlight module optical data generated by the first generating network model, so that the multiple groups of backlight module optical data generated by the first generating network model are close to the multiple groups of first sample data.
4. A method according to claim 2 or 3, wherein said obtaining a first sample dataset comprises: and respectively acquiring multiple images of the lamp panel samples of each of the N types to obtain multiple groups of first sample data.
5. The method of claim 1, wherein the performing quality detection on the backlight module according to the optical data of the backlight module to obtain a quality detection result of the backlight module comprises:
abnormal lamp point detection and uniformity detection among lamp beads are carried out on the lamp panel optical data, and a lamp panel detection result is obtained;
performing uniformity detection and defect detection on the optical data of the lamp panel after film coating to obtain a detection result of the lamp panel after film coating;
and obtaining the quality detection result of the backlight module according to the detection result of the lamp panel and the detection result after the lamp panel is coated with the film.
6. The backlight module detection method is characterized in that the backlight module comprises a lamp panel and M optical film layers, and the method comprises the following steps:
Collecting images of the lamp panels in the lighted state to obtain optical data of the lamp panels;
inputting the optical data of the lamp panels into a second prediction model, and outputting the optical data of M lamp panels after film coating by the second prediction model; the optical data of the M lamp panels after film coating are images which are displayed when the lamp panels are in a lighting state and respectively cover M different optical film layers;
and respectively performing quality detection on the optical data after the M lamp panels are coated, and obtaining detection results after the M lamp panels of the backlight module are coated.
7. The method according to claim 6, wherein after the detection result after the lamination of the M lamp panels of the backlight module is obtained, the method further comprises:
according to the detection results of the M lamp panels of the backlight module after film lamination, determining optical data of the optimal lamp panels after film lamination from the optical data of the M lamp panels after film lamination;
and determining the optical film layer corresponding to the optical data of the optimal lamp panel after film coating as the optimal optical film layer of the backlight module.
8. The method of claim 6 or 7, wherein prior to said entering the lamp panel optical data into the second predictive model, the method further comprises:
Acquiring a second sample data set; the second sample data set includes M second sample data subsets, each second sample data subset including a plurality of groups of second sample data; each group of second sample data comprises optical data of the lamp panel sample in a lit state, and optical data of the lamp panel sample in the lit state after the light emitting surface of the lamp panel sample is covered with the optical film layer; the types of the optical film layers corresponding to different sample data subsets are different, and the types of the optical film layers corresponding to the same sample data subset are the same;
model training is carried out according to the second sample data set, and a third generated network model is obtained; the third generation network model is a model for generating optical data of M lamp panel films according to the optical data of one lamp panel;
and carrying out iterative training on the third generated network model through the judging network model, and taking a fourth generated network model obtained through training as the second prediction model.
9. The method of claim 8, wherein the iteratively training the third generated network model by discriminating the network model comprises:
training the third generating network model according to a plurality of groups of second sample data in each second sample data subset, so that the third generating network model generates a plurality of groups of backlight module optical data; the optical data of the backlight module comprises optical data of the plurality of lamp panel samples in a lighting state and optical data of the plurality of lamp panels after film coating;
And training the judging network model according to the plurality of groups of second sample data and the plurality of groups of backlight module optical data generated by the third generating network model, so that the plurality of groups of backlight module optical data generated by the third generating network model approaches to the plurality of groups of second sample data.
10. The method of claim 8, wherein the acquiring a second sample data set comprises: and acquiring multiple images of the backlight module corresponding to each optical film layer in the M types to obtain multiple groups of second sample data.
11. The method of claim 6, wherein the quality detecting the optical data of the M lamp panels after film lamination to obtain the detection results of the M lamp panels after film lamination of the backlight module comprises:
and respectively carrying out uniformity detection and defect detection on the optical data after the M lamp panels are coated, and obtaining detection results after the M lamp panels of the backlight module are coated.
12. An electronic device comprising a processor, a memory, and a computer program stored on the memory, the processor being configured to execute the computer program to cause the electronic device to implement the method of any one of claims 1-11.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311169272.XA CN117192821A (en) | 2023-09-12 | 2023-09-12 | Backlight module detection method, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117192821A true CN117192821A (en) | 2023-12-08 |
Family
ID=88988269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311169272.XA Pending CN117192821A (en) | 2023-09-12 | 2023-09-12 | Backlight module detection method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117192821A (en) |
Legal Events

Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||