CN115516496A - Neural network-based regional backlight dimming method and device - Google Patents


Info

Publication number
CN115516496A
CN115516496A (application CN202080100768.XA / CN202080100768A)
Authority
CN
China
Prior art keywords
image
backlight
liquid crystal
neural network
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080100768.XA
Other languages
Chinese (zh)
Inventor
张涛
王昊
杜文丽
曾琴
李蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN115516496A publication Critical patent/CN115516496A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/3406Control of illumination source
    • G09G3/342Control of illumination source using several illumination sources separately controlled corresponding to different display panel areas, e.g. along one dimension such as lines
    • G09G3/3426Control of illumination source using several illumination sources separately controlled corresponding to different display panel areas, e.g. along one dimension such as lines the different display panel areas being distributed in two dimensions, e.g. matrix
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/36Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0626Adjustment of display parameters for control of overall brightness
    • G09G2320/0646Modulation of illumination source brightness and image signal correlated to each other

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Liquid Crystal Display Device Control (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

A regional backlight dimming method and device based on a neural network relate to the field of image processing and address the problem of how to improve the dynamic range of an image. The method comprises the following steps: after a first image is received, it is input into a first neural network and a second neural network respectively to obtain a backlight image and a liquid crystal pixel compensation image of the first image. The backlight module (203) is then controlled according to the backlight image to provide the liquid crystal panel (204) with a light source matching the backlight image, and the liquid crystal panel (204) is controlled according to the liquid crystal pixel compensation image. Under the combined action of the backlight image and the liquid crystal pixel compensation image, the liquid crystal panel (204) displays a second image whose dynamic range is larger than that of the first image.

Description

Neural network-based regional backlight dimming method and device

Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for regional backlight dimming based on a neural network.
Background
Dynamic range refers to the ratio between the maximum and minimum luminance values in an image, reflecting the span from the darkest to the brightest shades. Compared with a low dynamic range (LDR), a high dynamic range (HDR) expands the displayed brightness range, presents more detail in both bright and dark regions, and brings richer colors and more vivid, natural detail, making the picture closer to what the human eye perceives. A high dynamic range image can therefore reproduce the brightness of a real scene to the greatest extent, giving the user the feeling of being in the scene.
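As an illustrative, non-limiting sketch, the dynamic-range ratio defined above can be computed as follows; the small luminance floor used to keep the ratio finite for pure-black pixels is an assumption of this sketch, not part of the patent text:

```python
def dynamic_range(luminance):
    """Ratio between the maximum and minimum luminance in an image.

    `luminance` is a flat list of per-pixel luminance values; zero values
    are clamped to a small floor so the ratio stays finite (an assumption
    of this sketch).
    """
    floor = 1e-4
    lo = max(min(luminance), floor)
    hi = max(luminance)
    return hi / lo

# A narrow-range image vs. one spanning a much wider range:
ldr = dynamic_range([0.1, 0.5, 1.0])    # ratio 10
hdr = dynamic_range([0.001, 0.5, 1.0])  # ratio 1000
```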
Currently, video systems in the consumer electronics field typically display low dynamic range images, owing to the state of the art in displays and related technology. A low dynamic range image does not reach the dynamic range of natural scenes perceivable by the human eye, so the user sees only a screen rendition from the liquid crystal display rather than a faithful reproduction of the natural scene; viewing quality is reduced and the picture appears unrealistic.
In the conventional art, a local backlight dimming technique displays an image through a liquid crystal display comprising a backlight module and a liquid crystal panel. Specifically, the input image is first partitioned according to the layout of the backlight module, so that each partition of the input image corresponds to a partition of the backlight module. The luminance information of each input-image partition is then extracted, and the original backlight image of the backlight module is determined from that luminance information. Liquid crystal pixel compensation is performed according to the original backlight image; the backlight module is controlled to emit light toward the liquid crystal panel according to the original backlight image, and the liquid crystal panel is controlled according to the liquid crystal pixel compensation image, so that the display shows the image under their combined action. However, the original backlight image determined by the conventional technique has low accuracy, image noise is easily amplified during liquid crystal pixel compensation, and the display quality remains low even as the dynamic range of the image is improved. How to increase the dynamic range of the image is therefore an urgent problem to be solved.
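The conventional pipeline described above can be sketched in one dimension as follows. The per-zone maximum as the backlight extraction rule is one common choice assumed here, not mandated by the patent; the sketch also shows why dividing by a small backlight level amplifies noise:

```python
def conventional_local_dimming(pixels, n_zones):
    # Partition the input into zones matching the backlight layout (1-D here
    # for simplicity), extract a backlight level per zone, then compensate
    # the pixels so that backlight * transmittance reproduces the original.
    zone_len = len(pixels) // n_zones
    backlight = []      # one backlight level per zone
    compensated = []    # liquid crystal pixel compensation image
    for z in range(n_zones):
        zone = pixels[z * zone_len:(z + 1) * zone_len]
        bl = max(zone)  # zone backlight from the local maximum (assumed rule)
        backlight.append(bl)
        # Dividing by a small backlight level also amplifies any image noise,
        # which is the weakness noted in the text above.
        compensated += [p / bl if bl > 0 else 0.0 for p in zone]
    return backlight, compensated

bl, lc = conventional_local_dimming([0.2, 0.4, 0.9, 1.0], n_zones=2)
```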
Disclosure of Invention
The application provides a regional backlight dimming method and device based on a neural network, which can effectively improve the dynamic range of an image.
To achieve this, the present application adopts the following technical solutions:
in a first aspect, the present application provides a neural-network-based regional backlight dimming method, which may be applied to a liquid crystal display or to a processor that supports the liquid crystal display in implementing the method (for example, a system-on-chip). The method includes: after a first image is received, inputting it into a first neural network and a second neural network respectively to obtain a backlight image and a liquid crystal pixel compensation image of the first image; then controlling the backlight module according to the backlight image to provide the liquid crystal panel with a light source matching the backlight image, and controlling the liquid crystal panel according to the liquid crystal pixel compensation image, so that under their combined action the liquid crystal panel displays a second image whose dynamic range is larger than that of the first image.
In addition, before the first image is input into the first neural network for operation, a target backlight image of the sample image is determined, and the first neural network is trained by using the sample image and the target backlight image to generate parameters of the first neural network. And before inputting the first image into the second neural network for operation, determining a target liquid crystal pixel compensation image of the sample image, and training the second neural network by using the sample image and the target liquid crystal pixel compensation image to generate parameters of the second neural network.
Optionally, the first neural network and the second neural network are both deep convolutional neural networks. The first neural network and the second neural network share part of the parameters.
Since the artificial neurons of a neural network respond to surrounding units within their receptive field, neural networks perform particularly well on image processing. The neural-network-based regional backlight dimming method first obtains a target backlight image and a target liquid crystal pixel compensation image of a sample image, trains the first neural network with the target backlight image to generate its parameters, and trains the second neural network with the target liquid crystal pixel compensation image to generate its parameters. A backlight image of the first image is then determined with the first neural network, a liquid crystal pixel compensation image of the first image with the second neural network, and the liquid crystal panel is controlled to display the second image based on both. This increases the dynamic range (that is, the contrast) of the image displayed by the liquid crystal display, improves display quality, and reduces energy consumption.
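For illustration only, the inference flow just described can be sketched as follows. The stub "networks" and the pointwise-product display model are assumptions of this sketch, not the patent's exact implementation:

```python
def local_dimming_inference(first_image, backlight_net, compensation_net):
    # Two trained networks map the first image to a backlight image and a
    # liquid crystal pixel compensation image, respectively.
    backlight_image = backlight_net(first_image)      # first neural network
    lc_compensation = compensation_net(first_image)   # second neural network
    # The displayed (second) image results from the combined action of the
    # backlight and the liquid crystal transmittance at each pixel
    # (modeled here as a pointwise product).
    second_image = [b * c for b, c in zip(backlight_image, lc_compensation)]
    return backlight_image, lc_compensation, second_image

# Stand-in "networks" for demonstration only:
bl_net = lambda img: [0.8 for _ in img]
cmp_net = lambda img: [min(1.0, p / 0.8) for p in img]
_, _, second = local_dimming_inference([0.4, 0.8], bl_net, cmp_net)
```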
In one possible design, determining a target liquid crystal pixel compensation image for a sample image includes: the method comprises the steps of determining a diffusion backlight image of a sample image according to a target backlight image, then determining a plurality of candidate liquid crystal pixel compensation images according to the sample image, the diffusion backlight image and a plurality of preset liquid crystal pixel compensation algorithms, and then determining the target liquid crystal pixel compensation image according to the candidate liquid crystal pixel compensation images and the sample image. Optionally, the liquid crystal pixel compensation algorithm includes a linear compensation algorithm and a non-linear compensation algorithm.
Optionally, determining a target liquid crystal pixel compensation image according to the multiple candidate liquid crystal pixel compensation images and the sample image includes: comparing the candidate liquid crystal pixel compensation images with the sample image, and determining the candidate liquid crystal pixel compensation image with the maximum structural similarity as a target liquid crystal pixel compensation image; or, the candidate liquid crystal pixel compensation image with the highest average score is determined as the target liquid crystal pixel compensation image.
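The structural-similarity selection above can be sketched as follows. A single-window global SSIM is used here as a simplification of the usual local-window SSIM, and the stabilizing constants are illustrative assumptions:

```python
def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Single-window SSIM over the whole (flattened) image: compares mean
    # luminance, variance, and covariance of the two images.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def pick_target_compensation(candidates, sample):
    # The candidate compensation image most structurally similar to the
    # sample image becomes the training target.
    return max(candidates, key=lambda cand: ssim_global(cand, sample))
```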
In another possible design, determining a target backlight image of the sample image includes: determining an original backlight image of a sample image according to pixel values of the sample image, and adjusting the original backlight image to obtain a plurality of candidate backlight images of the sample image; a target backlight image is determined from the plurality of candidate backlight images. Thereby, a light source better matching the backlight image is provided to the liquid crystal panel.
Optionally, adjusting the original backlight image to obtain a plurality of candidate backlight images of the sample image includes: partitioning an original backlight image to obtain a plurality of backlight partitions, wherein partitioned backlight images of the plurality of backlight partitions are different; and adjusting the partitioned backlight images in the plurality of backlight partitions according to the adjusting value to obtain a plurality of candidate backlight images, wherein the candidate backlight images meet backlight limiting conditions, the backlight limiting conditions are determined according to the sample image and a preset regional backlight extraction algorithm, and the value range of the adjusting value is 0-255.
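A minimal sketch of the per-zone adjustment above, assuming a simple additive adjustment value applied to each backlight partition; the `lower_bound` array stands in for the backlight limiting condition derived from the sample image and the zone backlight extraction algorithm, whose exact form the text does not fix:

```python
def candidate_backlights(original, adjustments, lower_bound):
    # `original` and `lower_bound` hold one 0-255 backlight level per zone.
    # Each adjustment value produces one candidate backlight image, clamped
    # to the 0-255 range and to the per-zone limiting condition.
    out = []
    for delta in adjustments:
        cand = [min(255, max(lb, v + delta))
                for v, lb in zip(original, lower_bound)]
        out.append(cand)
    return out

cands = candidate_backlights([100, 200], adjustments=[-20, 0, 30],
                             lower_bound=[90, 150])
```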
Optionally, determining the target backlight image from the plurality of candidate backlight images includes: controlling the backlight module to provide light to the liquid crystal panel according to each candidate backlight image; calculating the dynamic range of the display brightness of the liquid crystal panel under the illumination of the backlight module as controlled by each candidate backlight image; and determining the target backlight image according to the contrasts of the candidate backlight images and the dynamic ranges of the display brightness.
Optionally, determining the target backlight image according to the contrasts of the candidate backlight images and the dynamic ranges of the display luminances includes: carrying out weighting calculation on the contrast ratios of the candidate backlight images and the dynamic ranges of the display brightness to determine a plurality of candidate target backlight images; and determining the candidate target backlight image with the maximum weight as the target backlight image, or determining the candidate target backlight image with the highest average score as the target backlight image.
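The weighted selection above can be sketched as follows; the even 0.5/0.5 weight split between contrast and measured dynamic range is an illustrative assumption, not a value from the patent:

```python
def pick_target_backlight(candidates, contrasts, dyn_ranges, w_contrast=0.5):
    # Score each candidate as a weighted combination of its contrast and the
    # dynamic range of the display brightness measured under it, then keep
    # the highest-scoring candidate.
    scores = [w_contrast * c + (1 - w_contrast) * d
              for c, d in zip(contrasts, dyn_ranges)]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]

target = pick_target_backlight(["cand_A", "cand_B"],
                               contrasts=[0.2, 0.9],
                               dyn_ranges=[0.5, 0.1])
```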
In a second aspect, an embodiment of the present application further provides a neural-network-based regional backlight dimming device; for its beneficial effects, reference may be made to the description of the first aspect, and details are not repeated here. The device has the function of realizing the behavior in the method example of the first aspect. These functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions described above. In one possible design, the device includes a processing unit and a control unit. The processing unit is used for determining a target backlight image of the sample image and training the first neural network according to the sample image and the target backlight image to generate parameters of the first neural network. The processing unit is further configured to determine a target liquid crystal pixel compensation image of the sample image, and train the second neural network according to the sample image and the target liquid crystal pixel compensation image to generate parameters of the second neural network. The processing unit is further used for inputting the first image into the first neural network and the second neural network respectively to obtain a backlight image and a liquid crystal pixel compensation image of the first image. The control unit is used for controlling the liquid crystal panel to display a second image based on the backlight image and the liquid crystal pixel compensation image, wherein the backlight image is used to control the backlight module to provide light to the liquid crystal panel, and the dynamic range of the second image is larger than that of the first image.
The units may perform corresponding functions in the method example of the first aspect, for specific reference, detailed description of the method example is given, and details are not repeated here.
In a third aspect, a neural network-based local backlight dimming device is provided, and the neural network-based local backlight dimming device may be the liquid crystal display in the above method embodiment, or a chip disposed in the liquid crystal display. The neural network based regional backlight dimming device comprises a processor and optionally a memory. Wherein the memory is used for storing a computer program or instructions, and the processor is coupled with the memory, and when the processor executes the computer program or instructions, the neural network based area backlight dimming device is caused to execute the method according to the first aspect executed by the processor in the method embodiments.
In a fourth aspect, a liquid crystal display is provided. The liquid crystal display comprises a processor, a memory, a backlight module and a liquid crystal panel, wherein the memory is used for storing computer programs and instructions, and the processor is used for calling the computer programs and the instructions to assist the backlight module and the liquid crystal panel to execute the neural network-based regional backlight dimming method in the first aspect.
In a fifth aspect, a computer program product is provided, comprising computer program code which, when run, causes the method performed by the processor in the first aspect to be carried out.
In a sixth aspect, the present application provides a chip system, which includes a processor, and is configured to implement the functions of the processor in the method of the first aspect. In one possible design, the system-on-chip further includes a memory for storing program instructions and/or data. The chip system may be formed by a chip, or may include a chip and other discrete devices.
In a seventh aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed, implements the method performed by a processor in the first aspect.
In the present application, the names of the neural-network-based regional backlight dimming device and the liquid crystal display do not limit the devices themselves; in practical implementations, the devices may appear under other names. As long as the function of each device is similar to that described in the present application, it falls within the scope of the claims of the present application and their equivalents.
Drawings
Fig. 1 is a schematic diagram of a neural network provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a liquid crystal display according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an LCD for displaying images according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a neural network based regional backlight dimming method according to an embodiment of the present application;
fig. 5 is a flowchart of a neural network based regional backlight dimming method according to an embodiment of the present application;
FIG. 6 is a block diagram of a first neural network according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of a second neural network according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of regional backlight dimming based on a neural network according to an embodiment of the present application;
fig. 9 is a flowchart of a neural network based regional backlight dimming method according to an embodiment of the present application;
fig. 10 is a flowchart of a neural network based local backlight dimming method according to an embodiment of the present application;
fig. 11 is a schematic diagram illustrating a regional backlight dimming result based on a neural network according to an embodiment of the present application;
fig. 12 is a schematic composition diagram of a neural network based local backlight dimming device according to an embodiment of the present application;
fig. 13 is a schematic composition diagram of a neural network-based local backlight dimming device according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and "third," etc. in the description and claims of this application and the above-described drawings are used for distinguishing between different objects and not for limiting a particular order.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
For clarity and conciseness of the following descriptions of the various embodiments, a brief introduction to the related art is first given:
in the fields of machine learning and cognitive science, a neural network (NN) is a mathematical or computational model that mimics the structure and function of biological neural networks (the central nervous system of animals, particularly the brain) and is used to estimate or approximate functions. A neural network may also be referred to as an artificial neural network (ANN), a neural network model, or a connection model. An artificial neural network achieves its information-processing purpose by adjusting the interconnections among a large number of internal nodes, depending on the complexity of the system. Neural networks include the convolutional neural network (CNN), the deep neural network (DNN), the multilayer perceptron (MLP), and the recurrent neural network (RNN).
(1) Neural network
A neural network may be composed of neural units. A neural unit may be an arithmetic unit that takes inputs x_s and an intercept of 1; its output satisfies the following formula (1):

h(x) = f( sum_{s=1..n} W_s * x_s + b )    (1)

where n is a natural number greater than 1, W_s is the weight of x_s, and b is the bias of the neural unit. f is the activation function of the neural unit, which introduces a nonlinear characteristic into the neural network to convert the input signal of the unit into an output signal. The output of the activation function may serve as the input of the next layer, and the activation function may be, for example, a sigmoid function. A neural network is a network formed by connecting many single neural units together, i.e., the output of one neural unit may be the input of another neural unit. The input of each neural unit can be connected to a local receptive field of the previous layer to extract the features of that field; the local receptive field may be a region composed of several neural units.
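A minimal sketch of formula (1) with a sigmoid activation, for illustration only:

```python
import math

def neuron(x, w, b):
    # Output of a single neural unit: f(sum(W_s * x_s) + b), with f chosen
    # here as the sigmoid function mentioned in the text.
    z = sum(ws * xs for ws, xs in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

neuron([1.0, 2.0], [0.5, -0.25], b=0.0)  # weighted sum is 0, sigmoid(0) = 0.5
```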
Fig. 1 is a schematic diagram of a neural network according to an embodiment of the present application. The neural network 100 has N processing layers, where N is greater than or equal to 3 and is a natural number. The first layer of the neural network is an input layer 110, which is responsible for receiving input signals, and the last layer of the neural network is an output layer 130, which outputs the processing results of the neural network. The other layers except the first and last layers are intermediate layers 140, and these intermediate layers 140 collectively constitute the hidden layers 120, and each intermediate layer 140 in the hidden layers 120 can receive either an input signal or an output signal. The hidden layer 120 is responsible for processing of the input signal. Each layer represents a logic level of signal processing, and through multiple layers, data signals may be processed through multiple levels of logic.
The input signal to the neural network may be various forms of signals such as a video signal, a voice signal, a text signal, an image signal, a temperature signal, and the like in some possible embodiments. In this embodiment, the processed image signal may be various sensor signals such as a landscape signal captured by a camera (image sensor), an image signal of a community environment captured by a monitoring device, and a face signal of a human face acquired by an access control system. The input signals to the neural network also include various other computer-processable engineering signals, which are not further described herein. If the neural network is used for deep learning of the image signal, the image quality can be improved.
(2) Deep neural network
A deep neural network, also called a multi-layer neural network, can be understood as a neural network with multiple hidden layers. Divided by the positions of the layers, the layers of a deep neural network fall into three types: the input layer, the hidden layers, and the output layer. Typically, the first layer is the input layer, the last layer is the output layer, and all layers in between are hidden layers. The layers are fully connected, i.e., any neuron at the i-th layer is connected to every neuron at the (i+1)-th layer.
Although deep neural networks seem complex, they are not really complex in terms of the work of each layer, which is simply a linear relational expression as follows: y = α (Wx + b), where x is the input vector, y is the output vector, b is the offset vector, W is the weight matrix (also called coefficient), and α () is the activation function. Each layer simply performs such a simple operation on the input vector x to obtain the output vector y. Due to the large number of layers of the deep neural network, the number of coefficients W and offset vectors b is also large. The definition of these parameters in the deep neural network is as follows: taking the coefficient W as an example: suppose that in a three-layer deep neural network, the linear coefficients from the 4 th neuron of the second layer to the 2 nd neuron of the third layer are defined as
W^3_24, where the superscript 3 denotes the layer in which the coefficient W is located, and the subscripts correspond to the output index 2 in the third layer and the input index 4 in the second layer. In general, the coefficient from the k-th neuron at layer L-1 to the j-th neuron at layer L is defined as W^L_jk.
Note that the input layer has no W parameter. In deep neural networks, more hidden layers allow the network to better model complex situations in the real world. In theory, a model with more parameters has higher complexity and larger "capacity", meaning it can accomplish more complex learning tasks. Training a deep neural network is the process of learning the weight matrices; its final goal is to obtain the weight matrices (formed by the vectors W of many layers) of all layers of the trained deep neural network.
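The per-layer expression y = α(Wx + b) can be sketched as a forward pass; the sigmoid choice for α is an illustrative assumption:

```python
import math

def forward(x, layers):
    # `layers` is a list of (W, b) pairs; each layer computes y = α(Wx + b),
    # with α taken here as the sigmoid function.
    for W, b in layers:
        x = [1.0 / (1.0 + math.exp(-(sum(wij * xj for wij, xj in zip(row, x)) + bi)))
             for row, bi in zip(W, b)]
    return x

# One hidden layer of two neurons; zero weights and biases give sigmoid(0):
out = forward([0.0, 0.0], [([[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0])])
```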
(3) Convolutional neural network
A convolutional neural network is a deep neural network with a convolutional structure. It contains a feature extractor consisting of convolutional layers and sub-sampling layers, which can be regarded as a filter. A convolutional layer is a layer of neurons that performs convolution processing on the input signal. In a convolutional layer, a neuron may be connected to only a portion of the neurons in the adjacent layer. A convolutional layer usually contains several feature planes, and each feature plane may be composed of neural units arranged in a rectangle. Neural units of the same feature plane share weights, and the shared weights are the convolution kernel. Weight sharing can be understood as making the way image information is extracted independent of location. The convolution kernel can be initialized as a matrix of random size and learns reasonable weights during training of the convolutional neural network. A direct benefit of weight sharing is fewer connections between layers of the network, together with a reduced risk of overfitting.
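Weight sharing can be illustrated with a one-dimensional convolution: every output position reuses the same kernel weights, so the extraction is independent of location:

```python
def conv1d_valid(signal, kernel):
    # Slide one shared kernel over the input ("valid" positions only).
    # The same weights are applied at every position, which is the
    # weight sharing described above.
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

conv1d_valid([1, 2, 3, 4], [1, 0, -1])  # [-2, -2]
```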
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 2 is a schematic composition diagram of a liquid crystal display according to an embodiment of the present disclosure. As shown in fig. 2, the liquid crystal display 200 includes a processor 201, a processor 202, a backlight module 203, a liquid crystal panel 204, and a memory 205, which are connected by a control bus 206.
The following describes the components of the liquid crystal display 200 in detail with reference to fig. 2:
The liquid crystal display 200 includes a plurality of processors, such as the processor 201 and the processor 202 shown in fig. 2. The processor 201 is the control center of the liquid crystal display 200. Typically, the processor 201 is a Central Processing Unit (CPU) including one or more CPU cores, such as CPU0 and CPU1 shown in fig. 2. Further, the processor 201 may also be an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, such as one or more digital signal processors (DSPs) or one or more Field Programmable Gate Arrays (FPGAs). The processor 201 may control the liquid crystal display 200 to display an image by running or executing a software program stored in the memory 205 and calling data stored in the memory 205.
The processor 202 may have the same physical form as the processor 201, or may have a different physical form from the processor 201. The processor 202 is a processing chip with computing power for performing an operation on the first image to obtain a backlight image and a liquid crystal pixel compensation image of the first image to reduce the computing burden of the processor 201. The backlight image may also be referred to herein as a backlight matrix or backlight brightness.
In an embodiment of the application, the processor 202 is configured to assist the processor 201 in processing the first image. For example, the processor 202 acquires a backlight image of the first image using the first neural network and a liquid crystal pixel compensation image of the first image using the second neural network according to the first image indicated by the processor 201, and feeds back the backlight image and the liquid crystal pixel compensation image of the first image to the processor 201. The processor 201 controls the backlight module 203 to provide light to the liquid crystal panel 204 according to the backlight image, controls the liquid crystal of the liquid crystal panel 204 according to the liquid crystal pixel compensation image, and enables the liquid crystal display 200 to display the second image under the combined action of the backlight image and the liquid crystal pixel compensation image. For example, as shown in fig. 3, a schematic diagram of a liquid crystal display device displaying an image according to an embodiment of the present application is provided. The backlight module 203 provides light to the liquid crystal panel 204, and the liquid crystal panel 204 displays a second image according to the liquid crystal pixel compensation image.
Specifically, the liquid crystal panel 204 includes sub-pixels, liquid crystal molecules, a color filter layer, and the like. The processor 201 generates a corresponding data voltage V_data according to the display data of the liquid crystal pixel compensation image. The switching tubes in the sub-pixels are gated row by row through gate lines, and V_data charges, through the data lines and the gated switching tube, the pixel electrode electrically connected to that switching tube. Under the combined action of the pixel electrode and the common electrode in the sub-pixel, the deflection angle of the liquid crystal molecules can be controlled, thereby controlling the transmittance of the light provided by the backlight module through the liquid crystal layer in the sub-pixel, and thus the gray scale displayed by the sub-pixel. In addition, the display panel is provided with a color filter layer that filters the light emitted by different sub-pixels, so as to realize color image display, for example with the three primary colors RGB. For the specific principle by which the liquid crystal panel displays images, reference may be made to the prior art, and the application is not limited thereto.
In practical applications, the processor 202 may be an accelerator card, a coprocessor, a Graphics Processing Unit (GPU), a Neural Network Processor (NPU), or the like. In this embodiment, one or more processors 201 may be configured, and one or more processors 202 may be configured. It should be noted, however, that the processor 202 is an optional component in this embodiment. Even if only one processor 201 is provided, the processor 201 can independently receive the first image, obtain the backlight image of the first image by using the first neural network, obtain the liquid crystal pixel compensation image of the first image by using the second neural network, control the backlight module 203 to provide light to the liquid crystal panel 204 according to the backlight image, and control the liquid crystal of the liquid crystal panel 204 according to the liquid crystal pixel compensation image, so that the liquid crystal display 200 displays the second image under the combined action of the backlight image and the liquid crystal pixel compensation image. When the liquid crystal display 200 includes both the processor 201 and the processor 202, they cooperate to perform the above operations.
Further, the processor 202 is further configured to determine a target backlight image of the sample image, train the first neural network according to the sample image and the target backlight image, and generate parameters of the first neural network; and determining a target liquid crystal pixel compensation image of the sample image, and training the second neural network according to the sample image and the target liquid crystal pixel compensation image to generate parameters of the second neural network.
In another embodiment, an Artificial Intelligence (AI) chip 207 is disposed on the liquid crystal display 200, the AI chip 207 trains the first and second neural networks periodically according to the received sample data to generate parameters of the first and second neural networks, and the parameters of the first and second neural networks are stored in the memory 205 or transferred to the processor 202 for storage through the control bus 206. Compared with the embodiments listed above, the artificial intelligence chip 207 trains the sample data periodically to generate the parameters of the first neural network and the parameters of the second neural network, so that the change of the image can be better adapted to determine the backlight image and the liquid crystal pixel compensation image. The form of the artificial intelligence chip 207 may be a chip or other physical components, such as a training chip for constructing a neural network model, or an inference chip for performing inference by using the neural network model.
Of course, the process of determining the target backlight image and the target liquid crystal pixel compensation image of the sample image, and training the first neural network and the second neural network may be performed in a device other than the liquid crystal display 200, and the application is not limited thereto.
Both processor 201 and processor 202 may access memory 205 through control bus 206. The memory 205 stores therein a first neural network and a second neural network, i.e., parameters of the first neural network and parameters of the second neural network. Processor 201 or processor 202 may call up a first neural network and a second neural network from memory 205, obtain a backlight image according to the first neural network, and obtain a liquid crystal pixel compensation image according to the second neural network.
In addition, the memory 205 is also used for storing software programs for executing the scheme of the application, and the processor 201 controls the execution.
In physical form, the memory 205 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 205 may be self-contained and coupled to the processor 201 via the control bus 206. The memory 205 may also be integrated with the processor 201, without limitation.
The control bus 206 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 2, but it is not intended that there be only one bus or one type of bus.
The configuration of the liquid crystal display shown in fig. 2 does not constitute a limitation of the liquid crystal display, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Next, referring to fig. 4, a detailed description is given of a neural network-based regional backlight dimming method provided in an embodiment of the present application, where the method is applied to the liquid crystal display 200 shown in fig. 2, and a processor 201 and a processor 202 are taken as examples for description here. As shown in fig. 4, the method may include the following steps.
S401, the processor 202 determines a target backlight image of the sample image.
S402, the processor 202 trains the first neural network according to the sample image and the target backlight image to generate parameters of the first neural network.
In one possible design, the processor 202 first determines an original backlight image of the sample image, and adjusts the original backlight image to obtain a target backlight image. Illustratively, as shown in FIG. 5, determining the target backlight image of the sample image comprises the following steps.
S4011, the processor 202 determines an original backlight image of the sample image according to the pixel values of the sample image.
S4012, the processor 202 adjusts the original backlight image to obtain a plurality of candidate backlight images of the sample image.
S4013, the processor 202 determines a target backlight image from the plurality of candidate backlight images.
The processor 202 may partition the original backlight image, adjust the partitioned backlight image of the backlight partition, obtain a plurality of candidate backlight images, and determine the target backlight image from the plurality of candidate backlight images. Reference is made to the following discussion regarding the specific process of determining a target backlight image for a sample image.
A sample image and a target backlight image are input into the first neural network. Starting from the first layer, each layer in the first neural network operates on the sample image, that is, forward propagation is performed to obtain a predicted backlight image. A loss value is obtained from the predicted backlight image and the target backlight image according to a loss function, the loss value is fed back according to a certain mechanism, and the weights and biases of each layer in the first neural network are changed. The sample image is then operated on again from the first layer, and the above steps are repeated to iteratively train the first neural network until the loss function converges, yielding the parameters of the first neural network.
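The forward-propagation / loss / feedback cycle can be sketched with a deliberately tiny stand-in model; the real first neural network is a deep CNN, whereas every name and number below is illustrative only.

```python
# Hedged sketch of the iterative training loop: "forward propagation" is
# w*x for a one-parameter model, the loss is the squared error against the
# target, and the "feedback" step is plain gradient descent on the weight w,
# repeated until the loss converges.
def train(samples, targets, lr=0.05, epochs=200):
    w = 0.0  # initialised weight
    for _ in range(epochs):
        grad = 0.0
        loss = 0.0
        for x, t in zip(samples, targets):
            pred = w * x                # forward propagation
            loss += (pred - t) ** 2     # loss value vs. the target
            grad += 2 * (pred - t) * x  # back-propagated gradient
        w -= lr * grad / len(samples)   # change the weight
        if loss / len(samples) < 1e-10: # loss function has converged
            break
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # targets follow y = 2x
print(round(w, 3))  # → 2.0
```

The same skeleton applies per layer in the deep network, with the gradient propagated backward through each layer's weights and biases.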
Fig. 6 is a schematic structural diagram of a first neural network according to an embodiment of the present application. As shown in fig. 6 (a), the first neural network includes convolutional layer (conv) 0_1, convolutional layer 0_2, convolutional layer 1, convolutional layer 2, convolutional layer 3, and a residual block. Among these, convolutional layer 0_1, convolutional layer 0_2, convolutional layer 1, convolutional layer 2, convolutional layer 3, and the residual block are used to extract image features; specifically, convolutional layer 0_1, convolutional layer 0_2, convolutional layer 1, convolutional layer 2, and convolutional layer 3 extract image features in different dimensions, where image features in different dimensions refer to features of images of different sizes. Convolutional layer 0_1 and convolutional layer 0_2 extract image features in the same dimension. As shown in (b) of fig. 6, the residual block includes a convolutional layer and an Inception structure. The Inception structure includes a plurality of convolutional and pooling layers, together with normalization, activation function (ReLU), and feature concatenation (concat) operations. ReLU, the rectified linear unit, is an activation function commonly used in artificial neural networks, generally a nonlinear function represented by a ramp function and its variants. The concat() operation concatenates two or more arrays (or tensors). The residual block not only expands the network width and increases the adaptability of the network to different scales, but also leverages the advantages of residual training to improve the efficiency and performance of network training.
The first neural network adopts convolution operations of convolution kernels with different sizes, step lengths and zero padding, so that the first image is subjected to deep learning to obtain a backlight image with the same size as the backlight module 203.
Optionally, before training the first neural network, data enhancement and normalization are performed on the data set, and the enhanced data set is divided into a training set and a test set in proportion. For example, the data is flipped and mirrored, and the training set and test set are divided in a proportion of 8:2. After the parameters of the first neural network are generated, the test images in the test set are input into the trained first neural network for testing, a predicted backlight value is obtained, and evaluation indexes are calculated.
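The data preparation above can be sketched as follows; the helper name, the mirror-only augmentation, and the tiny images are assumptions for illustration.

```python
# Sketch of the described pipeline: each image is augmented by horizontal
# mirroring, pixel values are normalised to [0, 1], and the enhanced data
# set is divided into a training set and a test set in a proportion of 8:2.
def augment_and_split(images, max_value=255.0):
    enhanced = []
    for img in images:
        norm = [[p / max_value for p in row] for row in img]
        enhanced.append(norm)
        enhanced.append([row[::-1] for row in norm])  # mirrored copy
    cut = int(len(enhanced) * 0.8)  # 8:2 proportion
    return enhanced[:cut], enhanced[cut:]

imgs = [[[0, 255]] for _ in range(5)]  # five tiny 1x2 "images"
train_set, test_set = augment_and_split(imgs)
print(len(train_set), len(test_set))  # → 8 2
```

In practice the split would be randomised per image (not per augmented copy) so that a mirrored copy never leaks from the training set into the test set.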
In some embodiments, the parameters of the first neural network are initialized with the Xavier method; the Adam optimizer is set with β1 = 0.9, β2 = 0.999, and ε = 10^-8; the initial learning rate is 10^-4 and is reduced by 20% every 10 epochs. Alternatively, the learning rate may be set to 0.0001, the number of iterations to 24000, and the batch size to 2 samples per training step. The loss function satisfies the following formula (2).
Loss = λ‖F(I_input, Θ) − I_ref‖1 + (1 − λ)‖F(I_input, Θ) − I_ref‖2²  (2)

Here I_input represents the sample image, I_ref represents the target backlight image, and F(·, Θ) is the network mapping function with parameters Θ. λ represents a weight coefficient in the range 0 to 1, and Loss represents the loss function.
And training the first neural network according to the mode until the loss function is converged, finishing the training to obtain the parameters of the first neural network, and then testing the first neural network.
S403, the processor 202 inputs the first image into the first neural network for operation, so as to obtain a backlight image of the first image.
After receiving the first image from the processor 201, the processor 202 inputs the first image into the first neural network for operation, the first neural network extracts features of the first image, and after operation, the first neural network outputs a backlight image of the first image, that is, a backlight matrix of the first image.
S404, the processor 202 inputs the first image into a second neural network for operation so as to obtain a liquid crystal pixel compensation image of the first image.
And inputting the first image into a second neural network for operation, extracting the characteristics of the first image by the second neural network, and outputting the liquid crystal pixel compensation image of the first image by the second neural network after operation.
Optionally, the first neural network and the second neural network are both deep convolutional neural networks.
For example, fig. 7 is a schematic structural diagram of a second neural network provided in an embodiment of the present application. Since the second neural network must produce a liquid crystal pixel compensation image with the same resolution as the first image, and is closely related to the backlight image of the first image, the first neural network and the second neural network share part of their parameters. The second neural network is obtained by adding deconvolution and skip-connection operations on the basis of the first neural network. The structure of the second neural network is similar to the encoder-decoder network structure of U-net. Decoding layer 1, decoding layer 2, decoding layer 3, and decoding layer 4 are all deconvolution layers. Inspired by the strengths of the Resnet and Inception models, the residual block comprises a Resnet residual structure and an Inception structure. Because Inception adopts convolution kernels of different sizes, and therefore different receptive fields, it fuses features of different scales, processes richer spatial features, and increases feature diversity. The second neural network contains the same residual layers as the first neural network. In addition, to ensure the output result of the second neural network, the output of the last deconvolution is added to the first image (skip connection 4), which gradually fuses the local and bottom-layer information of the shallow image into the deep image; a final convolution then yields the liquid crystal pixel compensation image.
Skip connection 1, skip connection 2, and skip connection 3 splice tensor data of two different layers, which enlarges the receptive field of the network, increases the number of feature maps of the current layer, and thus adds information.
S405, the processor 201 controls the liquid crystal panel 204 to display the second image based on the backlight image and the liquid crystal pixel compensation image, wherein the backlight image controls the backlight module 203 to provide light to the liquid crystal panel 204.
As shown in fig. 8, a schematic diagram of a neural network-based regional backlight dimming principle provided in an embodiment of the present application is shown. After receiving the backlight image and the liquid crystal pixel compensation image from the processor 202, the processor 201 controls the backlight module 203 to provide light to the liquid crystal panel 204 according to the backlight image, that is, the backlight module 203 emits light to the liquid crystal panel 204, and at the same time, the processor 201 controls the liquid crystal of the liquid crystal panel 204 according to the liquid crystal pixel compensation image, and under the combined action of the backlight image and the liquid crystal pixel compensation image, the liquid crystal display 200 displays a second image. After the first image passes through the first neural network and the second neural network, the dynamic range of the second image displayed on the liquid crystal display 200 is larger than that of the first image.
Since the artificial neurons of the neural network can respond to a part of the surrounding cells within the coverage, it is excellent for image processing. According to the regional backlight dimming method based on the neural network, the first neural network is used for determining the backlight image of the first image, the second neural network is used for determining the liquid crystal pixel compensation image of the first image, and the liquid crystal panel is controlled to display the second image based on the backlight image and the liquid crystal pixel compensation image, so that the dynamic range of the image displayed by the liquid crystal display is improved, namely the contrast of the image is improved, the display quality of the image is improved, and the energy consumption is reduced.
In clinical medical applications, medical instruments display images with a higher dynamic range, so that the images can be clearer, texture details can be enhanced, and the diagnosis accuracy of doctors can be improved.
In biomedical applications, higher dynamic range images can represent true biomolecular structures.
In astronomical observations, images of higher dynamic range can truly display scenes with obvious differences in brightness and darkness in large-scale space.
In rocket lift-off monitoring, the brightness information of flames and the dark information of surrounding smog can be observed by using images with higher dynamic ranges, so that workers can obtain important information of rockets to carry out scientific research.
It should be noted that the process of displaying the second image on the liquid crystal display 200 has no dependency relationship with the process of training the first neural network and the second neural network, and the process of training the first neural network and the second neural network may be pre-trained before the liquid crystal display 200 displays the second image. I.e., S401 and S402, have no dependency relationship with S403 to S405.
Further, before the first image is input into the second neural network for operation, the second neural network is trained so as to determine a liquid crystal pixel compensation image of the image by using the second neural network. As shown in fig. 9, the embodiment of the present application further includes the following steps.
S901, the processor 202 determines a target liquid crystal pixel compensation image of the sample image.
As shown in fig. 10, the processor 202 determines the target lc pixel compensation image of the sample image specifically includes the following steps.
S9011, the processor 202 determines a diffused backlight image of the sample image according to the target backlight image.
For example, a blur mask approach (BMA) is used to diffuse the target backlight image to the same resolution as the liquid crystal panel 204. Since light diffuses as it propagates, using the diffused backlight image for liquid crystal pixel compensation after the target backlight image has been diffusion-processed can further improve the dynamic range and quality of the image displayed by the liquid crystal panel 204.
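The diffusion step can be sketched as below. The patent names the blur-mask approach but gives no formula here, so the nearest-neighbour upsampling and 3 × 3 box blur are stand-in assumptions for the real point-spread function.

```python
# Hedged sketch: upsample the backlight matrix to panel resolution, then
# apply a 3x3 box blur to imitate light diffusion between backlight zones.
# Edge pixels are handled by clamping indices to the panel bounds.
def diffuse_backlight(backlight, panel_h, panel_w):
    bh, bw = len(backlight), len(backlight[0])
    # Nearest-neighbour upsampling of the zone brightness to every pixel.
    up = [[backlight[i * bh // panel_h][j * bw // panel_w]
           for j in range(panel_w)] for i in range(panel_h)]
    out = []
    for i in range(panel_h):
        row = []
        for j in range(panel_w):
            vals = [up[min(max(i + di, 0), panel_h - 1)]
                      [min(max(j + dj, 0), panel_w - 1)]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            row.append(sum(vals) / 9.0)  # box-blur average
        out.append(row)
    return out

bl = [[100, 200], [100, 200]]        # 2x2 backlight matrix
diff = diffuse_backlight(bl, 4, 4)   # diffused to a 4x4 "panel"
print(len(diff), len(diff[0]))  # → 4 4
```

The blurred boundary between the 100 and 200 zones is what the subsequent liquid crystal pixel compensation has to correct for.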
S9012, the processor 202 determines a plurality of candidate liquid crystal pixel compensation images according to the sample image, the diffused backlight image, and a plurality of preset liquid crystal pixel compensation algorithms.
S9013, the processor 202 determines a target liquid crystal pixel compensation image according to the plurality of candidate liquid crystal pixel compensation images and the sample image.
Liquid crystal pixel compensation is the process of performing some necessary transformations on the pixel values of the input image based on the backlight brightness to achieve the desired display effect. In the regional backlight dimming process, the liquid crystal pixel compensation can better keep image details, improve the display quality of the image after dimming and improve the contrast of the image. The current popular liquid crystal pixel compensation algorithm mainly comprises a linear compensation method and a nonlinear compensation method. The linear compensation method simply considers that the final display brightness of the liquid crystal display is the result of multiplying the backlight brightness by the pixel brightness of the input image. The non-linear compensation method simply increases the overall brightness of the displayed image when the backlight brightness decreases.
Optionally, the liquid crystal pixel compensation algorithm may further include a logarithmic function compensation method, an S-curve + logarithmic function compensation method, and a sectional compensation method. The specific process of determining the candidate lc pixel compensation image by the processor 202 may refer to the description of the process of the existing lc pixel compensation algorithm, which is not repeated herein.
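The linear compensation method described above can be sketched as follows; the helper name and the clipping behaviour are illustrative assumptions, since the patent only states that displayed brightness is the product of backlight brightness and pixel brightness.

```python
# Hedged sketch of linear compensation: since display brightness is modelled
# as backlight brightness x pixel transmittance, a dimmed local backlight is
# compensated by scaling the pixel value up by the inverse ratio, clipped to
# the 8-bit range.
def linear_compensation(pixel, backlight, full_backlight=255.0):
    scaled = pixel * full_backlight / backlight
    return min(255.0, scaled)

# Backlight dimmed from 255 to 200: the pixel is brightened to compensate.
print(linear_compensation(100, 200))  # → 127.5
```

Pixels that would need a value above 255 saturate, which is why the surrounding text notes that compensation must balance detail preservation against contrast.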
In one possible implementation, the processor 202 compares a plurality of candidate liquid crystal pixel compensation images with the sample image, and determines the candidate liquid crystal pixel compensation image with the largest structural similarity as the target liquid crystal pixel compensation image.
For example, the processor 202 calculates a structural similarity index of each of the plurality of candidate liquid crystal pixel compensation images with the sample image, and determines the candidate liquid crystal pixel compensation image with the largest structural similarity index as the target liquid crystal pixel compensation image.
The Structural Similarity Index (SSIM) is an index that measures the similarity between two images. Its value lies between 0 and 1; when the two images are identical, the structural similarity index equals 1. As a realization of structural similarity theory, the index defines structural information, from the perspective of image composition, as independent of brightness and contrast, reflecting the structural attributes of objects in the scene, and it models distortion as the combination of three different factors: brightness, contrast, and structure. The mean is used as the estimate of luminance, the standard deviation as the estimate of contrast, and the covariance as the measure of structural similarity.
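The mean / standard deviation / covariance decomposition above can be sketched as a global SSIM over flattened pixel lists; real implementations use a sliding window, and the constants C1, C2 below are the conventional choices for 8-bit images, so treat this as illustrative.

```python
# Sketch of the structural similarity index: the mean estimates luminance,
# the variance estimates contrast, and the covariance measures structure,
# combined with the standard stabilising constants C1 and C2.
def ssim(x, y, max_value=255.0):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                 # luminance estimates
    vx = sum((a - mx) ** 2 for a in x) / n          # contrast estimates
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my)                   # structure measure
              for a, b in zip(x, y)) / n
    c1 = (0.01 * max_value) ** 2
    c2 = (0.03 * max_value) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = [10, 50, 90, 130]
print(ssim(a, a))  # → 1.0 for identical images
```

Selecting the target compensation image then reduces to computing this score for each candidate against the sample image and keeping the maximum.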
The processor 202 may calculate the contrast of each of the candidate liquid crystal pixel compensation images, for example, the processor 202 calculates the contrast of the candidate liquid crystal pixel compensation image by using the ratio of the gray-level value of the candidate liquid crystal pixel compensation image at 90% of all pixels to the gray-level value of the candidate liquid crystal pixel compensation image at 10% of all pixels.
The contrast of the candidate liquid crystal pixel compensation image satisfies the following formula (3).
CR1 = H90 / L10  (3)

where CR1 represents the contrast of the candidate liquid crystal pixel compensation image, H90 represents the gray value at the 90% position of all pixels, and L10 represents the gray value at the 10% position of all pixels.
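Formula (3) can be computed as in the following sketch; the `contrast` helper and its percentile indexing are illustrative assumptions about how the 90% and 10% gray values are located.

```python
# Sketch of equation (3): the contrast of a candidate compensation image as
# the ratio of the gray value at the 90th percentile of all pixels (H90) to
# the gray value at the 10th percentile (L10).
def contrast(pixels):
    ordered = sorted(pixels)
    n = len(ordered)
    h90 = ordered[min(int(0.9 * n), n - 1)]  # 90% gray value
    l10 = ordered[int(0.1 * n)]              # 10% gray value
    return h90 / l10 if l10 else float('inf')

pix = list(range(1, 101))  # gray values 1..100
print(contrast(pix))       # → 91/11, i.e. H90 = 91 over L10 = 11
```

Using percentile gray values rather than the absolute maximum and minimum makes the measure robust to a few outlier pixels.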
For another example, the contrast and the structural similarity index of each candidate liquid crystal pixel compensation image are added to obtain a comprehensive index for evaluating the candidate liquid crystal pixel compensation image, and the candidate liquid crystal pixel compensation image with the maximum comprehensive index value is determined as the target liquid crystal pixel compensation image.
Optionally, the processor 202 filters the target liquid crystal pixel compensation image by using a bilateral filter, removes noise, and enhances details of the target liquid crystal pixel compensation image to obtain the optimal target liquid crystal pixel compensation image.
In another possible implementation, processor 202 determines the candidate liquid crystal pixel compensation image with the highest average score as the target liquid crystal pixel compensation image.
The average score of the candidate liquid crystal pixel compensation image may be determined by the processor 202 based on the display effect of the liquid crystal panel 204 displaying the candidate liquid crystal pixel compensation image. For example, the processor 202 determines a candidate liquid crystal pixel compensation image having the highest Mean Opinion Score (MOS) value as the target liquid crystal pixel compensation image. Optionally, the processor 202 controls the liquid crystal panel 204 to display an image according to a plurality of candidate liquid crystal pixel compensation images, so that a plurality of users score the image displayed by the liquid crystal panel 204, and the processor 202 determines the candidate liquid crystal pixel compensation image corresponding to the image with the highest average score as the target liquid crystal pixel compensation image.
S902, the processor 202 trains the second neural network according to the sample image and the target liquid crystal pixel compensation image to generate parameters of the second neural network.
After the target liquid crystal pixel compensation image is obtained, the sample image and the target liquid crystal pixel compensation image are input into the second neural network, and forward propagation is performed to obtain a predicted liquid crystal pixel compensation image. A loss value is obtained from the predicted and target liquid crystal pixel compensation images according to a loss function, the loss value is fed back according to a certain mechanism, and the weights and biases of each layer in the second neural network are changed. These steps are repeated to iteratively train the second neural network until the loss function converges, yielding the parameters of the second neural network.
Optionally, the parameters of the second neural network are initialized in the same way as those of the first neural network. Before training the second neural network, data enhancement and normalization are performed on the data set, and the enhanced data set is divided into a training set and a test set in a certain proportion. Alternatively, the processor 202 may convert the RGB image into YUV space, take the Y component, and divide the 1080 × 1920 image into 256 × 256 patches with a step size of 128 pixels.
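The patch layout implied above can be sketched as follows; the assumption that patches running past the image border are dropped is illustrative, since the patent does not specify border handling.

```python
# Sketch of cutting a 1080x1920 Y-channel image into 256x256 patches with a
# stride of 128 pixels; only fully in-bounds top-left offsets are kept.
def patch_positions(height, width, patch=256, stride=128):
    rows = list(range(0, height - patch + 1, stride))
    cols = list(range(0, width - patch + 1, stride))
    return [(i, j) for i in rows for j in cols]

pos = patch_positions(1080, 1920)
print(len(pos))  # → 98 patches (7 row offsets x 14 column offsets)
```

The 128-pixel stride gives 50% overlap between neighbouring patches, which increases the number of training samples extracted from each image.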
After the parameters of the second neural network are generated, the test images in the test set are input into the trained second neural network for testing, a predicted liquid crystal pixel compensation image is obtained, and an evaluation index is calculated. For example, at test time the 1080 × 1920 Y component of the test image is input; the Y component of the liquid crystal pixel compensation image predicted by the second neural network is then combined with the U and V components of the test image and transformed to the RGB space, yielding the final RGB image.
It should be noted that, in order to ensure the practical application of the technology, a neural network with lower complexity may be designed, and parameters of the network are reduced on the basis of ensuring the improvement of image contrast and detail preservation, so that the network is applied to a practical display.
Next, a specific process of determining a target backlight image of the sample image will be described in detail.
First, the processor 202 divides the sample image into a plurality of regions according to the size of the backlight module 203, and determines the original backlight image of each region in the sample image by using a region backlight extraction algorithm. The backlight image can be represented as a backlight matrix, in which the value of an element indicates the brightness of a light emitting diode (LED) lamp in the backlight module. For example, if the size of the backlight module 203 is 36 × 66, the size of the backlight matrix is 36 × 66. That is, the backlight image (or backlight matrix) includes the brightness of all the LED lamps in the backlight module. The brightness of an LED lamp is determined according to the pixel values of the area of the image corresponding to that LED lamp.
The area backlight extraction algorithm includes a maximum value method, an average value method and a root mean square method.
The maximum value method is to select the maximum gray value of all pixel points in a partition as the backlight value of the partition.
The average value method is to select the average value of the gray levels of all the pixels in the partition as the backlight value of the partition.
The root-mean-square method is to normalize the gray values of all pixels in the partition, take the square root of the mean of their squares, and multiply the result by 255 to obtain the backlight value of the partition.
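For illustration, the three extraction methods can be sketched per partition as below. This is a hedged sketch: the 36 × 66 partition layout is taken from the example above, the root-mean-square reading follows the usual RMS definition (the translated text is ambiguous on the exact order of operations), and the function name is illustrative.

```python
import numpy as np

def extract_backlight(gray, rows=36, cols=66, method="max"):
    """One backlight value per partition of a grayscale image (values in [0, 255]).

    method: 'max' -> maximum gray value in the partition
            'avg' -> mean gray value in the partition
            'rms' -> root mean square of the normalized gray values, scaled to [0, 255]
    """
    h, w = gray.shape
    bl = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            part = gray[i * h // rows:(i + 1) * h // rows,
                        j * w // cols:(j + 1) * w // cols]
            if method == "max":
                bl[i, j] = part.max()
            elif method == "avg":
                bl[i, j] = part.mean()
            else:  # 'rms'
                bl[i, j] = np.sqrt(np.mean((part / 255.0) ** 2)) * 255
    return bl
```

The returned 36 × 66 matrix corresponds to the backlight matrix of the backlight module 203 in the example above.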
In addition to the above, common area backlight extraction algorithms include the cumulative distribution function (CDF) method, the inverse mapping function (IMF) method, the look-up table (LUT) method, and adaptive area backlight algorithms based on histograms and image features.
In the CDF method, a probability density curve is obtained from the gray histogram of each partitioned area of the image, the cumulative distribution function is then computed, and finally the gray value mapped on the cumulative distribution function at a preset threshold is taken as the backlight brightness value of the corresponding area.
The IMF method builds on the CDF method: the CDF curve is reflected about the mapping y = x to obtain an inverse mapping curve (IMF), and the IMF is then evaluated at the maximum value or another parameter of each partition to obtain the backlight value of the corresponding partition.
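For illustration, the CDF-based backlight value of one partition can be sketched as follows. The sketch is an assumption-laden reading of the description above: the 0.9 threshold is a hypothetical choice (the text only says "a preset threshold"), and the function name is illustrative.

```python
import numpy as np

def cdf_backlight(part, threshold=0.9):
    """Backlight value of one partition via the CDF method: the smallest gray
    level whose cumulative probability reaches the preset threshold."""
    hist, _ = np.histogram(part, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / part.size            # cumulative probability per gray level
    return int(np.searchsorted(cdf, threshold))  # gray level mapped at the threshold
```

For a uniform histogram, the returned level is simply near the chosen quantile of the gray range.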
In this embodiment, the processor 202 may further obtain a grayscale threshold for separating the foreground and the background of the sample image by using the maximum between-class variance (Otsu) method, binarize the sample image with this threshold to obtain a binarized image, calculate a backlight adjustment coefficient from the binarized image, determine a backlight dimming grayscale from the coefficient, determine a backlight dimming ratio from the dimming grayscale, and finally determine the original backlight image from the dimming ratio combined with the maximum luminance and the average luminance of the sample image. This algorithm takes both contrast improvement and visual quality enhancement into account, improving the contrast and quality of the displayed image.
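For illustration, only the thresholding and binarization steps of the above pipeline are concrete enough to sketch; the backlight adjustment coefficient and dimming ratio computations are not specified in the text and are therefore omitted. A minimal sketch of the maximum between-class variance (Otsu) step:

```python
import numpy as np

def otsu_threshold(gray):
    """Gray-level threshold separating foreground and background by maximizing
    the between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2               # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Binarization with the obtained threshold:
# binarized = (gray >= otsu_threshold(gray)).astype(np.uint8)
```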
Then, the processor 202 partitions the original backlight image to obtain a plurality of backlight partitions, where the partitioned backlight images of the partitions differ from one another, and adjusts the partitioned backlight images according to an adjustment value to obtain a plurality of candidate backlight images. The brightness value of a pixel ranges from 0 to 255, and the adjustment value also ranges from 0 to 255.
For example, the processor 202 calculates the luminance mean and variance of the original backlight image and uses them to determine two demarcation points. Specifically, the difference between the luminance mean and the variance of the original backlight image gives a first luminance value, the first demarcation point, and their sum gives a second luminance value, the second demarcation point.
The processor 202 assigns pixels in the original backlight image whose brightness is below the first luminance value to a low-brightness region, pixels whose brightness is above the second luminance value to a high-brightness region, and pixels whose brightness lies between the first and second luminance values to a medium-brightness region.
The processor 202 reduces the brightness of the pixels in the low-brightness region by a first adjustment value and increases the brightness of the pixels in the high-brightness region by a second adjustment value. The first adjustment value and the second adjustment value may be the same or different; this is not limited. Each may take different values from 0 to 255. The processor 202 adjusts the brightness of the pixels in the low-brightness region and in the high-brightness region multiple times to obtain a plurality of candidate backlight images.
The candidate backlight image satisfies the following formula (4):

BL_i(x, y) = BL_0(x, y) − (2^i − 1), if BL_0(x, y) < O_1
BL_i(x, y) = BL_0(x, y) + (2^i − 1), if BL_0(x, y) > O_2    (4)

where BL_i(x, y) denotes the luminance of the candidate backlight image; BL_0(x, y) denotes the brightness of a pixel in the high-brightness region or in the low-brightness region; (2^i − 1) denotes the adjustment value, with i ∈ [0, 8]; O_1 denotes the first luminance value; and O_2 denotes the second luminance value.
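For illustration, the candidate generation of equation (4) can be sketched as below. Assumptions are flagged in the comments: O_1 and O_2 are taken as mean ± variance as stated in the example above (the text says variance, not standard deviation), the clip to [0, 255] follows the stated pixel value range, and the function name is illustrative.

```python
import numpy as np

def candidate_backlights(bl0, n=9):
    """Candidate backlight images from the original backlight image bl0 per
    equation (4): pixels below O1 are darkened by (2^i - 1) and pixels above
    O2 are brightened by (2^i - 1), for i in [0, 8]."""
    o1 = bl0.mean() - bl0.var()   # first demarcation point (mean minus variance, as stated)
    o2 = bl0.mean() + bl0.var()   # second demarcation point (mean plus variance)
    low, high = bl0 < o1, bl0 > o2
    candidates = []
    for i in range(n):
        adj = 2 ** i - 1
        bl = bl0.astype(float).copy()
        bl[low] -= adj
        bl[high] += adj
        candidates.append(np.clip(bl, 0, 255))  # brightness stays in [0, 255]
    return candidates
```

In practice the candidates would additionally be clipped to the backlight limiting conditions described below.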
It should be noted that, to ensure the effectiveness of a candidate backlight, the luminance values of the pixels in the candidate backlight image cannot exceed the backlight limiting conditions, i.e., a maximum luminance value and a minimum luminance value. The backlight limiting conditions are determined according to the sample image and the preset region backlight extraction algorithms. The processor 202 extracts a plurality of backlight images of the sample image using the region backlight extraction algorithms described above, and determines the maximum luminance value and the minimum luminance value from these backlight images. The backlight limiting conditions satisfy the following equations (5) and (6).
p_max(x, y) = max(p_k(x, y))    (5)

p_min(x, y) = min(p_k(x, y))    (6)

where p_max(x, y) denotes the maximum luminance value, p_min(x, y) denotes the minimum luminance value, and p_k(x, y) denotes the luminance at (x, y) in the backlight image extracted by the k-th algorithm.
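For illustration, the limiting step can be sketched as below. A hedged note: equation (6) as printed uses max, which is read here as a typographical error for min, since the text defines p_min as the minimum luminance value over the extracted backlight images; the function name is illustrative.

```python
import numpy as np

def clip_to_limits(candidate, extracted):
    """Clip a candidate backlight image to the backlight limiting conditions:
    per-pixel maximum (eq. 5) and minimum (eq. 6) over the backlight images
    p_k extracted by the different region backlight extraction algorithms."""
    stack = np.stack(extracted)    # shape (K, rows, cols)
    p_max = stack.max(axis=0)      # equation (5)
    p_min = stack.min(axis=0)      # equation (6), reading max as a typo for min
    return np.clip(candidate, p_min, p_max)
```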
Furthermore, the processor 202 controls the backlight module 203 to provide light to the liquid crystal panel 204 according to the candidate backlight images, calculates the dynamic ranges of the display luminances of the liquid crystal panel 204 under illumination by the backlight module 203 as controlled by the different candidate backlight images, and determines the target backlight image according to the contrasts of the candidate backlight images and the dynamic ranges of the display luminances.
Alternatively, a luminance meter may be used to obtain a plurality of display luminances of the liquid crystal panel 204 under illumination by the backlight module 203 controlled by different candidate backlight images. The luminance meter is a metering instrument for measuring light and color.
Optionally, the processor 202 obtains the contrast of a candidate backlight image as the ratio of the 90th-percentile gray value of all pixels in the candidate backlight image to the 10th-percentile gray value.
Optionally, the processor 202 controls the liquid crystal panel 204 according to the sample image, controls the backlight module 203 according to the candidate backlight image, and measures the display luminance. A spot luminance meter may measure the central luminance of each backlight partition on the liquid crystal panel, giving 36 × 66 display luminances. Alternatively, an imaging luminance meter may measure the display luminance of the whole screen, giving a display luminance matrix whose size depends on the distance and angle between the meter and the liquid crystal screen; the size of the measured matrix is therefore not unique, but the measurement is simple and yields more display luminance information. The processor 202 then obtains the maximum and minimum display luminances of the liquid crystal panel 204 and divides the maximum by the minimum to obtain the dynamic range of the display luminance.
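For illustration, the two quantities can be sketched as below. This is a hedged sketch: the "90% / 10%" rule is interpreted as the 90th and 10th percentiles of the gray values, the display luminances are assumed to have been measured already, and the small floor avoids division by zero (an implementation detail not in the text).

```python
import numpy as np

def contrast(bl):
    """Contrast of a candidate backlight image: ratio of the 90th to the
    10th percentile of its gray values (one reading of the 90%/10% rule)."""
    hi = np.percentile(bl, 90)
    lo = np.percentile(bl, 10)
    return hi / max(lo, 1e-6)

def dynamic_range(display_luminance):
    """Dynamic range of the measured display luminance: max divided by min."""
    lum = np.asarray(display_luminance, dtype=float)
    return lum.max() / max(lum.min(), 1e-6)
```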
In one possible implementation, the processor 202 performs a weighted calculation on the contrasts of the candidate backlight images and the dynamic ranges of the display luminances to score the candidate target backlight images, and determines the candidate target backlight image with the largest weighted score as the target backlight image.
For example, the weighted calculation of the contrasts of the candidate backlight images and the dynamic ranges of the display luminances satisfies the following formula (7):

res = w_1 · CR + w_2 · DR    (7)

where res denotes the weighted score; CR denotes the contrast of the candidate backlight image and w_1 its weight coefficient; DR denotes the dynamic range of the display luminance and w_2 its weight coefficient. w_1 + w_2 = 1, and both w_1 and w_2 are greater than 0 and less than 1.
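For illustration, the weighted selection of equation (7) can be sketched as follows. The equal default weights are a hypothetical choice (the text only constrains w_1 + w_2 = 1 with both in (0, 1)), and the function name is illustrative.

```python
def select_target(candidates, contrasts, dyn_ranges, w1=0.5, w2=0.5):
    """Score each candidate backlight image per equation (7),
    res = w1*CR + w2*DR, and return the highest-scoring candidate."""
    assert abs(w1 + w2 - 1.0) < 1e-9      # w1 + w2 must equal 1
    scores = [w1 * cr + w2 * dr for cr, dr in zip(contrasts, dyn_ranges)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return candidates[best], scores[best]
```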
In another possible implementation, the processor 202 determines the candidate target backlight image with the highest average score as the target backlight image.
The average score of a candidate target backlight image may be determined from the display effect when the processor 202 controls the backlight module 203 to provide light to the liquid crystal panel 204 according to that candidate target backlight image while the liquid crystal panel 204 displays the target liquid crystal pixel compensation image. For example, the processor 202 determines the candidate target backlight image with the highest mean opinion score (MOS) as the target backlight image.
Optionally, the processor 202 controls the backlight module 203 to emit light according to each of the candidate target backlight images with the three largest weighted scores, controls the liquid crystal panel 204 to display an image according to the target liquid crystal pixel compensation image, and has a plurality of users score the images displayed by the liquid crystal panel 204. The processor 202 then determines the candidate target backlight image corresponding to the image with the highest average score as the target backlight image.
The following compares, with reference to fig. 11, the display effects obtained after adjusting an image using the neural network-based regional backlight dimming method provided in the embodiments of the present application and a conventional regional backlight dimming method. Suppose the backlight module of the liquid crystal display uses white-LED partitioned backlighting with a 36 × 66 partition layout, the dimming resolution of the LED driver is 8 bits, and the resolution of the liquid crystal panel is 3840 × 2160.
As shown in (a) of fig. 11, the image is displayed by the liquid crystal display under global backlight control: the global backlight image controls the backlight module to provide light to the liquid crystal panel, the first image controls the liquid crystal panel, and the liquid crystal display displays the second image.
As shown in (b) of fig. 11, the image is displayed by the liquid crystal display under conventional backlight control: the conventional backlight image controls the backlight module to provide light to the liquid crystal panel, the conventional liquid crystal pixel compensation image controls the liquid crystal panel, and the liquid crystal display displays the second image.
As shown in (c) of fig. 11, the predicted backlight image controls the backlight module to provide light to the liquid crystal panel, the predicted liquid crystal pixel compensation image controls the liquid crystal panel, and the liquid crystal display displays the second image. The predicted backlight image and the predicted liquid crystal pixel compensation image are obtained using the neural network-based regional backlight dimming method provided in the embodiments of the present application.
Comparing (a), (b), and (c) in fig. 11, it can be seen that the backlight brightness affects the contrast of the image. Compared with the conventional algorithm, the prediction of the present application achieves brighter bright areas and darker dark areas, thereby improving the contrast of the image and effectively enhancing its texture detail and definition (for example, in the areas marked by white dotted lines).
It is understood that, in order to implement the functions of the above embodiments, the processor includes a corresponding hardware structure and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software driven hardware depends on the particular application scenario and design constraints imposed on the solution.
Fig. 12 and fig. 13 are schematic structural diagrams of a possible neural network-based regional backlight dimming device according to an embodiment of the present application. These neural network based regional backlight dimming devices can be used to implement the functions of the processor 201 or the processor 202 in the above method embodiments, and therefore, the beneficial effects of the above method embodiments can also be achieved. In the embodiment of the present application, the neural network based regional backlight dimming device may be the processor 201 or the processor 202 shown in fig. 2, and may also be a module (e.g., a chip) applied to the processor 201 or the processor 202.
As shown in fig. 12, the neural network-based regional backlight dimming device 1200 includes a processing unit 1210 and a control unit 1220. The neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 201 or the processor 202 in the method embodiments shown in fig. 4, fig. 5, fig. 9 or fig. 10.
When the neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 201 in the method embodiment shown in fig. 4 or fig. 5: the control unit 1220 is configured to execute S405.
When the neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 202 in the method embodiment shown in fig. 4: the processing unit 1210 is configured to perform S401-S404.
When the neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 202 in the method embodiment shown in fig. 5: the processing unit 1210 is configured to execute S4011-S4013, S402-S404.
When the neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 201 in the method embodiments shown in fig. 9 or fig. 10: the control unit 1220 is configured to execute S405.
When the neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 202 in the method embodiment shown in fig. 9: the processing unit 1210 is configured to perform S401 to S404, and S901 and S902.
When the neural network based regional backlight dimming device 1200 is used to implement the functions of the processor 202 in the method embodiment shown in fig. 10: the processing unit 1210 is configured to execute S401 to S404, and S9011 to S9013, and S902.
More detailed descriptions about the processing unit 1210 and the control unit 1220 can be directly obtained by referring to the related descriptions in the method embodiments shown in fig. 4, fig. 5, fig. 9, or fig. 10, which are not repeated herein.
As shown in fig. 13, the neural network-based regional backlight dimming device 1300 includes a processor 1310 and an interface circuit 1320. The processor 1310 and the interface circuit 1320 are coupled to each other. It will be appreciated that interface circuit 1320 may be a transceiver or an input-output interface. Optionally, the neural network based regional backlight dimming device 1300 may further include a memory 1330 for storing instructions executed by the processor 1310 or storing input data required by the processor 1310 to execute the instructions or storing data generated by the processor 1310 after executing the instructions.
When the neural network based regional backlight dimming device 1300 is used to implement the methods shown in fig. 4, 5, 9 or 10, the processor 1310 is used to perform the functions of the processing unit 1210 and the control unit 1220.
It is understood that the processor in the embodiments of the present application may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a network device or a terminal device. Of course, the processor and the storage medium may also reside as discrete components in a network device or a terminal device.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a network appliance, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that integrates one or more available media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape; or optical media such as Digital Video Disks (DVDs); it may also be a semiconductor medium, such as a Solid State Drive (SSD).
In various embodiments of the present application, unless otherwise specified or conflicting, terms and/or descriptions between different embodiments have consistency and may be mutually referenced, and technical features in different embodiments may be combined to form a new embodiment according to their inherent logical relationships.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the text description of the present application, the character "/" generally indicates that the preceding and following associated objects are in an "or" relationship; in the formula of the present application, the character "/" indicates that the preceding and following associated objects are in a "division" relationship.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application. The sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic.

Claims (25)

  1. A regional backlight dimming method based on a neural network is characterized by comprising the following steps:
    determining a target backlight image of the sample image;
    training a first neural network according to the sample image and the target backlight image to generate parameters of the first neural network;
    inputting a first image into the first neural network for operation to obtain a backlight image of the first image;
    inputting the first image into a second neural network for operation so as to obtain a liquid crystal pixel compensation image of the first image;
    and controlling a liquid crystal panel to display a second image based on the backlight image and the liquid crystal pixel compensation image, wherein the backlight image controls a backlight module to provide light rays for the liquid crystal panel, and the dynamic range of the second image is larger than that of the first image.
  2. The method of claim 1, wherein before inputting the first image into a second neural network for operation, the method further comprises:
    determining a target liquid crystal pixel compensation image of the sample image;
    training the second neural network according to the sample image and the target liquid crystal pixel compensation image to generate parameters of the second neural network.
  3. The method of claim 2, wherein determining a target liquid crystal pixel compensation image for the sample image comprises:
    determining a diffused backlight image of the sample image according to the target backlight image;
    determining a plurality of candidate liquid crystal pixel compensation images according to the sample image, the diffused backlight image and a plurality of preset liquid crystal pixel compensation algorithms;
    determining the target liquid crystal pixel compensation image according to the candidate liquid crystal pixel compensation images and the sample image.
  4. The method of claim 3, wherein determining the target LC pixel compensation image from the plurality of candidate LC pixel compensation images and the sample image comprises:
    comparing the candidate liquid crystal pixel compensation images with the sample image, and determining the candidate liquid crystal pixel compensation image with the largest structural similarity as the target liquid crystal pixel compensation image;
    or, determining the candidate liquid crystal pixel compensation image with the highest average score as the target liquid crystal pixel compensation image.
  5. The method of claim 3 or 4, wherein the liquid crystal pixel compensation algorithm comprises a linear compensation algorithm and a non-linear compensation algorithm.
  6. The method of any of claims 1-5, wherein determining the target backlight image of the sample image comprises:
    determining an original backlight image of the sample image according to the pixel values of the sample image;
    adjusting the original backlight image to obtain a plurality of candidate backlight images of the sample image;
    determining the target backlight image from the plurality of candidate backlight images.
  7. The method of claim 6, wherein the adjusting the original backlight image to obtain a plurality of candidate backlight images of the sample image comprises:
    partitioning the original backlight image to obtain a plurality of backlight partitions, wherein partitioned backlight images of the backlight partitions are different;
    and adjusting the partitioned backlight images in the plurality of backlight partitions according to the adjusting value to obtain a plurality of candidate backlight images, wherein the candidate backlight images meet backlight limiting conditions, the backlight limiting conditions are determined according to the sample image and a preset regional backlight extraction algorithm, and the value range of the adjusting value is 0-255.
  8. The method of claim 6 or 7, wherein determining the target backlight image from the plurality of candidate backlight images comprises:
    controlling the backlight module to provide light rays for the liquid crystal panel according to the candidate backlight images;
    calculating the dynamic ranges of a plurality of display brightnesses of the liquid crystal panel under the illumination of the backlight module controlled by different candidate backlight images;
    determining the target backlight image according to the contrast of the candidate backlight images and the dynamic ranges of the display brightness.
  9. The method of claim 8, wherein determining the target backlight image according to the contrast ratios of the candidate backlight images and the dynamic ranges of the display luminances comprises:
    performing weighted calculation on the contrast ratios of the candidate backlight images and the dynamic ranges of the display brightness to determine a plurality of candidate target backlight images;
    and determining the candidate target backlight image with the maximum weight as the target backlight image, or determining the candidate target backlight image with the highest average score as the target backlight image.
  10. The method of any one of claims 1-9, wherein the first neural network and the second neural network are both deep convolutional neural networks.
  11. The method of claim 10, wherein the first neural network and the second neural network share partial parameters.
  12. A neural network-based local backlight dimming device, comprising:
    a processing unit for determining a target backlight image of the sample image;
    the processing unit is further used for training a first neural network according to the sample image and the target backlight image to generate parameters of the first neural network;
    the processing unit is further configured to input a first image into the first neural network for operation, so as to obtain a backlight image of the first image;
    the processing unit is further used for inputting the first image into a second neural network for operation so as to obtain a liquid crystal pixel compensation image of the first image;
    and the control unit is used for controlling the liquid crystal panel to display a second image based on the backlight image and the liquid crystal pixel compensation image, wherein the backlight image controls the backlight module to provide light rays for the liquid crystal panel, and the dynamic range of the second image is larger than that of the first image.
  13. The apparatus of claim 12, wherein the processing unit is further configured to:
    determining a target liquid crystal pixel compensation image of the sample image;
    training the second neural network according to the sample image and the target liquid crystal pixel compensation image to generate parameters of the second neural network.
  14. The apparatus according to claim 13, wherein the processing unit is specifically configured to:
    determining a diffuse backlight image of the sample image from the target backlight image;
    determining a plurality of candidate liquid crystal pixel compensation images according to the sample image, the diffused backlight image and a plurality of preset liquid crystal pixel compensation algorithms;
    determining the target liquid crystal pixel compensation image according to the candidate liquid crystal pixel compensation images and the sample image.
  15. The apparatus according to claim 14, wherein the processing unit is specifically configured to:
    comparing the candidate liquid crystal pixel compensation images with the sample image, and determining the candidate liquid crystal pixel compensation image with the largest structural similarity as the target liquid crystal pixel compensation image;
    or, determining the candidate liquid crystal pixel compensation image with the highest average score as the target liquid crystal pixel compensation image.
  16. The apparatus of claim 14 or 15, wherein the liquid crystal pixel compensation algorithm comprises a linear compensation algorithm and a non-linear compensation algorithm.
  17. The apparatus according to any one of claims 12 to 16, wherein the processing unit is specifically configured to:
    determining an original backlight image of the sample image according to the pixel values of the sample image;
    adjusting the original backlight image to obtain a plurality of candidate backlight images of the sample image;
    determining the target backlight image from the plurality of candidate backlight images.
  18. The apparatus according to claim 17, wherein the processing unit is specifically configured to:
    partitioning the original backlight image to obtain a plurality of backlight partitions, wherein partitioned backlight images of the backlight partitions are different;
    and adjusting the partitioned backlight images in the backlight partitions according to the adjusting value to obtain a plurality of candidate backlight images, wherein the candidate backlight images meet backlight limiting conditions, the backlight limiting conditions are determined according to the sample image and a preset regional backlight extraction algorithm, and the value range of the adjusting value is 0-255.
  19. The apparatus of claim 17 or 18,
    the control unit is also used for controlling the backlight module to provide light rays for the liquid crystal panel according to the candidate backlight images;
    the processing unit is specifically configured to:
    calculating dynamic ranges of a plurality of display brightness values of the liquid crystal panel under illumination from the backlight module as controlled by the different candidate backlight images;
    determining the target backlight image according to the contrast of the candidate backlight images and the dynamic ranges of the display brightness values.
  20. The apparatus according to claim 19, wherein the processing unit is specifically configured to:
    performing a weighted calculation on the contrast of the candidate backlight images and the dynamic ranges of the display brightness values to determine candidate target backlight images;
    and determining the candidate target backlight image with the highest weighted score as the target backlight image, or determining the candidate target backlight image with the highest average score as the target backlight image.
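The weighted selection above might look like the following sketch. The equal weights and function name are illustrative assumptions, and in practice the two metrics would be normalized to comparable scales before combining:

```python
import numpy as np

def pick_target_backlight(contrasts, dyn_ranges, w_contrast=0.5, w_range=0.5):
    """Return the index of the candidate with the highest weighted score.

    contrasts: contrast measure of each candidate backlight image.
    dyn_ranges: measured display-brightness dynamic range per candidate.
    """
    scores = (w_contrast * np.asarray(contrasts, dtype=float)
              + w_range * np.asarray(dyn_ranges, dtype=float))
    return int(np.argmax(scores))
```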
  21. The apparatus of any one of claims 12-20, wherein the first neural network and the second neural network are both deep convolutional neural networks.
  22. The apparatus of claim 21, wherein the first neural network and the second neural network share partial parameters.
  23. A neural network-based local backlight dimming device, comprising: a processor and a memory, wherein the memory is configured to store a computer program and instructions, and the processor is configured to invoke the computer program and instructions to execute the neural network-based regional backlight dimming method according to any one of claims 1-11, so that the liquid crystal panel displays images.
  24. A liquid crystal display, comprising: a processor, a memory, a backlight module and a liquid crystal panel, wherein the memory is configured to store a computer program and instructions, and the processor is configured to invoke the computer program and instructions to execute, with the assistance of the backlight module and the liquid crystal panel, the neural network-based regional backlight dimming method according to any one of claims 1-11.
  25. A computer readable storage medium having stored therein computer readable instructions which, when run on a neural network-based local backlight dimming device, cause the neural network-based local backlight dimming device to perform the method of any of claims 1-11.
CN202080100768.XA 2020-05-20 2020-05-20 Neural network-based regional backlight dimming method and device Pending CN115516496A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/091430 WO2021232323A1 (en) 2020-05-20 2020-05-20 Local backlight dimming method and device based on neural network

Publications (1)

Publication Number Publication Date
CN115516496A true CN115516496A (en) 2022-12-23

Family

ID=78709063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080100768.XA Pending CN115516496A (en) 2020-05-20 2020-05-20 Neural network-based regional backlight dimming method and device

Country Status (2)

Country Link
CN (1) CN115516496A (en)
WO (1) WO2021232323A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746806B (en) * 2024-02-20 2024-05-10 广东中强精英电子科技有限公司 Driving method, device, equipment and storage medium of mini LED backlight module

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
TWI748035B (en) * 2017-01-20 2021-12-01 日商半導體能源硏究所股份有限公司 Display system and electronic device
CN109951594A (en) * 2017-12-20 2019-06-28 广东欧珀移动通信有限公司 Intelligent adjusting method, device, storage medium and the mobile terminal of screen intensity
CN109493814B (en) * 2019-01-15 2020-07-07 京东方科技集团股份有限公司 Picture compensation method, picture compensation device, display device and computer readable storage medium
CN110675336A (en) * 2019-08-29 2020-01-10 苏州千视通视觉科技股份有限公司 Low-illumination image enhancement method and device
CN110728637B (en) * 2019-09-21 2023-04-18 天津大学 Dynamic dimming backlight diffusion method for image processing based on deep learning
CN110838090B (en) * 2019-09-21 2023-04-21 天津大学 Backlight diffusion method for image processing based on residual error network

Also Published As

Publication number Publication date
WO2021232323A1 (en) 2021-11-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination