CN113887599A - Screen light detection model training method, and ambient light detection method and device - Google Patents

Screen light detection model training method, and ambient light detection method and device

Info

Publication number
CN113887599A
Authority
CN
China
Prior art keywords
parameter
light detection
screen
screen light
display
Prior art date
Legal status
Pending
Application number
CN202111123048.8A
Other languages
Chinese (zh)
Inventor
李正汉
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111123048.8A
Publication of CN113887599A
Priority to PCT/CN2022/119722
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a screen light detection model training method, an ambient light detection method, and an ambient light detection device, and belongs to the technical field of electronic equipment. The screen light detection model training method includes the following steps: acquiring N training samples, where each training sample includes a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by a photosensitive sensor under a target environmental condition while a target display area is displayed according to the first display parameter, the target environmental condition includes the ambient light intensity being less than or equal to a preset light intensity value, the target display area is a display area in the screen whose position matches that of the photosensitive sensor, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1; and training a neural network to be trained through the N training samples to obtain the screen light detection model.

Description

Screen light detection model training method, and ambient light detection method and device
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to a screen light detection model training method, an ambient light detection method and an ambient light detection device.
Background
Currently, many electronic devices are provided with photosensitive sensors to perform certain functions according to the light information collected by the photosensitive sensors. For example, the electronic device may select an appropriate screen brightness or shooting mode according to the ambient light information collected by the light sensor.
In some application scenarios, the light emitted by the screen of the electronic device may interfere with the acquisition of the ambient light information. In the prior art, a compensation value is usually determined according to the light-emitting color of the screen and a preset correspondence between colors and compensation values, and the light information collected by the photosensitive sensor is corrected with this compensation value to obtain the ambient light information.
However, since the preset correspondence between colors and compensation values can hardly cover all light-emitting states of the screen in practical applications, the accuracy of the determined compensation value may be poor.
Disclosure of Invention
The embodiments of the present application aim to provide a screen light detection model training method, an ambient light detection method, and an ambient light detection device, which can solve the problem in the prior art that a compensation value determined through a preset correspondence between colors and compensation values has poor accuracy.
In a first aspect, an embodiment of the present application provides a screen light detection model training method, where the method includes:
acquiring N training samples, where each training sample includes a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by a photosensitive sensor under a target environmental condition while a target display area is displayed according to the first display parameter, the target environmental condition includes the ambient light intensity being less than or equal to a preset light intensity value, the target display area is a display area in the screen whose position matches that of the photosensitive sensor, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1;
and training the neural network to be trained through the N training samples to obtain the screen light detection model.
In a second aspect, an embodiment of the present application provides an ambient light detection method, including:
acquiring a second display parameter of a target display area, a light detection value, and a screen light detection model, where the light detection value is obtained by a photosensitive sensor, and the screen light detection model is obtained by training according to the screen light detection model training method of the first aspect;
processing the second display parameter through the screen light detection model to obtain a second screen light parameter;
and determining the ambient light parameter according to the second screen light parameter and the light detection value.
In a third aspect, an embodiment of the present application provides a screen light detection model training device, including:
the system comprises a first acquisition module, a second acquisition module, a third acquisition module and a fourth acquisition module, wherein the training samples comprise first display parameters and first screen light parameters, the first screen light parameters are light detection values obtained by a photosensitive sensor under a target environment condition, and when a target display area is displayed according to the first display parameters, the target environment condition comprises that the ambient light intensity is less than or equal to a preset light intensity value, the target display area is a display area matched with the position of the photosensitive sensor in the screen, the first display parameters are display parameters of the target display area, and N is an integer greater than 1;
and the training module is used for training the neural network to be trained through the N training samples to obtain the screen light detection model.
In a fourth aspect, an embodiment of the present application provides an ambient light detection apparatus, including:
a second acquisition module, configured to acquire a second display parameter of the target display area, a light detection value, and a screen light detection model, where the light detection value is obtained by a photosensitive sensor, and the screen light detection model is obtained by training according to the screen light detection model training method of the first aspect;
the processing module is used for processing the second display parameter through the screen light detection model to obtain a second screen light parameter;
and the determining module is used for determining the ambient light parameter according to the second screen light parameter and the light detection value.
In a fifth aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect or implement the steps of the method according to the second aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect, or implement the steps of the method according to the second aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect, or to implement the method according to the second aspect.
According to the screen light detection model training method provided by the embodiments of the application, a neural network to be trained can be trained through N training samples to obtain a screen light detection model. Each training sample includes a first display parameter and a first screen light parameter, and the first screen light parameter is obtained by the photosensitive sensor under the target environmental condition while the target display area is displayed according to the first display parameter. Therefore, the screen light detection model can accurately predict the screen light parameter when the target display area is in various display states. In subsequent applications, the predicted screen light parameter can be used to compensate the light detection value obtained by the photosensitive sensor, so that the ambient light parameter can be obtained more accurately.
Drawings
FIG. 1a is an exemplary structural diagram of an electronic device in the related art;
FIG. 1b is an exemplary diagram of a side view of an electronic device in the related art;
FIGS. 2a to 2c are exemplary diagrams of status bars of electronic devices in the related art;
FIG. 3 is a schematic flowchart of a screen light detection model training method provided in an embodiment of the present application;
FIGS. 4a to 4c are exemplary diagrams of display states of the target display area;
FIG. 5 is an architectural diagram of a neural network to be trained;
FIG. 6 is a schematic flow chart of an ambient light detection method provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a screen light detection model training apparatus provided in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an ambient light detection apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
As shown in fig. 1a and 1b, fig. 1a and 1b are schematic structural diagrams of an electronic device in the related art, and the electronic device mainly includes a glass cover plate 11, a screen 12, a diffusion sheet 13, a photosensor 14, and a main board 15.
The photosensitive sensor 14 can be used to convert an optical signal into an electrical signal to obtain a light detection value. In some examples, the light detection value may be a current or voltage value generated in the photosensitive sensor 14, or may be a light intensity obtained by processing the current or voltage value, or the like.
The main board 15 can further process the light detection value obtained by the light sensor 14 and use the light detection value for other control functions.
In general, the photosensor 14 may receive ambient light and screen light emitted from the screen 12 during the process of receiving the light signal.
As shown in fig. 1b, when the photosensitive sensor 14 is located below the screen 12, the ambient light may reach the photosensitive sensor 14 after sequentially passing through the glass cover plate 11, the screen 12, and the diffusion sheet 13. The screen light emitted by the screen 12 mainly passes through the glass cover plate 11 and propagates to the external environment, while a small part of it can pass through the diffusion sheet 13 and reach the photosensitive sensor 14.
Of course, in other electronic devices, the light-sensitive sensor 14 may not be located below the screen 12. For example, there may be openings or recesses in the screen 12 in which the perpendicular projection of the light sensitive sensor 14 onto the screen 12 may be located. However, in these electronic devices, the light sensor 14 may still receive light emitted from the peripheral screen area.
In some application scenarios, the electronic device may need to acquire the ambient light parameter. Because of the influence of the light emitted by the screen 12, the photosensitive sensor 14 may receive the ambient light and the screen light at the same time, so the light detection value acquired by the photosensitive sensor 14 may not accurately reflect the actual ambient light state; that is, the ambient light parameter acquired by the electronic device may not be accurate enough.
For convenience of description, the change of the light detection value of the photosensitive sensor 14 caused by the screen light when the electronic device obtains the ambient light parameter may be referred to as a detection error. In order to obtain the ambient light parameter more accurately, the detection error is often corrected.
In the related art, in order to correct the detection error, a screen light parameter of the screen in a preset display state is obtained in advance, and a correspondence between the display state and the screen light parameter is established. In practical applications, the screen light parameter can be determined according to the display state of the screen and this correspondence, and the light detection value obtained by the photosensitive sensor is compensated with the screen light parameter to obtain the ambient light parameter.
Referring to fig. 2a to 2c, in some application scenarios, the electronic device may be in a portrait display state. However, the display content of the display area in the screen 12 that is positionally matched with the photosensitive sensor 14 (hereinafter referred to as the target display area) may vary.
For example, the target display area may display part of the content of the status bar, and in fig. 2a, the display colors of the pixel units in the target display area may be the same. In fig. 2b and fig. 2c, there may be a plurality of display colors of the pixel units in the target display area, and as the status bar icon changes, the display color corresponding to the target display area may also change accordingly.
In other application scenarios, the electronic device may be in a landscape display state, and in this case, the display content of the target display area may be more diverse.
However, in general, the detection error described above is mainly affected by the display state of the target display region in the screen 12.
In summary, when the light detection value is compensated to obtain the ambient light parameter by establishing a correspondence between display states and screen light parameters, it is difficult to exhaust all display states, so the ambient light parameter obtained in some display states is not accurate enough.
In order to solve the above problems, embodiments of the present application provide a screen light detection model training method, an ambient light detection method, and an ambient light detection device. The screen light detection model training method, the ambient light detection method and the device provided by the embodiment of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 3, the screen light detection model training method provided in the embodiment of the present application includes:
Step 301, acquiring N training samples, where each training sample includes a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by a photosensitive sensor under a target environmental condition while a target display area is displayed according to the first display parameter, the target environmental condition includes the ambient light intensity being less than or equal to a preset light intensity value, the target display area is a display area in the screen whose position matches that of the photosensitive sensor, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1;
step 302, training the neural network to be trained through the N training samples to obtain the screen light detection model.
According to the screen light detection model training method provided by the embodiments of the application, a neural network to be trained can be trained through N training samples to obtain a screen light detection model. Each training sample includes a first display parameter and a first screen light parameter, and the first screen light parameter is obtained by the photosensitive sensor under the target environmental condition while the target display area is displayed according to the first display parameter. Therefore, the screen light detection model can accurately predict the screen light parameter when the target display area is in various display states. In subsequent applications, the predicted screen light parameter can be used to compensate the light detection value obtained by the photosensitive sensor, so that the ambient light parameter can be obtained more accurately.
The screen light detection model training method provided by the embodiment of the application can be applied to a server or electronic equipment.
For example, the method can be applied to a server, the server trains a neural network to be trained by using N training samples to obtain a screen light detection model, and then the server can send the screen light detection model to at least one electronic device; or the server may also process the display parameters sent by the electronic device and send the processed screen light parameters to the electronic device.
For another example, the method can also be applied to electronic equipment such as a mobile terminal or a personal computer, and after a screen light detection model is obtained by training in the electronic equipment, the screen light detection model is directly used for determining screen light parameters subsequently; alternatively, the electronic device may also transmit the screen light detection model to other electronic devices.
For simplicity of description, the following mainly takes the case in which the screen light detection model training method is applied to an electronic device as an example.
In step 301, the electronic device may obtain a plurality of training samples. The first display parameter in the training sample may include a brightness parameter or a color parameter, etc.
In one example, the first display parameter may be directly obtained from a motherboard of the electronic device. For example, the main board may record a setting parameter of the brightness of the user, and the electronic device may directly extract the setting parameter of the brightness from the main board as the brightness parameter. Similarly, the electronic device may also extract color parameters and the like from the motherboard.
As indicated above, the first display parameter may be a display parameter of the target display area, i.e. a display parameter of the display area in the screen whose position matches that of the photosensitive sensor.
The target display area can be set according to actual needs. For example, for an electronic device whose photosensitive sensor is located below the screen, the target display area may be the display area occupied by the projection of the photosensitive sensor on the screen, a rectangular display area circumscribing that projection, or the display area enclosed after offsetting the outline of the projection outward by a preset distance, and the like, which is not specifically limited here. For an electronic device whose photosensitive sensor is not located below the screen, the target display area may be defined as the display area located within a preset distance range of the photosensitive sensor.
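As an illustrative sketch only (the pixel coordinates, the helper name, and the use of a circumscribed square are assumptions for the case of a sensor below the screen, not taken from the embodiment), the target display area could be derived from the projection of the photosensitive sensor as follows:

```python
# Hypothetical sketch: derive a circumscribed-square target display area from
# the vertical projection of the photosensitive sensor onto the screen.
# cx, cy: projection centre in pixel coordinates; r: projection radius in pixels.
def target_display_area(cx: int, cy: int, r: int, margin: int = 0):
    """Return (left, top, right, bottom) of the square display area
    circumscribing the sensor projection, optionally offset outward by margin."""
    half = r + margin
    return (cx - half, cy - half, cx + half, cy + half)

# Example: a sensor projected at (33, 33) with radius 33 px gives a 66 x 66 region.
print(target_display_area(33, 33, 33))  # (0, 0, 66, 66)
```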
When the electronic device is under the target environmental condition and the target display area is displayed according to the first display parameter, the photosensitive sensor can acquire the first screen light parameter. As described above, the light detection value may be a current or voltage value generated in the photosensitive sensor, or may be a light intensity obtained by processing the current or voltage value, or the like; accordingly, the unit of each first screen light parameter may be an optical unit or an electrical unit, etc., which is not limited here.
It is readily understood that when the electronic device is in a target environmental condition, the light detection value obtained by the light-sensitive sensor in the electronic device can be considered as a light detection value for the screen light. The target environmental condition may be that the intensity of the ambient light is less than or equal to a predetermined light intensity value, etc.
Before the first screen light parameter is obtained, the screen may be placed in a screen-off state so that the intensity of the ambient light (corresponding to the ambient light parameter) can be obtained and it can be determined whether it is less than or equal to the preset light intensity value. Of course, in practical applications, an independent sensor may also be used to detect the ambient light and determine whether its intensity is less than or equal to the preset light intensity value.
For simplicity, the target display area may be referred to as being in a display state when the target display area is displayed according to a first display parameter.
In combination with the above, when the electronic device is under the target environmental condition and the target display area is in one display state, the photosensitive sensor can acquire the light detection value of the corresponding screen light (i.e. the first screen light parameter). A training sample can be obtained by associating the first display parameter of that display state with the first screen light parameter. The other training samples can be obtained in a similar manner, which is not described in detail here.
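By way of illustration only, the association of a first display parameter with a first screen light parameter under the target environmental condition could be sketched as follows; the field names, the threshold value, and the read_light_detection_value helper are hypothetical and not taken from the embodiment.

```python
# Hypothetical sketch of assembling one training sample under the target
# environmental condition (ambient light intensity <= preset threshold).
from dataclasses import dataclass
import numpy as np

PRESET_LIGHT_INTENSITY = 1.0  # assumed threshold, in the sensor's own units

@dataclass
class TrainingSample:
    first_display_parameter: np.ndarray   # display parameter of the target display area
    first_screen_light_parameter: float   # light detection value of the screen light

def collect_sample(display_parameter, ambient_intensity, read_light_detection_value):
    """Associate the display parameter with the sensor reading only if the
    target environmental condition holds; otherwise no sample is produced."""
    if ambient_intensity > PRESET_LIGHT_INTENSITY:
        return None  # target environmental condition not met
    return TrainingSample(first_display_parameter=display_parameter,
                          first_screen_light_parameter=read_light_detection_value())
```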
In step 302, a neural network to be trained may be trained based on the N training samples, so as to obtain a screen light detection model.
For any training sample, the first screen light parameter can, to a certain extent, be regarded as a label of the first display parameter. The neural network to be trained can process the first display parameter and output a predicted value; if the predicted value is the same as or close to the first screen light parameter, the prediction result of the neural network to be trained is relatively accurate; conversely, if the difference between the predicted value and the first screen light parameter is large, the prediction result of the neural network to be trained is not accurate enough.
Whether the prediction result is accurate or not can be reflected on a loss value of a loss function preset in the neural network to be trained, and the neural network to be trained can adjust related network parameters based on the loss value.
By training the neural network to be trained with the N training samples, the network parameters can be adjusted to a better state, and the screen light detection model is obtained. This better state is reflected in that, in subsequent applications, the screen light detection model can process the input display parameter and predict a relatively accurate screen light parameter.
The screen light detection model is a fully trained neural network and can accurately predict the screen light parameter when the target display area is in various display states. Therefore, in this embodiment, the display states of the target display area do not need to be exhausted to establish a correspondence between display states and screen light parameters, and the screen light parameter can be obtained relatively accurately even for a display state that has no corresponding entry. Using the screen light parameter as a compensation value to correct the light detection value obtained by the photosensitive sensor, a more accurate ambient light parameter can also be obtained.
Optionally, the first display parameter comprises at least one of a brightness parameter and a color parameter.
As indicated above, the first display parameter may be obtained directly from the motherboard or other electronic components of the electronic device.
The first display parameter may include a plurality of luminance parameters. The plurality of luminance parameters may be sampled from a luminance range of the electronic device according to a preset sampling rule.
In one example, the sampling rules described above may be determined in conjunction with empirical data. For example, according to the sampling rule, sampling may be performed at a smaller sampling interval in a luminance range common to the user and at a larger sampling interval in a luminance range not common to the user.
Of course, in practical applications, the plurality of luminance parameters may be extracted in an average sampling manner.
In combination with a specific application example, if the whole luminance range of the electronic device is divided into 2047 luminance levels, a plurality of luminance levels may be extracted from the 2047 luminance levels according to a preset sampling rule to determine the luminance parameter.
Specifically, sampling may be performed between brightness levels 8 and 208, between 208 and 508, and between 508 and 2047, at a sampling interval of 40. In this way, the number of extracted brightness parameters can be controlled while improving the reasonableness of the training samples, which helps ensure the quality of the trained screen light detection model.
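Purely as an illustration, and taking the stated ranges and the sampling interval of 40 literally (the boundary handling is an assumption), the brightness levels could be collected as follows:

```python
# Sketch: sample brightness levels from the 2047-level range using the ranges
# and interval stated above (boundary handling is assumed, values illustrative).
brightness_levels = sorted(set(
    list(range(8, 208 + 1, 40)) +
    list(range(208, 508 + 1, 40)) +
    list(range(508, 2047 + 1, 40))
))
print(len(brightness_levels), brightness_levels[:5], brightness_levels[-1])
```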
Of course, the first display parameter may comprise a color parameter.
In general, the color parameters may be represented by RGB values. In other words, each color parameter may include at least three types of values, which are values in three color channels of red, green, and blue, respectively, and a value range of the value in each color channel is 0 to 255.
Of course, the color parameters may also be represented by gray values. That is, each color parameter may include a class of gray values, which range from 0 to 255.
For simplicity, the following description mainly takes the color parameter represented by RGB values as an example. As indicated above, the target display area may include a plurality of pixel units, and each pixel unit may correspond to an RGB value; thus, one color parameter may include a set of values. For example, if the target display area includes 66 × 66 pixel units, one color parameter may include 66 × 66 RGB values. Each RGB value may in turn comprise three values, so that one color parameter may be represented by a 66 × 66 × 3 matrix of values.
One color parameter can be obtained by assigning the respective RGB values to the 66 × 66 pixel units. In practical applications, the color parameters of different training samples may be the same or different.
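By way of illustration only, a pure-color parameter and a two-tone parameter such as those of fig. 4a and 4b could be constructed as follows for a 66 × 66 target display area; the helper names and the specific RGB values chosen for white and gray are assumptions.

```python
# Sketch: build color parameters for a 66 x 66 target display area.
import numpy as np

def solid_color_parameter(rgb):
    """Pure color: every target pixel unit has the same RGB value."""
    return np.tile(np.asarray(rgb, dtype=np.uint8), (66, 66, 1))   # 66 x 66 x 3

def two_tone_parameter(left_rgb, right_rgb, left_cols):
    """Non-pure color, e.g. fig. 4a/4b: left columns one color, right columns another."""
    color = np.empty((66, 66, 3), dtype=np.uint8)
    color[:, :left_cols] = left_rgb
    color[:, left_cols:] = right_rgb
    return color

white, gray = (255, 255, 255), (128, 128, 128)   # assumed RGB values
fig_4a_like = two_tone_parameter(white, gray, left_cols=60)  # 60 white columns, 6 gray
```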
In one embodiment, the first display parameter may include both a brightness parameter and a color parameter. In connection with the above example, the plurality of brightness parameters and the plurality of color parameters may be freely combined to form N first display parameters. The first screen light parameter of the target display area when it is displayed with each first display parameter can then be acquired through the photosensitive sensor, and the first display parameter is associated with the first screen light parameter to obtain the N training samples.
In general, the brightness parameter may be considered as the overall brightness of the screen, and the color parameter may correspond to the display content of the local portion of the target display area, both of which may affect the light detection value of the photosensitive sensor. The first display parameters comprise the brightness parameters and the color parameters, so that the factors influencing the first screen light parameters can be considered comprehensively, and the screen light parameters can be predicted more accurately by a subsequently obtained screen light detection model.
Optionally, the target display area includes P target pixel units, the first display parameter includes P color parameters corresponding to the P target pixel units one to one, and P is an integer greater than 1;
at least part of the N training samples comprise first display parameters with at least two color parameters.
In connection with the above example, the target display area may include 66 × 66 pixel units, and the 66 × 66 pixel units may correspond to P target pixel units, where P is 66 × 66. The color parameter corresponding to each target pixel unit may be an RGB value or a gray scale value of the target pixel unit, and the RGB value will be mainly used for description below.
If the target display area displays a pure color, each target pixel unit of the target display area displays the same color; specifically, among the 66 × 66 target pixel units, the RGB values of the respective target pixel units are all equal.
If the target display area displays a non-pure color, the RGB values of at least two target pixel units of 66 × 66 target pixel units of the target display area are not equal.
In this embodiment, at least some of the N training samples include first display parameters in which at least two color parameters exist, that is, at least one training sample includes first display parameters in which RGB values of at least two target pixel units are not equal.
In practical applications, the RGB values of the 66 × 66 target pixel units in one first display parameter may correspond to two or more different RGB values, that is, two or more light-emitting colors.
For simplicity, a first display parameter including at least two different color parameters may be referred to as a third display parameter. The following takes as an example the case where two distinct RGB values exist among the P color parameters included in each third display parameter, corresponding to white light and gray light respectively.
As shown in fig. 4a to 4c, these drawings are exemplary diagrams of some display states of the target display area. In fig. 4a to 4c, a circle represents an outline of a vertical projection of the photosensor on the screen, and the target display area is a circumscribed square of the circle, corresponding to 66 × 66 target pixel units.
In fig. 4a, the left 60 columns of target pixel cells may emit white light and the right 6 columns emit gray light; in fig. 4b, the left 30 columns of target pixel cells may emit white light and the right 36 columns emit gray light; in fig. 4c, 66 columns of target pixel cells all emit gray light.
It can be seen that, when the target display area is in the display state of fig. 4a or 4b, the obtained first display parameter is a third display parameter; when the target display area is in the display state of fig. 4c, the RGB values of the respective target pixel units are all equal.
Of course, in practical applications, the display state of the target display area, such as the type, number and distribution of the light emission colors of the target pixel units, can be selected according to the requirement.
In this embodiment, at least one third display parameter exists in the N first display parameters included in the N training samples, which is helpful for improving the generalization capability of the trained screen light detection model, that is, the screen light detection model can effectively predict the screen light parameters for multiple display states of the target display region.
Optionally, the neural network to be trained comprises a first decoder and a second decoder;
the training of the neural network to be trained through the N training samples to obtain the screen light detection model includes:
performing convolution processing on the training sample through the first decoder to obtain a target feature map;
performing convolution processing on the target feature map through the second decoder to obtain a vector expression;
inputting the vector expression and the first screen light parameter into a preset loss function to obtain a loss value;
and obtaining the screen light detection model under the condition that the loss value is smaller than a loss value threshold.
The architecture and operation of the neural network to be trained in the present embodiment are described below with reference to a specific application example.
As shown in fig. 5, the neural network to be trained may include a first decoder and a second decoder. The first decoder includes three layers of neural networks for outputting feature maps, denoted as three layers of first neural networks; the second decoder includes two layers of neural networks for outputting feature vectors, denoted as two layers of second neural networks, as shown in fig. 5.
Each first neural network may include a depthwise separable convolutional network (depthwise separable convolution), a batch normalization layer (BatchNorm), a linear rectification function (ReLU), and a max pooling layer (MaxPool) connected in sequence.
The input of the first decoder may receive first display parameters, each of which may be a 66 × 66 × 4 data matrix, where 66 × 66 corresponds to the number of target pixel units of the target display area, and 4 corresponds to the values of the red, green, and blue color channels and the brightness parameter, respectively.
The first display parameter is input to the first-layer first neural network, whose depthwise separable convolutional network convolves the first display parameter with 64 convolution kernels of size 3 × 3. The convolution result is normalized and then non-linearly activated using the linear rectification function. Finally, max pooling is applied to output a first feature map.
The first feature map is input to the second-layer first neural network, whose depthwise separable convolutional network convolves the first feature map with 32 convolution kernels of size 3 × 3. The convolution result is normalized and then non-linearly activated using the linear rectification function. Finally, max pooling is applied to output a second feature map.
The second feature map is input to the third-layer first neural network, whose depthwise separable convolutional network convolves the second feature map with 32 convolution kernels of size 3 × 3. The convolution result is normalized and then non-linearly activated using the linear rectification function. Finally, max pooling is applied to output a third feature map. The third feature map corresponds to the target feature map described above.
The third feature map is input to the first-layer second neural network, which flattens the third feature map into a vector and feeds the vector into a fully connected layer with 128 nodes. Nonlinear activation is performed using the linear rectification function, and a first vector is output.
The first vector is input to the fully connected layer of the second-layer second neural network, which has 1 node. Nonlinear activation is performed using the linear rectification function, and a predicted value for the first display parameter is output, corresponding to the vector expression described above.
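The following sketch, written in PyTorch purely for illustration, mirrors the architecture described above: three depthwise separable convolution blocks with 64, 32 and 32 kernels of size 3 × 3, each followed by batch normalization, ReLU and max pooling, then fully connected layers of 128 and 1 nodes. It is one interpretation of the description, not the patented implementation; the padding, the 2 × 2 pooling window, and the depthwise/pointwise decomposition are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """One layer of the 'first decoder': depthwise separable convolution +
    BatchNorm + ReLU + MaxPool (padding and pooling size are assumptions)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise 3x3
            nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.block(x)

class ScreenLightModel(nn.Module):
    def __init__(self):
        super().__init__()
        # "First decoder": three feature-map layers with 64, 32, 32 kernels.
        self.first_decoder = nn.Sequential(
            DepthwiseSeparableBlock(4, 64),
            DepthwiseSeparableBlock(64, 32),
            DepthwiseSeparableBlock(32, 32),
        )
        # "Second decoder": flatten, 128-node FC + ReLU, then 1-node FC + ReLU.
        self.second_decoder = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (batch, 4, 66, 66), i.e. RGB + brightness, channel-first
        return self.second_decoder(self.first_decoder(x))

# Dummy input: one 66 x 66 x 4 display parameter arranged channel-first.
y = ScreenLightModel()(torch.rand(1, 4, 66, 66))
print(y.shape)  # torch.Size([1, 1]), the predicted vector expression
```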
The neural network to be trained can also have a preset loss function, the vector expression and the first screen light parameter are input into the preset loss function, and a loss value can be obtained through calculation.
In general, the larger the difference between the vector expression and the first screen light parameter, the larger the loss value of the preset loss function, and vice versa. According to the loss value, the network parameters of the neural network to be trained can be adjusted through back propagation, so that the loss value obtained in subsequent training keeps decreasing until it is smaller than a preset loss value threshold. That is, the screen light detection model can be obtained by training the neural network to be trained with the goal of making the loss value smaller than the loss value threshold.
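Continuing the illustrative PyTorch sketch, a training procedure that keeps adjusting the network parameters until the loss value falls below the preset loss value threshold could look as follows; the mean squared error loss, the Adam optimizer, the learning rate, and checking the most recent batch loss against the threshold are all assumptions, since the embodiment only requires a preset loss function.

```python
import torch
import torch.nn as nn

def train_screen_light_model(model, dataloader, loss_threshold=1e-3, max_epochs=100):
    """Train until the loss value is smaller than the preset loss value threshold.
    The MSE loss and Adam optimizer are assumptions; the embodiment only requires
    a preset loss function whose value reflects prediction accuracy."""
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(max_epochs):
        for display_params, screen_light_params in dataloader:
            optimizer.zero_grad()
            prediction = model(display_params)                 # vector expression
            loss = criterion(prediction.squeeze(1), screen_light_params)
            loss.backward()                                    # back-propagate the loss
            optimizer.step()                                   # adjust network parameters
        if loss.item() < loss_threshold:                       # most recent loss checked (assumption)
            return model                                       # screen light detection model
    return model
```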
In practical application, the architecture or the hyper-parameters of the neural network to be trained can be adjusted according to actual needs, and a screen light detection model can be obtained through training.
As shown in fig. 6, an embodiment of the present application further provides an ambient light detection method, including:
Step 601, acquiring a second display parameter of a target display area, a light detection value, and a screen light detection model, where the light detection value is obtained through a photosensitive sensor, and the screen light detection model is obtained through training according to the screen light detection model training method described above;
step 602, processing the second display parameter through the screen light detection model to obtain a second screen light parameter;
step 603, determining an ambient light parameter according to the second screen light parameter and the light detection value.
In the ambient light detection method provided by the embodiments of the application, the screen light detection model is obtained by training through the screen light detection model training method, and can process the second display parameter of the target display area in various display states to obtain a relatively accurate second screen light parameter. The ambient light parameter determined according to the second screen light parameter and the light detection value can effectively overcome the detection error that the screen light introduces into the ambient light parameter, improving the accuracy of the ambient light parameter.
The following is an example of the principle of determining the ambient light parameter based on the second screen light parameter and the light detection value.
Referring to fig. 1a and 1b, ambient light is incident on the photosensitive sensor through the glass cover plate, the screen, and other media. Let the intensity of this portion of ambient light that can be detected be x1. The process in which the ambient light propagates through media such as the glass cover plate, the screen, and the air gap and is detected by the photosensitive sensor is recorded as:
s = g(x1)    (1)
where g() represents the influence, on the measurement result, of various factors during the propagation and detection of the ambient light, and s represents the measurement result of the photosensitive sensor for the ambient light, corresponding to the ambient light parameter.
When the screen emits light, a portion of the light emitted by the screen propagates through the media and is detected by the photosensitive sensor. Suppose that the intensity of light emitted from the effective area of the screen (corresponding to the target display area) is x2. Then the process in which this portion of light propagates through the media and is detected is:
n = h(x2)    (2)
where h() represents the influence, on the measurement result, of various factors during the propagation and detection of this portion of the screen light, and n represents the measurement result of the photosensitive sensor for the screen light, corresponding to the screen light parameter.
When the ambient light and the screen light are present simultaneously, the measurement result of the photosensitive sensor is denoted as y. Assuming that the detection results of the ambient light and the screen light satisfy a linear superposition relationship, the whole detection process can be obtained from equations (1) and (2):
y = s + n = g(x1) + h(x2)    (3)
The measurement component n produced by the screen light can be regarded as additive noise superimposed on the desired ambient light measurement s.
In connection with the above embodiment, the light detection value obtained by the photosensitive sensor corresponds to y, and the second screen light parameter obtained by processing the second display parameter through the screen light detection model corresponds to n. By removing the noise n from y, a more accurate ambient light measurement can be retained.
In one example, the noise n in y can be removed by a preset compensation algorithm; for example, under the definition of equation (3), the compensation algorithm can correspond to calculating (y - n). In practical applications, the compensation algorithm may also be set to other calculation manners as needed, which is not illustrated here.
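Purely as a sketch under the linear superposition assumption of equation (3), this compensation could be expressed as:

```python
def ambient_light_parameter(light_detection_value, second_screen_light_parameter):
    """Compensation under equation (3): remove the screen-light component n
    from the sensor measurement y (other compensation schemes are possible)."""
    return light_detection_value - second_screen_light_parameter  # s = y - n
```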
As indicated above, the screen light detection model may be trained on a server or on an electronic device, and the server or electronic device may transmit the screen light detection model to other electronic devices. The light-emitting performance of the screen and the measurement accuracy of the photosensitive sensor may vary between different electronic devices, so the light detection values measured by different electronic devices may differ even when the display is performed with the same display parameters.
In addition, the position of the photosensitive sensor may differ between electronic devices, and the relative position of the target display area in the screen is determined accordingly, which also leads to differences in the light detection values measured by different electronic devices.
In order to overcome the problem that the light detection values measured by the photosensitive sensors of different electronic devices differ greatly under the same display parameters, optionally, the step 603 of determining the ambient light parameter according to the second screen light parameter and the light detection value may include the following steps (a brief sketch is given after this list):
acquiring a correction coefficient;
correcting the second screen light parameter according to the correction coefficient;
and compensating the light detection value through the corrected second screen light parameter to obtain an ambient light parameter.
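By way of illustration only, and assuming a simple multiplicative correction (the exact form of the correction is not fixed by the embodiment), these steps could be sketched as:

```python
def ambient_light_with_correction(light_detection_value,
                                  second_screen_light_parameter,
                                  correction_coefficient):
    """Correct the predicted screen light parameter for this device, then
    compensate the light detection value (multiplicative correction assumed)."""
    corrected = correction_coefficient * second_screen_light_parameter
    return light_detection_value - corrected
```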
In this embodiment, the correction coefficient may be stored in the electronic device in advance. The specific acquisition process can be seen in the following examples.
In one example, an electronic device for screen light detection model training (hereinafter referred to as a first electronic device) and an electronic device for which a correction coefficient is to be acquired (hereinafter referred to as a second electronic device) may be placed in the same environmental condition and the same display state.
For example, the first electronic device and the second electronic device are placed at the same time in an environment with uniform light, and their display content and display brightness are adjusted to be consistent. The light detection values measured by the photosensitive sensors in the two electronic devices are recorded respectively. The light intensity in the environment is then adjusted, and the light detection values measured by the photosensitive sensors in the two electronic devices are recorded multiple times. Finally, the correction coefficient is obtained by collating and calculating the recorded light detection values.
Of course, the correction coefficient may also be obtained by other manners, which is not illustrated here.
In one example, the correction coefficient may be obtained and stored once, and in the process of subsequent application, the correction coefficient is directly called to correct the second screen lighting parameter, without retesting and calculating the correction coefficient.
The corrected second screen light parameter can accurately reflect the ambient light detection error caused by the screen light of the second electronic device; compensating the light detection value with the corrected second screen light parameter can therefore further improve the accuracy of the obtained ambient light parameter.
Optionally, the step of obtaining the correction coefficient may specifically include:
acquiring a third screen light parameter, wherein the third screen light parameter is a light detection value obtained by a photosensitive sensor under the target environment condition when a target display area is displayed according to a fourth display parameter;
processing the fourth display parameter through the screen light detection model to obtain a fourth screen light parameter;
and calculating a correction coefficient according to the third screen light parameter and the fourth screen light parameter.
As shown above, when the electronic device is in a target environmental condition, such as an environment in which the intensity of ambient light is less than or equal to the light intensity threshold, the light detection value obtained by the photosensor in the electronic device may be considered to be the light detection value of the screen light.
In this embodiment, the fourth display parameter may be a display parameter when the target display area is in any display state. When the screen displays according to the fourth display parameter, if the electronic device is placed in an environment where the light intensity of the ambient light is less than or equal to the light intensity threshold, the light detection value obtained by the photosensitive sensor is the third screen light parameter.
The screen light detection model may process the fourth display parameter to obtain a fourth screen light parameter.
In one example, a ratio of the third screen light parameter to the fourth screen light parameter may be used as the correction coefficient.
In another example, the target display area may be sequentially in multiple display states, and in each display state, a third screen light parameter and a fourth screen light parameter are obtained to calculate a correction coefficient in each display state; then, the median, average or mode of these correction coefficients is taken as the final correction coefficient.
In this embodiment, the third screen light parameter is directly obtained by the photosensitive sensor, and the correction coefficient can be calculated efficiently from the third screen light parameter and the fourth screen light parameter obtained through the screen light detection model.
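As an illustrative sketch of this calculation, taking the ratio of the third screen light parameter to the fourth screen light parameter per display state and aggregating with the median (the average or mode being the other options mentioned above):

```python
import statistics

def correction_coefficient(third_screen_light_params, fourth_screen_light_params):
    """Ratio of measured (third) to predicted (fourth) screen light parameter per
    display state, aggregated with the median (mean or mode are also options)."""
    ratios = [measured / predicted
              for measured, predicted in zip(third_screen_light_params,
                                             fourth_screen_light_params)]
    return statistics.median(ratios)
```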
It should be noted that, in the screen light detection model training method provided in the embodiments of the present application, the executing subject may be a screen light detection model training device, or a control module in the screen light detection model training device for executing the screen light detection model training method. In the embodiments of the present application, the case where the screen light detection model training device executes the screen light detection model training method is taken as an example to describe the screen light detection model training device provided in the embodiments of the present application.
As shown in fig. 7, the screen light detection model training apparatus 700 provided in the embodiment of the present application includes:
a first obtaining module 701, configured to obtain N training samples, where each training sample includes a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by a photosensitive sensor under a target environmental condition while a target display area is displayed according to the first display parameter, the target environmental condition includes the ambient light intensity being less than or equal to a preset light intensity value, the target display area is a display area in the screen whose position matches that of the photosensitive sensor, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1;
the training module 702 is configured to train the neural network to be trained through the N training samples, so as to obtain a screen light detection model.
Optionally, the target display area includes P target pixel units, the first display parameter includes P color parameters corresponding to the P target pixel units one to one, and P is an integer greater than 1;
at least part of the N training samples comprise first display parameters with at least two color parameters.
The screen light detection model training device 700 provided by the embodiments of the application trains the neural network to be trained based on the N training samples, and the obtained screen light detection model can accurately predict the screen light parameter in various display states of the target display area, thereby improving the accuracy of the subsequently obtained ambient light parameter.
Similarly, in the ambient light detection method provided in the embodiment of the present application, the execution subject may be an ambient light detection device, or a control module in the ambient light detection device for executing the ambient light detection method. In the embodiment of the present application, an ambient light detection method executed by an ambient light detection device is taken as an example, and the ambient light detection device provided in the embodiment of the present application is described.
As shown in fig. 8, an embodiment of the present application further provides an ambient light detection apparatus 800, including:
a second obtaining module 801, configured to obtain a second display parameter, a light detection value, and a screen light detection model of the target display area; the light detection value is obtained through a photosensitive sensor, and the screen light detection model is obtained through training according to the screen light detection model training method;
the processing module 802 is configured to process the second display parameter through the screen light detection model to obtain a second screen light parameter;
the determining module 803 is configured to determine the ambient light parameter according to the second screen light parameter and the light detection value.
Optionally, the determining module 803 includes:
a first acquisition unit configured to acquire a correction coefficient;
the correction unit is used for correcting the second screen light parameter according to the correction coefficient;
and the compensation unit is used for compensating the light detection value through the corrected second screen light parameter to obtain an ambient light parameter.
Optionally, the first obtaining unit may include:
the acquisition subunit is used for acquiring a third screen light parameter, wherein the third screen light parameter is a light detection value obtained by the photosensitive sensor under the target environment condition when the target display area is displayed according to the fourth display parameter;
the processing subunit is used for processing the fourth display parameter through the screen light detection model to obtain a fourth screen light parameter;
and the calculating subunit is used for calculating the correction coefficient according to the third screen light parameter and the fourth screen light parameter.
In the ambient light detection device 800 provided in the embodiments of the present application, the screen light detection model is obtained by training through the screen light detection model training method, and can process the second display parameter of the target display area in various display states to obtain a relatively accurate second screen light parameter. The ambient light parameter determined according to the second screen light parameter and the light detection value can effectively overcome the detection error that the screen light introduces into the ambient light parameter, improving the accuracy of the ambient light parameter. Moreover, correcting the second screen light parameter with the correction coefficient can compensate for the differences in light detection values obtained by different electronic devices, further improving the accuracy of the ambient light parameter.
The screen light detection model training device and the ambient light detection device in the embodiment of the present application may be devices, or may be components, integrated circuits, or chips in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The screen light detection model training device and the ambient light detection device in the embodiment of the present application may be devices having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The screen light detection model training device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 3 to 5, and the ambient light detection device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 6, and is not described here again to avoid repetition.
Optionally, as shown in fig. 9, an embodiment of the present application further provides an electronic device 900, which includes a processor 901, a memory 902, and a program or instruction stored in the memory 902 and executable on the processor 901. When executed by the processor 901, the program or instruction implements each process of the above screen light detection model training method or ambient light detection method embodiments and achieves the same technical effects; to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so that charging, discharging, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are not repeated here.
The processor 1010 is configured to obtain N training samples, where the training samples include a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by the photosensitive sensor under a target environment condition and when the target display area is displayed according to the first display parameter, the target environment condition includes that the ambient light intensity is less than or equal to a preset light intensity value, the target display area is a display area in the screen that matches the position of the photosensitive sensor, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1; and to train the neural network to be trained through the N training samples to obtain the screen light detection model.
The electronic device provided in the embodiment of the present application can train the neural network to be trained through N training samples to obtain the screen light detection model, where each training sample includes a first display parameter and a first screen light parameter, and the first screen light parameter is obtained by the photosensitive sensor under the target environment condition while the target display area is displayed according to the first display parameter. Therefore, the screen light detection model can accurately predict the screen light parameter when the target display area is in various display states. In subsequent applications, the predicted screen light parameter can be used to compensate the light detection value obtained by the photosensitive sensor, so that the ambient light parameter can be obtained more accurately.
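As a minimal sketch of this training step only (assuming, beyond what the embodiment specifies, that the first display parameter is a flattened vector of per-pixel color parameters, that the first screen light parameter is a single value, and that a small fully connected network with mean-squared-error loss is sufficient), the procedure could look as follows in PyTorch; all names and the network shape are illustrative assumptions.

import torch
from torch import nn

def train_screen_light_model(display_params, screen_light_values,
                             epochs=200, lr=1e-3):
    """display_params: (N, D) float tensor, one row per training sample
    (e.g. the P per-pixel RGB color parameters flattened to D = 3 * P).
    screen_light_values: (N, 1) float tensor of light detection values measured
    under the target environment condition (ambient light <= preset value)."""
    model = nn.Sequential(                 # small MLP, chosen only for illustration
        nn.Linear(display_params.shape[1], 64),
        nn.ReLU(),
        nn.Linear(64, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        predicted = model(display_params)              # predicted screen light parameter
        loss = loss_fn(predicted, screen_light_values)
        loss.backward()
        optimizer.step()
    return model  # the trained screen light detection model

Once trained in this way, the model maps any display parameter of the target display area to an estimated screen light parameter, which is what the compensation described above relies on.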
Optionally, the target display area includes P target pixel units, the first display parameter includes P color parameters corresponding to the P target pixel units one to one, and P is an integer greater than 1;
at least part of the N training samples comprise first display parameters with at least two color parameters.
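For illustration only (the RGB encoding, the value range, and the helper name are assumptions rather than part of the embodiment), a first display parameter covering P target pixel units could be generated as follows, ensuring that at least two different color parameters are present so that such samples exercise mixed-color display states:

import numpy as np

def make_first_display_parameter(p, rng=None):
    """Return a (P, 3) array of per-pixel RGB color parameters in [0, 255]."""
    rng = np.random.default_rng() if rng is None else rng
    colors = rng.integers(0, 256, size=(p, 3))
    # If every pixel happened to get the same color, perturb one pixel so that
    # at least two different color parameters exist in this sample.
    if np.all(colors == colors[0]):
        colors[0] = (colors[0] + 1) % 256
    return colors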
Optionally, the processor 1010 may be further configured to acquire a second display parameter, a light detection value, and a screen light detection model of the target display area; the light detection value is obtained through a photosensitive sensor, and the screen light detection model is obtained through training according to the screen light detection model training method; processing the second display parameter through the screen light detection model to obtain a second screen light parameter; and determining the ambient light parameter according to the second screen light parameter and the light detection value.
Optionally, the processor 1010 may be further configured to obtain a correction coefficient; correcting the second screen light parameter according to the correction coefficient; and compensating the light detection value through the corrected second screen light parameter to obtain an ambient light parameter.
Optionally, the processor 1010 may be further configured to acquire a third screen light parameter, where the third screen light parameter is a light detection value acquired by the light sensor under the target environment condition and when the target display area is displayed according to the fourth display parameter; processing the fourth display parameter through the screen light detection model to obtain a fourth screen light parameter; and calculating a correction coefficient according to the third screen light parameter and the fourth screen light parameter.
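A hedged sketch of this calibration step (the ratio form and all names are assumptions, not the claimed method): the correction coefficient could be taken as the ratio of the measured third screen light parameter to the fourth screen light parameter predicted by the model for the same fourth display parameter.

def compute_correction_coefficient(screen_light_model, fourth_display_parameter,
                                   third_screen_light):
    """third_screen_light: light detection value measured by this device's
    photosensitive sensor under the target environment condition while the
    target display area shows fourth_display_parameter."""
    fourth_screen_light = screen_light_model(fourth_display_parameter)
    # The measured-to-predicted ratio absorbs per-device differences in sensor
    # sensitivity and panel characteristics; a near-zero prediction would need
    # special handling in practice.
    return third_screen_light / fourth_screen_light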
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the screen light detection model training method or the ambient light detection method, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here.
The processor is the processor in the electronic device in the above embodiments. The readable storage medium includes computer-readable storage media such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the screen light detection model training method or the ambient light detection method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A screen light detection model training method is characterized by comprising the following steps:
acquiring N training samples, wherein the training samples comprise a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by a photosensitive sensor under a target environment condition and when a target display area is displayed according to the first display parameter, the target environment condition comprises that the ambient light intensity is less than or equal to a preset light intensity value, the target display area is a display area matched with the photosensitive sensor in position in a screen, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1;
and training the neural network to be trained through the N training samples to obtain the screen light detection model.
2. The method according to claim 1, wherein the target display area comprises P target pixel units, the first display parameters comprise P color parameters corresponding to the P target pixel units one by one, and P is an integer greater than 1;
at least two color parameters exist in first display parameters included in at least part of the N training samples.
3. An ambient light detection method, comprising:
acquiring a second display parameter, a light detection value and a screen light detection model of a target display area; the light detection value is obtained by a photosensitive sensor, and the screen light detection model is trained according to the screen light detection model training method as claimed in claim 1 or 2;
processing the second display parameter through the screen light detection model to obtain a second screen light parameter;
and determining an ambient light parameter according to the second screen light parameter and the light detection value.
4. The method of claim 3, wherein determining an ambient light parameter from the second screen light parameter and the light detection value comprises:
acquiring a correction coefficient;
correcting the second screen light parameter according to the correction coefficient;
and compensating the light detection value through the corrected second screen light parameter to obtain an environment light parameter.
5. The method of claim 4, wherein obtaining the correction factor comprises:
acquiring a third screen light parameter, wherein the third screen light parameter is a light detection value obtained by the photosensitive sensor under a target environment condition when the target display area is displayed according to a fourth display parameter;
processing the fourth display parameter through the screen light detection model to obtain a fourth screen light parameter;
and calculating the correction coefficient according to the third screen light parameter and the fourth screen light parameter.
6. A screen light detection model training device, comprising:
the first acquisition module is used for acquiring N training samples, wherein the training samples comprise a first display parameter and a first screen light parameter, the first screen light parameter is a light detection value obtained by a photosensitive sensor under a target environment condition and when a target display area is displayed according to the first display parameter, the target environment condition comprises that the ambient light intensity is less than or equal to a preset light intensity value, the target display area is a display area matched with the photosensitive sensor in position in a screen, the first display parameter is a display parameter of the target display area, and N is an integer greater than 1;
and the training module is used for training the neural network to be trained through the N training samples to obtain the screen light detection model.
7. The apparatus according to claim 6, wherein the target display area comprises P target pixel units, the first display parameter comprises P color parameters corresponding to the P target pixel units one by one, and P is an integer greater than 1;
at least two color parameters exist in first display parameters included in at least part of the N training samples.
8. An ambient light detection device, comprising:
the second acquisition module is used for acquiring a second display parameter, a light detection value and a screen light detection model of the target display area; the light detection value is obtained by a photosensitive sensor, and the screen light detection model is trained according to the screen light detection model training method as claimed in claim 1 or 2;
the processing module is used for processing the second display parameter through the screen light detection model to obtain a second screen light parameter;
and the determining module is used for determining the ambient light parameter according to the second screen light parameter and the light detection value.
9. The apparatus of claim 8, wherein the determining module comprises:
a first acquisition unit configured to acquire a correction coefficient;
the correction unit is used for correcting the second screen light parameter according to the correction coefficient;
and the compensation unit is used for compensating the light detection value through the corrected second screen light parameter to obtain an environment light parameter.
10. The apparatus of claim 9, wherein the first obtaining unit comprises:
the acquisition subunit is configured to acquire a third screen light parameter, where the third screen light parameter is a light detection value obtained by the photosensor under a target environment condition and when the target display area is displayed according to a fourth display parameter;
the processing subunit is configured to process the fourth display parameter through the screen light detection model to obtain a fourth screen light parameter;
and the calculating subunit is configured to calculate the correction coefficient according to the third screen light parameter and the fourth screen light parameter.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the screen light detection model training method as claimed in claim 1 or 2, or implementing the steps of the ambient light detection method as claimed in any one of claims 3-5.
12. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the screen light detection model training method of claim 1 or 2, or implement the steps of the ambient light detection method of any one of claims 3 to 5.
CN202111123048.8A 2021-09-24 2021-09-24 Screen light detection model training method, and ambient light detection method and device Pending CN113887599A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111123048.8A CN113887599A (en) 2021-09-24 2021-09-24 Screen light detection model training method, and ambient light detection method and device
PCT/CN2022/119722 WO2023045884A1 (en) 2021-09-24 2022-09-19 Screen light detection model training method, ambient light detection method, and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111123048.8A CN113887599A (en) 2021-09-24 2021-09-24 Screen light detection model training method, and ambient light detection method and device

Publications (1)

Publication Number Publication Date
CN113887599A true CN113887599A (en) 2022-01-04

Family

ID=79006569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111123048.8A Pending CN113887599A (en) 2021-09-24 2021-09-24 Screen light detection model training method, and ambient light detection method and device

Country Status (2)

Country Link
CN (1) CN113887599A (en)
WO (1) WO2023045884A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128166B (en) * 2023-04-17 2023-06-23 广州云硕科技发展有限公司 Data visualization processing method and device for intelligent traffic

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112992056A (en) * 2019-12-16 2021-06-18 北京小米移动软件有限公司 Ambient light detection method and apparatus, electronic device, and storage medium
CN113188656A (en) * 2021-06-28 2021-07-30 深圳小米通讯技术有限公司 Ambient light intensity detection method, detection device, electronic equipment and storage medium
CN113887599A (en) * 2021-09-24 2022-01-04 维沃移动通信有限公司 Screen light detection model training method, and ambient light detection method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045884A1 (en) * 2021-09-24 2023-03-30 维沃移动通信有限公司 Screen light detection model training method, ambient light detection method, and apparatus
CN114220381A (en) * 2022-02-22 2022-03-22 深圳灏鹏科技有限公司 Display brightness control method, device, equipment and storage medium
CN116682383A (en) * 2023-05-30 2023-09-01 惠科股份有限公司 Display panel, backlight compensation method thereof and display device
CN116682383B (en) * 2023-05-30 2024-05-03 惠科股份有限公司 Display panel, backlight compensation method thereof and display device

Also Published As

Publication number Publication date
WO2023045884A1 (en) 2023-03-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination