CN115580781A - Exposure parameter adjusting method and device, electronic equipment and storage medium - Google Patents

Exposure parameter adjusting method and device, electronic equipment and storage medium

Info

Publication number
CN115580781A
Authority
CN
China
Prior art keywords
picture
feature
histogram
brightness
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211130692.2A
Other languages
Chinese (zh)
Inventor
王楚鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211130692.2A priority Critical patent/CN115580781A/en
Publication of CN115580781A publication Critical patent/CN115580781A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an exposure parameter adjusting method and device, electronic equipment and a storage medium, and belongs to the technical field of image processing. The method for adjusting the exposure parameters comprises the following steps: obtaining semantic features of a first picture, and obtaining brightness features based on exposure parameters, a color histogram and a brightness histogram of the first picture; performing feature fusion on the semantic features and the brightness features to obtain fusion features; acquiring the average reflectivity of the first picture based on the fusion characteristics; based on the average reflectivity, exposure parameters for the shot are adjusted.

Description

Exposure parameter adjusting method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an exposure parameter adjusting method and device, an electronic device and a storage medium.
Background
At present, in the shooting process, Automatic Exposure (AE) can produce, under different lighting conditions and scenes, pictures whose brightness matches human visual perception, so that the brightness of objects in the pictures appears more realistic.
However, for shooting scenes dominated by a large area of white (for example, a snow scene or a sheet of white paper) or a large area of black (for example, a gray wall or a black chassis), existing automatic exposure methods are prone to inaccurate exposure: a large-area black scene is overexposed, so black objects in the picture appear too bright, while a large-area white scene is underexposed, so white objects in the picture appear too dark.
At present, the reflectivity of the shooting scene in a picture can be acquired by deep learning using the semantic information of the picture (such as texture, contour, and object type), and exposure compensation can then be performed according to the acquired reflectivity to reach a proper exposure and alleviate the problem of inaccurate exposure. The reflectivity of an object refers to its ability to reflect light and is determined by properties of the object itself (for example, its material and surface roughness).
However, existing deep-learning methods for acquiring the reflectivity of the shooting scene in a picture are error-prone for large-area white or large-area black shooting scenes: the acquired reflectivity has poor accuracy, the exposure parameters are adjusted inaccurately, and the shot picture turns out poorly.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for adjusting exposure parameters, an electronic device, and a storage medium, which can solve the problem of poor accuracy of adjusting exposure parameters.
In a first aspect, an embodiment of the present application provides a method for adjusting an exposure parameter, where the method includes:
obtaining semantic features of a first picture, and obtaining brightness features based on exposure parameters, a color histogram and a brightness histogram of the first picture;
performing feature fusion on the semantic features and the brightness features to obtain fusion features;
acquiring the average reflectivity of the first picture based on the fusion features;
and adjusting the exposure parameters of the shooting based on the average reflectivity.
In a second aspect, an embodiment of the present application provides an apparatus for adjusting an exposure parameter, including:
the characteristic acquisition module is used for acquiring semantic characteristics of a first picture and acquiring brightness characteristics based on exposure parameters, a color histogram and a brightness histogram of the first picture;
the feature fusion module is used for performing feature fusion on the semantic features and the brightness features to obtain fusion features;
a reflectivity obtaining module, configured to obtain an average reflectivity of the first picture based on the fusion feature;
and the parameter adjusting module is used for adjusting the shot exposure parameters based on the average reflectivity.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, the semantic features and the brightness features of the first picture are obtained, fusion features are obtained from them, and the average reflectivity of the shooting scene in the first picture is obtained based on the fusion features. Because the influence of brightness on reflectivity acquisition is taken into account, the influence of picture-brightness changes caused by exposure-parameter changes is weakened, and white objects in dim light can be distinguished more accurately from black objects in strong light. This improves the accuracy and stability of the reflectivity result, and in turn the accuracy of the exposure-parameter adjustment and the quality of the shot picture, so that large-area white and large-area black shooting scenes are displayed more accurately.
Drawings
Fig. 1 is a schematic flowchart of an adjusting method of exposure parameters according to an embodiment of the present disclosure;
fig. 2 is a second schematic flowchart of a method for adjusting exposure parameters according to an embodiment of the present disclosure;
fig. 3 is a third schematic flowchart of a method for adjusting exposure parameters according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for adjusting exposure parameters according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be practiced in sequences other than those illustrated or described herein, and that the terms "first," "second," and the like are generally used herein in a generic sense and do not limit the number of terms, e.g., the first term can be one or more than one. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The method, the apparatus, the electronic device, and the storage medium for adjusting the exposure parameter according to the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for adjusting an exposure parameter according to an embodiment of the present disclosure. The following describes an adjustment method of exposure parameters provided in an embodiment of the present application with reference to fig. 1. As shown in fig. 1, the method includes: step 101, step 102, step 103 and step 104.
Alternatively, the adjusting means of the exposure parameters may be implemented in various forms. For example, the adjusting device of the exposure parameter described in the embodiments of the present application may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, a smart band, a smart watch, a digital camera, and the like, and a fixed terminal such as a desktop computer, a television, and the like. In the following, it is assumed that the adjusting apparatus of the exposure parameter is a mobile terminal. However, it will be understood by those skilled in the art that the method according to the embodiments of the present application can also be applied to a fixed type terminal.
Step 101, obtaining semantic features of the first picture, and obtaining brightness features based on exposure parameters, a color histogram and a brightness histogram of the first picture.
Optionally, the semantic feature of the first picture refers to a feature of semantic information of the first picture.
The semantics of a picture can be divided into a visual layer, an object layer, and a concept layer.
The visual layer, commonly understood as the bottom layer, may contain colors, textures, shapes, and the like; these are all referred to as low-level semantic features.
The object layer, i.e., the intermediate layer, typically contains attribute features and the like; an attribute feature is generally the state of an object at a certain moment.
The concept layer is the high level: what the picture represents, which is closest to human understanding.
For example, suppose a picture contains sand, blue sky, and seawater. The visual layer is the division into regions, the object layer identifies the sand, blue sky, and seawater, and the concept layer is "beach" — the semantic meaning the picture expresses.
The semantic features may include low-level semantic features and high-level semantic features.
The low-level semantic features may include features such as contours, edges, colors, textures, and shapes, where the edges and contours reflect the content of image regions.
The high-level semantic features refer to what human eyes actually perceive the picture as showing.
For example, extracting low-level semantic features from a face picture yields the contour, nose, eyes, and so on of the face, while the high-level semantic feature is that the picture shows a face.
Optionally, feature extraction may be performed on the first picture by any method for extracting semantic features, so as to obtain the semantic features of the first picture.
Optionally, semantic features of the first picture may be extracted based on a pre-trained first model. Alternatively, the first model may be a neural network-based model.
Illustratively, the first model may be based on MobileNet or on a deep residual network. The MobileNet series includes MobileNetV1, MobileNetV2, and MobileNetV3; these are lightweight classical neural networks suitable for mobile terminals. A deep residual network (ResNet) model may be used for fixed terminals.
The exposure parameters may include the aperture size, the shutter time, and the ISO sensitivity of the camera. The larger the aperture, the brighter the picture; the longer the shutter time, the brighter the picture; the higher the ISO sensitivity, the brighter the picture; and conversely, the darker the picture.
A picture shot by the electronic device may carry its exposure parameters, so the exposure parameters of the first picture can be obtained from the first picture itself. The exposure parameters of the first picture are those used by the camera when shooting the first picture.
Under the same exposure parameters, a black object is darker, a white object is brighter, and the change of the exposure parameters can cause the brightness change of the picture.
And the color histogram of the first picture is used for reflecting the color distribution of each pixel point in the first picture.
And the brightness histogram of the first picture is used for reflecting the brightness distribution of each pixel point in the first picture.
The brightness of the pixel point can be obtained according to the color of the pixel point.
It is understood that the exposure parameter and the color are both related to the luminance, and therefore, the luminance characteristic of the first picture can be obtained based on the exposure parameter, the color histogram and the luminance histogram of the first picture.
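As a concrete sketch of obtaining these histograms (an assumption, not the patent's exact implementation — the description fixes neither the brightness formula nor the bin count, so Rec. 601 luma and 64 levels are illustrative choices):

```python
import numpy as np

def picture_histograms(img_rgb, levels=64):
    """Per-channel color histograms plus a luminance histogram.

    img_rgb: uint8 array of shape (H, W, 3); `levels` is the number of
    quantization levels (64/128/256 are mentioned later in the text).
    """
    bins = np.linspace(0, 256, levels + 1)
    # One histogram per color component (R, G, B): pixel counts per value bin.
    color_hists = np.stack(
        [np.histogram(img_rgb[..., c], bins=bins)[0] for c in range(3)],
        axis=1)
    # Brightness of each pixel derived from its color (Rec. 601 luma is an
    # assumption; the patent only says brightness is obtained from color).
    luma = (0.299 * img_rgb[..., 0] + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])
    luma_hist = np.histogram(luma, bins=bins)[0]
    return color_hists, luma_hist

img = np.full((8, 8, 3), 200, dtype=np.uint8)   # a flat light-gray patch
color_h, luma_h = picture_histograms(img)
```

Each histogram column sums to the pixel count, so the representation is independent of picture size once normalized.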
Step 102, performing feature fusion on the semantic features and the brightness features to obtain fusion features.
Optionally, feature fusion may be performed on the semantic features of the first picture and the luminance features of the first picture based on any feature fusion method, so as to obtain fusion features of the first picture.
Optionally, under the condition that the semantic feature of the first picture and the luminance feature of the first picture are both vectors, the two vectors may be spliced to obtain the fusion feature of the first picture in the form of a vector.
Illustratively, when the semantic feature of the first picture and the brightness feature of the first picture are an m-dimensional column vector and an n-dimensional column vector (which may be called the semantic feature vector and the brightness feature vector, respectively), the m-dimensional semantic feature vector and the n-dimensional brightness feature vector may be spliced into an (m+n)-dimensional column vector (which may be called the fused feature vector).
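A minimal sketch of this splicing (the dimensions 128 and 32 are hypothetical; the description later works with a 160-dimensional fused vector, which they sum to):

```python
import numpy as np

semantic = np.random.rand(128)    # m-dimensional semantic feature vector
brightness = np.random.rand(32)   # n-dimensional brightness feature vector
# Splice into an (m + n)-dimensional fused feature vector.
fused = np.concatenate([semantic, brightness])
```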
Step 103, acquiring the average reflectivity of the first picture based on the fusion features.
Optionally, a relationship between the fusion feature of the picture and the average reflectivity may be obtained in advance, and based on the fusion feature of the first picture and the relationship, the average reflectivity corresponding to the fusion feature of the first picture may be obtained as the average reflectivity of the first picture.
Alternatively, the average reflectivity of the shot scene in the first picture may be obtained based on a second model trained in advance. Alternatively, the second model may be a neural network-based model.
Optionally, the fusion feature of the first picture may be input into a trained second model, and an average reflectivity of a shooting scene in the first picture output by the second model is obtained.
Optionally, the second model may comprise at least one fully connected layer.
Optionally, the second model may comprise a plurality of fully connected layers.
The multiple fully connected layers are cascaded: each fully connected layer except the first takes the output of the preceding fully connected layer as its input.
The fusion features are input into the first fully connected layer; after being processed by the first fully connected layer, the result is processed by the second fully connected layer, and so on, until the average reflectivity is output by the last fully connected layer.
It should be noted that, for shooting scenes with a large area of white (even pure white) or a large area of black (even pure black), the scene carries little semantic information: when the ambient light is too dark, a white object in the picture appears gray, and when the ambient light is too bright, a black object in the picture appears gray, so it is difficult to determine the original color of the object, and the reflectivity result is wrong. Moreover, the camera adjusts the picture brightness according to the reflectivity returned by the algorithm, and that brightness change in turn affects the stability of the reflectivity result, making the result unstable and destabilizing the brightness of the preview picture.
By the method provided by the embodiment of the application, the brightness characteristic is increased, even if a shooting scene with less semantic information is shot, the color of the object can be judged more accurately, and the situation that the wrong reflectivity is obtained due to the wrong color identification of the object can be avoided.
Step 104, adjusting the exposure parameters for shooting based on the average reflectivity.
Optionally, the exposure parameters of the shooting scene displayed by the first picture shot by the camera can be adjusted based on the average reflectivity of the first picture.
Optionally, exposure compensation may be performed based on a difference between the average reflectivity of the first picture and a preset reflectivity value, and an exposure parameter of a shooting scene displayed by the first picture shot by the camera is adjusted.
The preset value of the reflectivity can be determined according to actual requirements. The embodiment of the present application is not limited to a specific value of the preset value of the reflectivity.
Optionally, an adjustment target for the brightness may be determined based on the difference between the average reflectivity of the first picture and the reflectivity preset value, and the exposure parameters for shooting the scene displayed in the first picture may then be adjusted, so that the brightness of a second picture of the same scene, shot with the adjusted camera, reaches the adjustment target relative to the brightness of the first picture.
According to the embodiments of the application, the semantic features and the brightness features of the first picture are obtained, fusion features are obtained from them, and the average reflectivity of the shooting scene in the first picture is obtained based on the fusion features. Because the influence of brightness on reflectivity acquisition is taken into account, the influence of picture-brightness changes caused by exposure-parameter changes is weakened, and white objects in dim light can be distinguished more accurately from black objects in strong light. This improves the accuracy and stability of the reflectivity result, and in turn the accuracy of the exposure-parameter adjustment and the quality of the shot picture, so that large-area white and large-area black shooting scenes are displayed more accurately.
Optionally, obtaining the brightness feature based on the exposure parameters, the color histogram, and the brightness histogram of the first picture includes: obtaining the color histogram and the brightness histogram.
Optionally, the distribution of each color component over the pixels of the first picture may be counted to obtain the number of pixels at each value of that component, yielding the color histogram of the first picture.
Optionally, the brightness distribution over the pixels of the first picture may be counted to obtain the number of pixels at each brightness value, yielding the brightness histogram of the first picture.
A first feature is obtained based on the color histogram and the luminance histogram, and a weight is obtained based on the exposure parameter.
Optionally, the original luminance feature of the first picture may be extracted as the first feature based on the color histogram of the first picture and the luminance histogram of the first picture.
Optionally, the aforementioned three exposure parameters jointly affect the brightness of the picture, and the same picture brightness can result from different combinations of values of the three parameters.
Optionally, the weight may be obtained as the sum, over the exposure parameters, of each exposure parameter multiplied by its corresponding coefficient.
Alternatively, the coefficient corresponding to the exposure parameter may be preset, or may be obtained in advance in a deep learning manner.
Based on the first feature and the weight, a luminance feature is obtained.
Alternatively, the first feature may be dot-multiplied by the weight to obtain the luminance feature.
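The weighting described above can be sketched as follows (the coefficient values are placeholders — the patent says they are preset or learned, not what they are; the parameter values and the feature size are likewise illustrative):

```python
import numpy as np

def luminance_feature(first_feature, exposure_params, coeffs):
    """Weight = sum over exposure parameters of (parameter * coefficient),
    then dot-multiply (element-wise scale) the first feature by that weight."""
    weight = float(np.dot(exposure_params, coeffs))
    return first_feature * weight

first_feat = np.ones(8)                   # raw histogram feature (toy size)
params = np.array([2.0, 0.01, 400.0])     # aperture, shutter time, ISO
coeffs = np.array([0.1, 10.0, 0.001])     # hypothetical coefficients
lum_feat = luminance_feature(first_feat, params, coeffs)
```

Here the weight evaluates to 0.2 + 0.1 + 0.4 = 0.7, so every component of the first feature is scaled by 0.7.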
According to the embodiment of the application, the first characteristic is obtained based on the color histogram and the brightness histogram, the weight is obtained based on the exposure parameter, the brightness characteristic is obtained based on the first characteristic and the weight, and the brightness characteristic of the first picture can be accurately obtained, so that the more accurate average reflectivity of the shooting scene in the first picture can be obtained based on the brightness characteristic of the first picture.
Optionally, obtaining the average reflectivity of the scene captured in the first picture based on the fusion features includes: inputting the fusion features into the first fully connected layer for processing to obtain a first output.
Optionally, the multiple fully connected layers are two fully connected layers: a first fully connected layer and a second fully connected layer.
The fusion features are input into the first fully connected layer for processing; the first fully connected layer performs a first integration of the fusion features, producing the first output of the first fully connected layer.
Illustratively, assume the fused feature is a 160×1 vector (x1, x2, …, x160). The first fully connected layer holds 1280 weight vectors (w1, w2, …, w160), each of size 160×1, and 1280 biases b, each of size 1×1; passing the fused feature through it yields the 1280-dimensional first output (y1, y2, …, y1280), where each y = w1·x1 + w2·x2 + w3·x3 + … + w160·x160 + b.
And inputting the first output into the second full-connection layer for processing to obtain the average reflectivity.
Optionally, the first output is input into the second fully connected layer for processing; the second fully connected layer performs a second integration of the once-integrated fusion features, yielding the average reflectivity of the first picture as the output of the second fully connected layer.
Illustratively, the aforementioned first output (y1, y2, …, y1280) passes through the second fully connected layer, which has a 1280×1 weight vector (w1, w2, …, w1280) and a bias parameter b, yielding the final reflectivity r = w1·y1 + w2·y2 + … + w1280·y1280 + b.
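The two-layer computation can be sketched with plain matrix products (random weights stand in for trained ones; whether the layers use activations or output clamping is not specified in the description, so none are applied here):

```python
import numpy as np

rng = np.random.default_rng(0)
fused = rng.random(160)            # 160-dimensional fused feature vector

# First fully connected layer: 1280 weight vectors of size 160 and 1280 biases.
W1 = rng.normal(0.0, 0.01, (1280, 160))
b1 = np.zeros(1280)
first_output = W1 @ fused + b1     # the first output (y1, ..., y1280)

# Second fully connected layer: one 1280-dimensional weight vector and a bias.
w2 = rng.normal(0.0, 0.01, 1280)
b2 = 0.18                          # illustrative bias near the 18% gray preset
reflectivity = float(w2 @ first_output + b2)
```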
According to the embodiment of the application, the fusion features are processed through the two full-connection layers to obtain the average reflectivity, compared with the situation that the fusion features are processed through only one full-connection layer, the non-linear problem can be solved better, the relation between the fusion features and the average reflectivity of the picture can be fitted more accurately, and therefore the more accurate average reflectivity can be obtained.
Optionally, the obtaining the first feature based on the color histogram and the luminance histogram includes: and splicing the color histogram and the brightness histogram to obtain the color brightness histogram.
Alternatively, the color histogram and the luminance histogram have the same dimension, and the color histogram and the luminance histogram may be spliced into a new histogram, which is called a color-luminance histogram.
Alternatively, the size of the color luminance histogram may be N1 × N2.
N1 is the number of quantization levels of the color components and the luminance, corresponding to their value ranges. Illustratively, N1 may be 64, 128, or 256, representing 64, 128, or 256 levels of the color components and the luminance, whose ranges are then 0 to 63, 0 to 127, and 0 to 255, respectively.
N2 is the number of color components plus 1, where the extra 1 accounts for the luminance. N2 may be 4 or 5, for example.
Optionally, the color components may be R/G/B, in which case N2 equals 4; or the color components may be C/M/Y/K, in which case N2 equals 5.
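A sketch of the splicing for R/G/B color components, following the "number of color components plus 1" rule (three color histograms plus one luminance histogram; the histogram contents here are dummies and N1 = 64 is one of the illustrative level counts):

```python
import numpy as np

N1 = 64                                          # quantization levels
color_hist = np.zeros((N1, 3), dtype=np.int64)   # R, G, B columns
luma_hist = np.zeros((N1, 1), dtype=np.int64)    # luminance column
# Splice along the component axis into an N1 x N2 color-luminance histogram.
color_luma_hist = np.concatenate([color_hist, luma_hist], axis=1)
```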
And performing feature extraction on the color brightness histogram to obtain a first feature.
Specifically, a feature extraction method similar to the semantic feature extraction may be adopted to perform feature extraction on the color-luminance histogram to obtain the first feature.
Alternatively, the first feature may be extracted based on a previously trained third model. Alternatively, the third model may be a neural network-based model.
Illustratively, the third model may be a model based on MobileNet or a deep residual network, etc.
According to the embodiment of the application, the color histogram and the brightness histogram are spliced to obtain the color brightness histogram, the color brightness histogram is subjected to feature extraction to obtain the first feature, and the more accurate first feature can be obtained, so that the brightness feature of the first picture can be more accurately obtained based on the first feature, and further the more accurate average reflectivity of a scene shot in the first picture can be obtained based on the brightness feature of the first picture.
Optionally, adjusting the exposure parameters of the shot based on the average reflectivity comprises: and adjusting the exposure parameters of the shooting based on the proportional relation between the average reflectivity and the preset reflectivity value.
Optionally, a proportional relation between the average reflectivity and a reflectivity preset value may be obtained, so as to determine a proportion of brightness to be adjusted, perform exposure compensation according to the proportion, and adjust an exposure parameter of a shooting scene displayed by shooting the first picture by the camera.
The preset value of the reflectivity can be determined according to actual requirements. The embodiment of the present application is not limited to a specific value of the preset value of the reflectivity.
Alternatively, the preset value of the reflectivity may be generally 18%, but is not limited thereto.
Exemplarily, the reflectivity preset value is 18%. When the obtained average reflectivity is 30%, 30%/18% ≈ 166%, so the proportion of brightness to be adjusted is 166% − 100% = 66%; that is, the brightness of the first picture needs to be raised by 66% through exposure compensation, and the exposure parameter used by the camera for the shooting scene shown in the first picture is adjusted with a 66% brightness increase as the target. When the obtained average reflectivity is 9%, 9%/18% = 50%, so the proportion of brightness to be adjusted is 50% − 100% = −50%; that is, the brightness of the first picture needs to be reduced by 50% through exposure compensation, and the exposure parameter is adjusted with a 50% brightness reduction as the target.
According to the embodiment of the application, exposure compensation is carried out on the first picture based on the proportional relation between the average reflectivity and the reflectivity preset value, and the picture with the brightness closer to reality and perceived by human eyes can be obtained.
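For illustration, the proportional adjustment described above can be sketched in a few lines of Python; the function name and the signed-fraction return convention are assumptions for this sketch, not part of the disclosure:

```python
def exposure_adjustment_ratio(avg_reflectivity, preset=0.18):
    """Signed fraction by which the picture brightness should change so
    the scene renders as if it had the preset 18% reflectivity: positive
    means brighten via exposure compensation, negative means darken."""
    return avg_reflectivity / preset - 1.0

# Worked examples matching the text: 30% average reflectivity calls for
# roughly a 66% brightness increase, 9% for a 50% decrease.
ratio_bright = exposure_adjustment_ratio(0.30)  # ~ +0.666
ratio_dark = exposure_adjustment_ratio(0.09)    # ~ -0.5
```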
Fig. 2 is a second flowchart illustrating an adjusting method of exposure parameters according to an embodiment of the present disclosure. Fig. 2 shows a flow of inputting a first picture with a height H and a width W, and acquiring an average reflectivity of a shooting scene in the picture by using the method provided by the embodiment of the application.
Fig. 3 is a third schematic flowchart of a method for adjusting exposure parameters according to an embodiment of the present disclosure. A complete implementation of the method for adjusting exposure parameters can be shown in fig. 3. As shown in fig. 3, a complete implementation of the method for adjusting exposure parameters may include the following steps:
and 301, collecting and labeling pictures.
During picture collection, automatic exposure is used in the camera's professional mode to simultaneously store DNG (raw) and JPG pictures. Each group of pictures is shot twice: the first time with a gray card placed on the surface of the object, and the second time with the gray card removed.
The object reflectivity is calculated using the formula lux × r × K = brightness, where lux is the incident illumination, r is the object reflectivity, K is a camera-related constant, and brightness is the image brightness.
Taking the gray-card picture and the average brightness of the gray-card region, with the gray-card reflectivity known to be 18%, calculate lux × K = brightness / 18%. Then, taking the picture without the gray card and the average brightness of the object region, the object reflectivity follows from the lux × K obtained above: r = brightness / (lux × K).
And training can be carried out based on the pictures and the calculated reflectivity, so that a trained model is obtained.
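Under the stated model lux × r × K = brightness, the two-shot labeling of step 301 can be sketched as follows; the brightness numbers and function names are illustrative assumptions:

```python
def lux_times_k(gray_card_brightness, gray_card_reflectivity=0.18):
    """First shot: the gray card's known 18% reflectivity pins down the
    product lux * K for the scene, since lux * K = brightness / r."""
    return gray_card_brightness / gray_card_reflectivity

def object_reflectivity(object_brightness, lux_k):
    """Second shot (gray card removed): invert lux * r * K = brightness
    to label the object with r = brightness / (lux * K)."""
    return object_brightness / lux_k

# Illustrative numbers: gray-card brightness 90 and object brightness 150
# under the same illumination give lux*K = 500 and r = 0.30.
lk = lux_times_k(90.0)
r = object_reflectivity(150.0, lk)
```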
Step 302, reading the picture and the exposure parameter of the picture.
The input is a picture and exposure parameters of the picture.
And step 303, extracting semantic features by utilizing models such as MobileNet V3.
And extracting deep features of the picture by taking models such as MobileNet V3 and the like as a backbone to obtain a semantic feature vector.
Step 304, calculating a color brightness histogram and reading exposure parameters of the picture.
The distribution of values in the three R/G/B channels and the distribution of luminance values (computed from the three R/G/B values) of each pixel in the picture are counted to obtain a color-luminance distribution histogram (256 × 4), which is equivalent to a one-dimensional picture.
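The 256 × 4 statistic can be sketched with NumPy as below. The Rec.601 luminance weights are an assumed choice, since the disclosure only says the luminance is computed from the three R/G/B values:

```python
import numpy as np

def color_luminance_histogram(img):
    """Count the R/G/B value distributions plus a luminance distribution
    for a uint8 RGB image of shape (H, W, 3), giving a 256 x 4 histogram."""
    hist = np.zeros((256, 4), dtype=np.int64)
    for c in range(3):  # one histogram column per color channel
        hist[:, c] = np.bincount(img[..., c].ravel(), minlength=256)
    # Luminance from R/G/B (Rec.601 weights, an assumed formula)
    lum = (0.299 * img[..., 0] + 0.587 * img[..., 1]
           + 0.114 * img[..., 2]).astype(np.uint8)
    hist[:, 3] = np.bincount(lum.ravel(), minlength=256)
    return hist
```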
Step 305, normalizing the color brightness histogram, and extracting the original brightness features through convolution pooling.
The color luminance distribution histogram is subjected to extraction of original luminance features in a manner similar to step 303.
Step 306, multiplying the exposure parameters by a 3 × 1 weight matrix to generate the weight of the exposure parameters.
The exposure parameter information of the picture (3 × 1) is passed through a layer of learnable weight matrix (3 × 1), generating the weight of the exposure parameters (1 × 1).
For example, if the exposure parameter information is (x1, x2, x3) and the weight matrix is (w1, w2, w3), the weight of the exposure parameters is w = w1 × x1 + w2 × x2 + w3 × x3.
And 307, multiplying the original brightness characteristic by the weight to obtain the brightness characteristic.
The original brightness feature obtained in step 305 is multiplied by the weight of the exposure parameters obtained in step 306, yielding the restored real brightness feature information, namely the brightness feature.
For example, if the extracted original image luminance features form an 80 × 1 feature vector (x1, x2, x3, …, x80), the 80 values are each multiplied by the weight w of the exposure parameters obtained in step 306, giving a new 80 × 1 feature vector (w × x1, w × x2, w × x3, …, w × x80), that is, the luminance features with the influence of the exposure parameters removed.
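Steps 306 and 307 together amount to a dot product followed by a scalar scaling; the dimensions (3 exposure parameters, an 80-dimensional raw luminance feature) follow the text, while the concrete values below are illustrative:

```python
import numpy as np

def exposure_weight(exposure_params, weight_matrix):
    """Step 306: project the 3x1 exposure parameters through the learned
    3x1 weight matrix to a scalar, w = w1*x1 + w2*x2 + w3*x3."""
    return float(np.dot(weight_matrix, exposure_params))

def restore_luminance(raw_features, w):
    """Step 307: scale each of the 80 raw luminance values by w to get
    the luminance feature with the exposure influence removed."""
    return w * np.asarray(raw_features, dtype=np.float64)

w = exposure_weight([1.0, 2.0, 4.0], [0.5, 0.25, 0.125])  # 0.5 + 0.5 + 0.5
luma = restore_luminance(np.ones(80), w)                  # 80 values, each w
```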
And 308, splicing and fusing the brightness characteristic and the semantic characteristic to obtain a fusion characteristic.
The restored brightness features obtained in step 307 are spliced with the semantic features obtained in step 303 and fused to obtain fusion features containing both the image semantic information and the real brightness information.
For example, assuming the luminance feature obtained in step 307 is 80 × 1 dimensional (x1, x2, x3, …, x80) and the semantic feature obtained in step 303 is also 80 × 1 dimensional (y1, y2, y3, …, y80), the two feature vectors are directly spliced and fused into a new 160 × 1 dimensional feature (x1, x2, x3, …, x80, y1, y2, …, y80).
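The splice of step 308 is a plain concatenation; the contents of the two vectors below are placeholder values:

```python
import numpy as np

luma = np.arange(80, dtype=np.float64)          # brightness feature (step 307)
sem = 100.0 + np.arange(80, dtype=np.float64)   # semantic feature (step 303)
fused = np.concatenate([luma, sem])             # 160-dim fusion feature
```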
Step 309, passing the fusion feature through two fully connected layers to regress the average reflectivity.
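Step 309 can be sketched as a tiny two-layer perceptron over the 160-dimensional fusion feature. The hidden width, activations, and random weights are assumptions for illustration; the disclosure only specifies two fully connected layers ending in a scalar:

```python
import numpy as np

def regress_reflectivity(fused, w1, b1, w2, b2):
    """Two fully connected layers, 160 -> 32 -> 1, with a ReLU between
    them and a sigmoid so the regressed value is a valid reflectivity
    in (0, 1). Widths and activations are illustrative assumptions."""
    h = np.maximum(0.0, w1 @ fused + b1)  # first fully connected layer + ReLU
    y = (w2 @ h + b2)[0]                  # second fully connected layer -> scalar
    return 1.0 / (1.0 + np.exp(-y))       # squash into (0, 1)

rng = np.random.default_rng(0)
fused = rng.standard_normal(160)                      # fusion feature (step 308)
w1, b1 = 0.1 * rng.standard_normal((32, 160)), np.zeros(32)
w2, b2 = 0.1 * rng.standard_normal((1, 32)), np.zeros(1)
avg_reflectivity = regress_reflectivity(fused, w1, b1, w2, b2)
```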
And step 310, performing exposure compensation according to the average reflectivity.
And adjusting exposure parameters shot by the camera according to the average reflectivity to perform exposure compensation.
In the method for adjusting exposure parameters provided in the embodiment of the present application, the execution subject may be an exposure parameter adjusting apparatus. In the embodiment of the present application, the exposure parameter adjusting apparatus executing the method for adjusting exposure parameters is taken as an example to describe the exposure parameter adjusting apparatus provided in the embodiment of the present application.
Fig. 4 is a schematic structural diagram of an apparatus for adjusting exposure parameters according to an embodiment of the present disclosure. Optionally, as shown in fig. 4, the apparatus includes a feature obtaining module 401, a feature fusing module 402, a reflectivity obtaining module 403, and a parameter adjusting module 404, where:
a feature obtaining module 401, configured to obtain semantic features of the first picture, and obtain luminance features based on exposure parameters, a color histogram, and a luminance histogram of the first picture;
a feature fusion module 402, configured to perform feature fusion on the semantic features and the luminance features to obtain fusion features;
a reflectivity obtaining module 403, configured to obtain an average reflectivity of a shooting scene in the first picture based on the fusion feature;
and a parameter adjusting module 404, configured to adjust the exposure parameter of the shot based on the average reflectivity.
Alternatively, the feature acquisition module 401, the feature fusion module 402, the reflectivity acquisition module 403, and the parameter adjustment module 404 may be electrically connected.
The feature obtaining module 401 may perform feature extraction on the first picture by any method for extracting semantic features, so as to obtain the semantic features of the first picture.
The feature obtaining module 401 may obtain the luminance feature of the first picture based on the exposure parameter, the color histogram, and the luminance histogram of the first picture.
The feature fusion module 402 may perform feature fusion on the semantic features of the first picture and the luminance features of the first picture based on any feature fusion method to obtain fusion features of the first picture.
The reflectivity obtaining module 403 may obtain an average reflectivity corresponding to the fusion feature of the first picture as the average reflectivity of the first picture based on the fusion feature of the first picture and a relationship between the fusion feature of the pre-obtained picture and the average reflectivity.
The parameter adjustment module 404 may adjust the exposure parameter used by the camera for the shooting scene shown in the first picture, based on the average reflectivity of the first picture.
Optionally, the feature obtaining module 401 may include a first obtaining sub-module; the first obtaining sub-module may include:
a histogram acquisition unit configured to acquire a color histogram and a luminance histogram;
a first feature acquisition unit configured to acquire a first feature based on the color histogram and the luminance histogram;
a weight acquisition unit configured to acquire a weight based on the exposure parameter;
and a second feature acquisition unit configured to acquire the luminance feature based on the first feature and the weight.
Optionally, the reflectivity obtaining module 403 may be specifically configured to input the fusion feature into the first full connection layer for processing, and obtain a first output; and inputting the first output into the second full-connection layer for processing to obtain the average reflectivity.
Optionally, the first feature obtaining unit may be specifically configured to splice the color histogram and the luminance histogram to obtain a color luminance histogram; and performing feature extraction on the color brightness histogram to obtain a first feature.
Optionally, the parameter adjusting module 404 may be specifically configured to adjust the exposure parameter of the shot based on a proportional relationship between the average reflectivity and a reflectivity preset value.
According to the embodiment of the application, the semantic features and the brightness features of the first picture are obtained, the fusion features are obtained based on the semantic features and the brightness features, the average reflectivity of a shooting scene in the first picture is obtained based on the fusion features, the influence of brightness on the obtaining of the reflectivity is considered, the influence of the brightness change of the picture caused by the change of exposure parameters on the obtaining of the reflectivity is weakened, white objects under dark light and black objects under strong light can be distinguished more accurately, the accuracy and the stability of the reflectivity obtaining result can be improved, the accuracy of the adjustment of the exposure parameters can be improved, the effect of the shot picture is improved, and the shot picture can display a large-area white shooting scene and a large-area black shooting scene more accurately.
The adjusting device of the exposure parameter in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a Television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not limited in particular.
The adjusting device of the exposure parameter in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The exposure parameter adjusting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 3, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device 500 is further provided in an embodiment of the present application, and includes a processor 501 and a memory 502, where the memory 502 stores a program or an instruction that can be executed on the processor 501, and when the program or the instruction is executed by the processor 501, the steps of the embodiment of the method for adjusting an exposure parameter are implemented, and the same technical effects can be achieved, and are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different arrangement of components, which will not be repeated here.
The processor 610 may be configured to obtain a semantic feature of the first picture, and obtain a luminance feature based on an exposure parameter, a color histogram, and a luminance histogram of the first picture;
the processor 610 may further be configured to perform feature fusion on the semantic features and the luminance features to obtain fusion features;
the processor 610 may be further configured to obtain an average reflectivity of the first picture based on the fusion feature;
the processor 610 may also be configured to adjust exposure parameters for the shot based on the average reflectivity.
According to the embodiment of the application, the semantic features and the brightness features of the first picture are obtained, the fusion features are obtained based on the semantic features and the brightness features, the average reflectivity of a shooting scene in the first picture is obtained based on the fusion features, the influence of brightness on the obtaining of the reflectivity is considered, the influence of the brightness change of the picture caused by the change of exposure parameters on the obtaining of the reflectivity is weakened, white objects under dark light and black objects under strong light can be distinguished more accurately, the accuracy and the stability of the reflectivity obtaining result can be improved, the accuracy of the adjustment of the exposure parameters can be improved, the effect of the shot picture is improved, and the shot picture can display a large-area white shooting scene and a large-area black shooting scene more accurately.
Optionally, the processor 610 may be further configured to obtain a color histogram and a luminance histogram;
the processor 610 may be further configured to obtain a first feature based on the color histogram and the luminance histogram;
a processor 610, which may be further configured to obtain a weight based on the exposure parameter;
the processor 610 may be further configured to obtain a luminance characteristic based on the first characteristic and the weight.
Optionally, the processor 610 may be further configured to input the fusion feature into the first full connection layer for processing, and obtain a first output; and inputting the first output into the second full-connection layer for processing to obtain the average reflectivity.
Optionally, the processor 610 may be further configured to splice the color histogram and the luminance histogram to obtain a color luminance histogram; and performing feature extraction on the color brightness histogram to obtain a first feature.
Optionally, the processor 610 may be further configured to adjust the exposure parameter for the shot based on a proportional relationship between the average reflectivity and a reflectivity preset value.
It is to be understood that, in the embodiment of the present application, the input Unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042, and the Graphics Processing Unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. A touch panel 6071, also referred to as a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, wherein the first storage area may store an operating system, and an application program or an instruction (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 609 may include volatile memory or nonvolatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing method for adjusting an exposure parameter, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing method for adjusting an exposure parameter, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing embodiment of the method for adjusting an exposure parameter, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method for adjusting exposure parameters, comprising:
obtaining semantic features of a first picture, and obtaining brightness features based on exposure parameters, a color histogram and a brightness histogram of the first picture;
performing feature fusion on the semantic features and the brightness features to obtain fusion features;
acquiring the average reflectivity of the first picture based on the fusion characteristics;
and adjusting the shot exposure parameters based on the average reflectivity.
2. The method according to claim 1, wherein the obtaining a luminance characteristic based on the exposure parameter, the color histogram, and the luminance histogram of the first picture comprises:
acquiring the color histogram and the brightness histogram;
acquiring a first feature based on the color histogram and the brightness histogram, and acquiring a weight based on the exposure parameter;
and acquiring the brightness characteristic based on the first characteristic and the weight.
3. The method for adjusting exposure parameters according to claim 1, wherein the obtaining the average reflectivity of the first picture based on the fused feature comprises:
inputting the fusion characteristics into a first full-connection layer for processing to obtain a first output;
and inputting the first output into a second full-connection layer for processing to obtain the average reflectivity.
4. The method according to claim 2, wherein the obtaining a first feature based on the color histogram and the luminance histogram includes:
splicing the color histogram and the brightness histogram to obtain a color brightness histogram;
and extracting the features of the color brightness histogram to obtain the first features.
5. The method according to any one of claims 1 to 4, wherein the adjusting the shot exposure parameters based on the average reflectivity comprises:
and adjusting the shot exposure parameters based on the proportional relation between the average reflectivity and the reflectivity preset value.
6. An apparatus for adjusting exposure parameters, comprising:
the characteristic acquisition module is used for acquiring semantic characteristics of a first picture and acquiring brightness characteristics based on exposure parameters, a color histogram and a brightness histogram of the first picture;
the feature fusion module is used for performing feature fusion on the semantic features and the brightness features to obtain fusion features;
a reflectivity obtaining module, configured to obtain an average reflectivity of the first picture based on the fusion feature;
and the parameter adjusting module is used for adjusting the shot exposure parameters based on the average reflectivity.
7. The apparatus for adjusting exposure parameters according to claim 6, wherein the feature obtaining module comprises a first obtaining sub-module; the first obtaining sub-module includes:
a histogram acquisition unit configured to acquire the color histogram and the luminance histogram;
a first feature acquisition unit configured to acquire a first feature based on the color histogram and the luminance histogram;
a weight obtaining unit configured to obtain a weight based on the exposure parameter;
a second feature acquisition unit configured to acquire the luminance feature based on the first feature and the weight.
8. The apparatus for adjusting exposure parameters according to claim 6, wherein the reflectivity obtaining module is specifically configured to input the fusion feature into a first fully-connected layer for processing, so as to obtain a first output; and inputting the first output into a second full-connection layer for processing to obtain the average reflectivity.
9. The apparatus for adjusting exposure parameters according to claim 7, wherein the first feature obtaining unit is specifically configured to splice the color histogram and the luminance histogram to obtain a color luminance histogram; and performing feature extraction on the color brightness histogram to obtain the first feature.
10. The apparatus according to any one of claims 6 to 9, wherein the parameter adjusting module is specifically configured to adjust the shot exposure parameters based on a proportional relationship between the average reflectivity and a preset reflectivity value.
11. An electronic device characterized by comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the adjustment method of exposure parameters according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that a program or instructions are stored thereon, which when executed by a processor, implement the adjustment method of exposure parameters according to any one of claims 1 to 5.
CN202211130692.2A 2022-09-16 2022-09-16 Exposure parameter adjusting method and device, electronic equipment and storage medium Pending CN115580781A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211130692.2A CN115580781A (en) 2022-09-16 2022-09-16 Exposure parameter adjusting method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115580781A true CN115580781A (en) 2023-01-06


CN110266939B (en) Display method, electronic device, and storage medium
CN114125302A (en) Image adjusting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination