CN116309172A - Image enhancement method and device and electronic equipment - Google Patents

Image enhancement method and device and electronic equipment

Info

Publication number
CN116309172A
Authority
CN
China
Prior art keywords
image
enhanced
output
unnatural
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310298922.4A
Other languages
Chinese (zh)
Inventor
顾博文
查林
姜建德
雒旭鹏
路文
高新波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Xinxin Microelectronics Technology Co Ltd
Original Assignee
Qingdao Xinxin Microelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Xinxin Microelectronics Technology Co Ltd
Priority to CN202310298922.4A
Publication of CN116309172A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image enhancement method, an image enhancement device and electronic equipment, used to solve the problem of low image quality in the prior art, such as color distortion, contrast distortion and loss of detail. For each pixel in the enhanced image, a first pixel value of the pixel in the enhanced image and a second pixel value of the corresponding pixel in the image to be enhanced are acquired; the probabilities that the image to be enhanced is an unnatural image and a natural image are used as the weights of the second pixel value and the first pixel value respectively, and the target pixel value of the corresponding pixel in the fused target image is determined. Because the image to be enhanced and the enhanced image are fused according to the probabilities that the image to be enhanced is an unnatural image and a natural image, and the fused target image is generated from them, the risks of color distortion, contrast distortion, loss of detail and the like caused by directly enhancing the image are reduced, and image quality is improved.

Description

Image enhancement method and device and electronic equipment
Technical Field
The present disclosure relates to computer vision and image processing technology, and in particular to an image enhancement method, an image enhancement device, and electronic equipment.
Background
In recent years, as the display brightness and color gamut of display devices have increased, so has the demand for high-contrast, wide-color-gamut images. Owing to the shooting scene, the shooting equipment and similar factors, the position of the light source and the illumination conditions are often uncontrollable; details are hardly visible in underexposed or overexposed regions of an image, and many images currently obtained by shooting or recording suffer from problems such as overexposure, underexposure, low saturation and low contrast, resulting in a poor visual experience.
To let people experience high-quality pictures on devices with strong display capability, without re-shooting and re-producing video resources on higher-end equipment, the related art generally applies an image enhancement method to the image to enhance contrast and color. Such a method, however, enhances every input image. Unnatural images are generally produced by their authors according to the authors' own intent, unconstrained by shooting equipment, conditions and the like, so their contrast and color are already appropriate; enhancing unnatural images indiscriminately can therefore cause problems such as color distortion, contrast distortion and loss of detail.
Disclosure of Invention
The embodiment of the application provides an image enhancement method, an image enhancement device and electronic equipment, which are used for solving the problem of low image quality such as color distortion, contrast distortion, detail loss and the like of an image in the prior art.
In a first aspect, an embodiment of the present application provides an image enhancement method, including:
acquiring an image to be enhanced, inputting the image to be enhanced into a pre-trained unnatural image recognition model, and acquiring the probabilities, output by the unnatural image recognition model, that the image to be enhanced is an unnatural image and a natural image; and acquiring an enhanced image corresponding to the image to be enhanced;
for each pixel in the enhanced image, acquiring a first pixel value of the pixel in the enhanced image and a second pixel value of the corresponding pixel in the image to be enhanced, taking the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second pixel value and the first pixel value respectively, and determining a target pixel value of the corresponding pixel in the fused target image.
In a second aspect, embodiments of the present application further provide an image enhancement apparatus, including:
a receiving and acquiring module, configured to acquire an image to be enhanced, input the image to be enhanced into a pre-trained unnatural image recognition model, and acquire the probabilities, output by the unnatural image recognition model, that the image to be enhanced is an unnatural image and a natural image; and acquire an enhanced image corresponding to the image to be enhanced;
a processing module, configured to, for each pixel in the enhanced image, acquire a first pixel value of the pixel in the enhanced image and a second pixel value of the corresponding pixel in the image to be enhanced, take the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second pixel value and the first pixel value respectively, and determine a target pixel value of the corresponding pixel in the fused target image.
In a third aspect, embodiments of the present application further provide an electronic device, including:
a processor and a memory;
the memory is configured to store the processor-executable instructions;
the processor is configured to execute the instructions to implement the image enhancement method described in the first aspect above.
In the embodiment of the application, an image to be enhanced is acquired and input into a pre-trained unnatural image recognition model, and the probabilities, output by the model, that the image to be enhanced is an unnatural image and a natural image are acquired; an enhanced image corresponding to the image to be enhanced is acquired; for each pixel in the enhanced image, a first pixel value of the pixel in the enhanced image and a second pixel value of the corresponding pixel in the image to be enhanced are acquired, the probabilities that the image to be enhanced is an unnatural image and a natural image are taken as the weights of the second pixel value and the first pixel value respectively, and the target pixel value of the corresponding pixel in the fused target image is determined. Because the image to be enhanced and the enhanced image are fused according to these probabilities to generate the fused target image, the risks of color distortion, contrast distortion, loss of detail and the like caused by directly enhancing the image are reduced, and image quality is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a VR device provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a vehicle-mounted device provided in an embodiment of the present application;
fig. 3 is a schematic view of a television according to an embodiment of the present application;
fig. 4 is a schematic diagram of an image enhancement process according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an unnatural image recognition model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a training process of a content enhancement model according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a content enhancement model according to an embodiment of the present disclosure;
FIG. 8 is a schematic block diagram of the inside of a content enhancement model according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a feature extraction network according to an embodiment of the present application;
FIG. 10 is a detailed schematic diagram of an image enhancement process according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a training process for an unnatural image recognition model according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a training process for a perception enhancement model according to an embodiment of the present disclosure;
FIG. 13 is a schematic structural diagram of a perception enhancement model according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an image enhancement device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
In recent years, as the display brightness and color gamut of display devices have increased, so has the demand for high-contrast, wide-color-gamut images. Owing to the shooting scene, the shooting equipment and similar factors, the position of the light source and the illumination conditions are often uncontrollable; details are hardly visible in underexposed or overexposed regions of an image, and many images currently obtained by shooting or recording suffer from problems such as overexposure, underexposure, low saturation and low contrast, resulting in a poor visual experience.
To let people experience high-quality pictures on devices with strong display capability, without re-shooting and re-producing video resources on higher-end equipment, the related art generally applies an image enhancement method to the image to enhance contrast and color. Such a method, however, enhances every input image. Unnatural images are generally produced by their authors according to the authors' own intent, unconstrained by shooting equipment, conditions and the like, so their contrast and color are already appropriate; enhancing unnatural images indiscriminately can therefore cause problems such as color distortion, contrast distortion and loss of detail.
To improve image quality, an embodiment of the application provides an image enhancement method, an image enhancement device and electronic equipment. The image enhancement method is mainly applied to scenes involving image display, and the electronic device may be a smart device such as a television, a vehicle-mounted device, a VR device, a mobile phone, a tablet computer or a server; a VR device is shown in Fig. 1, a vehicle-mounted device in Fig. 2 and a television in Fig. 3. If the electronic device is a smart device with a display, such as a television, a vehicle-mounted device, a VR device, a mobile phone or a tablet computer, it can control its own display to show the enhanced image. If the electronic device is a server or another smart device without a display, it can control a connected display to show the enhanced image, or send the enhanced image to a device with a display function, which then displays the target image.
Fig. 4 is a schematic diagram of an image enhancement process according to an embodiment of the present application, where the process includes the following steps:
S401: acquiring an image to be enhanced, inputting the image to be enhanced into a pre-trained unnatural image recognition model, and acquiring the probabilities, output by the unnatural image recognition model, that the image to be enhanced is an unnatural image and a natural image; and acquiring an enhanced image corresponding to the image to be enhanced.
The image enhancement method provided by the embodiment of the application is applied to electronic equipment, and the electronic equipment can be intelligent equipment such as televisions, vehicle-mounted equipment, VR equipment, mobile phones, tablet computers or servers.
In this embodiment, a television is taken as the example electronic device. To improve image quality, the television may acquire an image to be enhanced; the image to be enhanced may be acquired by an acquisition unit of the television, and it may be a natural image or an unnatural image.
To improve image quality, the television locally stores a pre-trained unnatural image recognition model. After receiving the image to be enhanced, the television can input it into the pre-trained unnatural image recognition model and acquire the model's output, namely the probability that the image to be enhanced is an unnatural image and the probability that it is a natural image.
The unnatural image recognition model described in the embodiment of the application effectively screens the input image. Because natural and unnatural images differ markedly in texture, edges and the like, the unnatural image recognition model can accurately determine the probabilities that the image to be enhanced is an unnatural image and a natural image.
The television can also acquire an enhanced image corresponding to the image to be enhanced. Specifically, the image to be enhanced can be input into a pre-trained image enhancement model, and the output of the image enhancement model, namely the enhanced image corresponding to the image to be enhanced, is acquired. How to obtain the enhanced image of a given image is known in the art and is not described here again.
Fig. 5 is a schematic structural diagram of an unnatural image recognition model according to an embodiment of the present application.
As can be seen from Fig. 5, the unnatural image recognition model is composed of 3×3 convolution layers (Conv), an adaptive max pooling layer (Adaptive Max Pooling), a max pooling layer (Max Pooling) and an activation function; the activation function used in the unnatural image recognition model is Sigmoid. Specifically, the structure of the unnatural image recognition model is, in order: 3×3 Conv, Adaptive Max Pooling, 3×3 Conv, Max Pooling, 3×3 Conv and Sigmoid, where the numbers 2, 16, 32 and 64 appearing in Fig. 5 are channel counts.
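A minimal PyTorch sketch of this structure follows. The input channel count of 16 (matching the feature extraction network of Fig. 9), the pooling sizes and the final spatial pooling are assumptions, since Fig. 5 names only the layer types and the channel numbers 2, 16, 32 and 64:

```python
# Sketch of the unnatural image recognition model of Fig. 5. Input channels
# (16), pooling sizes and the final spatial average are assumptions.
import torch
import torch.nn as nn

class UnnaturalImageRecognizer(nn.Module):
    def __init__(self, in_channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.AdaptiveMaxPool2d(8),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 2, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.net(x).mean(dim=(2, 3))   # (B, 2)
        return torch.sigmoid(scores)            # [p_unnatural, p_natural]
```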
S402: for each pixel in the enhanced image, acquiring a first pixel value of the pixel in the enhanced image and a second pixel value of the corresponding pixel in the image to be enhanced, taking the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second pixel value and the first pixel value respectively, and determining a target pixel value of the corresponding pixel in the fused target image.
To improve image quality, the television may fuse the image to be enhanced with the enhanced image. Specifically, for each pixel in the enhanced image, a first pixel value of the pixel in the enhanced image can be acquired, along with a second pixel value of the corresponding pixel in the image to be enhanced. Since the enhanced image is obtained from the image to be enhanced, the two images have the same size, so every pixel in the enhanced image has a same-position pixel in the image to be enhanced, and that same-position pixel is its corresponding pixel. The television can then take the probability that the image to be enhanced is an unnatural image as the weight of the second pixel value, and the probability that it is a natural image as the weight of the first pixel value, and thereby determine the target pixel value of the corresponding pixel in the fused target image. In this way the target pixel value of every pixel in the target image can be determined, so the fused target image is determined accurately and effectively.
After the television acquires the target image, the television can control the display of the television to display the target image, and can also control other connected displays to display the target image.
That is, if the probability that the image to be enhanced is an unnatural image is 1, the target pixel value of each pixel in the generated target image equals the pixel value of the corresponding pixel in the image to be enhanced, so the generated target image is identical to the image to be enhanced. When the image to be enhanced is an unnatural image, no enhancement is applied, which reduces the risks of color distortion, contrast distortion, loss of detail and the like caused by enhancing an unnatural image, and protects the detail characteristics of the image to be enhanced.
In the embodiment of the application, the electronic device acquires the probabilities that the image to be enhanced is a natural image and an unnatural image, acquires the enhanced image corresponding to the image to be enhanced, and, for each pixel in the enhanced image, acquires the first pixel value of the pixel and the second pixel value of the corresponding pixel in the image to be enhanced; taking the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second and first pixel values respectively, it determines the target pixel value of the corresponding pixel in the fused target image. Because the electronic device fuses the image to be enhanced and the enhanced image according to these probabilities to generate the fused target image, the risks of color distortion, contrast distortion, loss of detail and the like caused by directly enhancing the image are reduced, and image quality is improved.
In order to accurately acquire the enhanced image, on the basis of the foregoing embodiment, in an embodiment of the present application, the acquiring the enhanced image corresponding to the image to be enhanced includes:
inputting the image to be enhanced into a content enhancement model, and acquiring the contrast- and color-enhanced image to be processed output by the content enhancement model;
inputting the image to be processed into a perception enhancement model, and acquiring the enhanced image output by the perception enhancement model, where the enhanced image is an image in which perceptual information related to the acquisition scene of the image to be enhanced has been enhanced.
In a practical application scene, an image enhancement model is generally adopted to enhance an image, and such a model is generally trained on a content-based loss alone. However, the image scenes and categories in the training data set of a model trained only on content loss are limited and cannot cover the complex enhancement requirements of practical applications, so the resulting image enhancement model has a single enhancement effect and weak adaptive capacity. To acquire the enhanced image accurately, so that it is closer to the scene observed by the naked eye, a trained content enhancement model and a trained perception enhancement model are stored in the television in advance. The television inputs the image to be enhanced into the pre-trained content enhancement model and acquires the image to be processed that it outputs; the image to be processed is an image whose contrast and color have been enhanced, so the content enhancement model gives the input low-quality image a preliminary enhancement. The image to be processed is then input into the pre-trained perception enhancement model, whose output is the enhanced image, namely an image in which perceptual information related to the acquisition scene of the image to be enhanced has been enhanced. The perception enhancement model extracts adaptive features of the input image and thereby adds more image-related detail content to the preliminarily enhanced image, i.e., the image to be processed.
In the embodiment of the application, the image to be enhanced is enhanced through the content enhancement model and the perception enhancement model, which is equivalent to an enhancement model comprising a content enhancement sub-model and a perception enhancement sub-model, realizing enhancement from content to perception in a multi-stage progressive process. The content enhancement model is trained on image content, while the perception enhancement model is trained on multi-dimensional image features; the perception enhancement model adds detail features related to the image's scene and category to the image to be processed and improves the adaptive capacity of enhancement, so the acquired enhanced image is of higher quality.
To obtain a trained content enhancement model, a sample set for training is stored in the embodiment of the application. The sample images in the sample set include images acquired in different scenes and at different times, as well as various unnatural images. To facilitate training the content enhancement model, the sample set also stores a labeled image for each sample image; the labeled image has the same content as the sample image but higher quality, and may in particular be an image re-shot with higher-end equipment.
After any sample image in the sample set and its corresponding labeled image are acquired, the sample image is input into the original content enhancement model, which outputs an output image corresponding to the sample image. To train the original content enhancement model, the television locally stores a loss function; a loss value corresponding to the sample image can be determined from the labeled image, the output image and the loss function, and the original content enhancement model is trained according to the loss value, specifically by adjusting its parameters. When a preset condition is satisfied, the trained content enhancement model is obtained. The preset condition may be that, among the loss values determined for the sample images in the sample set, the number of loss values smaller than a preset threshold is greater than a set number. The loss function is:
(The two equations defining the content loss are given only as images in the original publication and are not recoverable from the extracted text.)
where L_c is the loss value, Î is the output image to be processed, I_G is the labeled image, tanh is the hyperbolic tangent function, l_ca is the color shift loss, ||·||_1 denotes the L1 norm, and · denotes the inner product.
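Since the exact formulas are not recoverable, the following sketch implements one plausible reading of the description below (an L1 pixel-deviation term constraining brightness and saturation, plus a cosine-similarity color term constraining hue); the tanh placement and the weight lam are assumptions, not the patent's formula:

```python
# Plausible content loss L_c: L1 pixel term plus tanh of a cosine
# color-shift term. The exact patent formula is an unextracted image;
# `lam` and the placement of tanh are assumptions.
import torch
import torch.nn.functional as F

def content_loss(output: torch.Tensor, target: torch.Tensor,
                 lam: float = 0.5) -> torch.Tensor:
    """output, target: (B, 3, H, W) images in [0, 1]."""
    pixel_term = torch.abs(output - target).mean()       # L1 deviation
    cos = F.cosine_similarity(output, target, dim=1)     # per-pixel RGB angle
    color_shift = (1.0 - cos).mean()                     # l_ca-like term
    return pixel_term + lam * torch.tanh(color_shift)
```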
Fig. 6 is a schematic diagram of a training process of a content enhancement model according to an embodiment of the present application, where the process includes the following steps:
S601: any sample image in the sample set and the labeling image stored for the sample image are acquired.
S602: and inputting the sample image into the original content enhancement model, and obtaining an output image output by the original content enhancement model.
S603: and determining a corresponding loss value according to the marked image, the output image and the loss function.
S604: and training the original content enhancement model according to the loss value.
Because the content enhancement model adopts a relatively light structure with few parameters, it first learns from content, so that the network initially learns the basic rules of image enhancement. At this stage, therefore, the loss function takes the content error over the sample set as the learning direction, allowing the content enhancement model to output images whose contrast and color have received a basic preliminary enhancement. The pixel deviation between an input low-quality image and the corresponding high-quality image in the sample set constrains the brightness and saturation enhancement direction, and the cosine similarity between the two constrains the hue enhancement direction.
A common image enhancement model is trained by taking the pixel difference between the low-quality images and the corresponding high-quality images in the training set as the loss, so that it learns the modification pattern of the data distribution in the training set. In a real scene, however, enhancing the contrast and color of an image to be enhanced is a complex process: besides the data distribution of the image, information such as its scene and category must be referenced to achieve an enhancement effect that conforms to the visual characteristics of the human eye. The perception enhancement model provided in the embodiment of the application can achieve such an effect.
The content enhancement model consists of a condition coding module, a feature mapping module and a content reconstruction module. The feature mapping module further extracts the global features of the image to be enhanced and maps them to local features used for enhancing the image. The condition coding module outputs modulation information, which modulates the features output by the feature mapping module to obtain features that are more image-adaptive. Finally, the content reconstruction module analyzes these features and reconstructs the image to be processed with preliminarily enhanced contrast and color. The modulation operation performs channel-wise multiplication and addition between the 1×1, 64-channel feature maps output by the condition coding module and the feature map output by the feature mapping module.
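Read as a scale-and-shift along the channel dimension (one plausible interpretation of "multiply and add" above; splitting the condition code into separate scale and shift vectors is an assumption), the modulation can be sketched as:

```python
# Channel-wise modulation sketch; names are illustrative, and splitting
# the condition code into scale and shift halves is an assumption.
import torch

def modulate(features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    """features: (B, 64, H, W); cond: (B, 128, 1, 1) condition code."""
    scale, shift = cond.chunk(2, dim=1)   # two (B, 64, 1, 1) vectors
    return features * scale + shift      # broadcast over H and W
```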
Fig. 7 is a schematic structural diagram of a content enhancement model according to an embodiment of the present application.
As can be seen from Fig. 7, the content enhancement model is composed of 3×3 Conv layers, an activation function and a global average pooling layer (Global Avg Pooling). The activation function used in the content enhancement model is the rectified linear unit (ReLU). In Fig. 7, the 3×3 Conv, ReLU, Global Avg Pooling, 3×3 Conv and 3×3 Conv of the first row form the condition coding module; the 3×3 Conv, ReLU and 3×3 Conv of the second row form the feature mapping module; and the 3×3 Conv, ReLU and 3×3 Conv on the right side form the content reconstruction module. The numbers 3, 16, 32 and 64 appearing in Fig. 7 are channel counts.
Fig. 8 is a schematic block diagram of the inside of a content enhancement model according to an embodiment of the present application.
As can be seen from Fig. 8, the content enhancement model includes a condition coding module, a feature mapping module and a content reconstruction module. The condition coding module and the feature mapping module each process the input of the content enhancement model, and the content reconstruction module processes the outputs of both.
In order to accurately acquire the probability that the image to be enhanced is an unnatural image and is a natural image, in the embodiments of the present application, after the image to be enhanced is acquired, before the image to be enhanced is input into the pre-trained unnatural image recognition model, the method further includes:
inputting the image to be enhanced into a feature extraction network, and obtaining the image global feature of the image to be enhanced, which is output by the feature extraction network;
the step of inputting the image to be enhanced into a pre-trained unnatural image recognition model comprises the following steps:
and inputting the image global features of the image to be enhanced into a pre-trained unnatural image recognition model.
In order to accurately acquire the probability that the image to be enhanced is an unnatural image and is a natural image, the television locally stores a feature extraction network in advance, the television can input the image to be enhanced into the feature extraction network after receiving the image to be enhanced, and acquire the image global feature of the image to be enhanced output by the feature extraction network. The purpose of the feature extraction network is to extract image global features that can be shared by the unnatural image recognition model and the content enhancement model.
Fig. 9 is a schematic structural diagram of a feature extraction network according to an embodiment of the present application.
As can be seen from Fig. 9, the feature extraction network is composed of 3×3 Conv layers and an activation function; the activation function used is ReLU, and the structure of the feature extraction network is, in order: 3×3 Conv, ReLU, 3×3 Conv, ReLU. The numbers 3, 8 and 16 appearing in Fig. 9 are channel counts.
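A minimal sketch of this shared network (the 3 to 8 to 16 channel progression is read from Fig. 9; padding to preserve spatial size is an assumption):

```python
# Shared feature extraction network of Fig. 9; padding=1 is an assumption
# so the global features keep the input's spatial size.
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
```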
After the image global feature of the image to be enhanced is obtained, the image global feature of the image to be enhanced can be input into an unnatural image recognition model which is trained in advance, and the output of the unnatural image recognition model is obtained, namely the probability that the image to be enhanced is an unnatural image and a natural image. In addition, the television can also input the image global features of the image to be enhanced into the content enhancement model to obtain the image to be processed output by the content enhancement model.
Fig. 10 is a detailed schematic diagram of an image enhancement process according to an embodiment of the present application, where the process includes the following steps:
in fig. 10, the probability that the image to be enhanced is an unnatural image and a natural image is acquired first, and then the enhanced image corresponding to the image to be enhanced is acquired.
S1001: and acquiring an image to be enhanced.
S1002: and inputting the image to be enhanced into a feature extraction network to obtain the image global features of the image to be enhanced.
S1003: inputting the image global features of the image to be enhanced into an unnatural image recognition model, and obtaining the probability that the image to be enhanced is an unnatural image and is a natural image.
S1004: and acquiring an enhanced image corresponding to the image to be enhanced based on the content enhanced model and the perception enhanced model.
S1005: and processing the image to be enhanced and the enhanced image according to the probability that the image to be enhanced is an unnatural image and a natural image, and obtaining a corresponding fused target image.
Specifically, how to acquire the target image has been described in the above embodiments, and will not be described here again.
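Putting steps S1001 to S1005 together, a hedged end-to-end sketch follows; the model objects stand for the trained networks described above, and their exact interfaces are illustrative assumptions:

```python
# Illustrative pipeline for S1001-S1005. `feature_extractor`, `recognizer`,
# `content_model` and `perception_model` are stand-ins for the trained
# networks; their interfaces are assumptions, not patent code.
import torch

def enhance(image: torch.Tensor, feature_extractor, recognizer,
            content_model, perception_model) -> torch.Tensor:
    feats = feature_extractor(image)                # S1002
    p_unnatural, p_natural = recognizer(feats)[0]   # S1003 (batch of 1)
    to_process = content_model(feats)               # S1004, stage 1
    enhanced = perception_model(to_process)         # S1004, stage 2
    return p_unnatural * image + p_natural * enhanced   # S1005 fusion
```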
In order to accurately acquire the unnatural image recognition model, based on the above embodiments, in the embodiments of the present application, the unnatural image recognition model is trained by:
acquiring any first sample image in a first sample set and the labeling probabilities that the first sample image is an unnatural image and a natural image;
inputting the first sample image into an original recognition model, and acquiring the output probabilities, output by the original recognition model, that the first sample image is an unnatural image and a natural image;
determining a first loss value corresponding to the first sample image according to the labeling probabilities, the output probabilities and a first loss function;
and training the original recognition model according to the first loss value.
To obtain an unnatural image recognition model, a first sample set for training is stored. The first sample images in the first sample set include images acquired in different scenes and at different times, as well as various unnatural images. To facilitate training the original recognition model, the first sample set also stores, for each first sample image, the labeling probabilities that the image is an unnatural image and a natural image.
After any first sample image in the first sample set and its labeling probabilities are acquired, the first sample image is input into the original recognition model, which outputs the probabilities that the first sample image is an unnatural image and a natural image. To train the original recognition model and obtain the trained unnatural image recognition model, a first loss function is stored locally in the television; a first loss value corresponding to the first sample image can be determined from the labeling probabilities, the output probabilities and the first loss function, and the original recognition model is trained according to the first loss value. When a preset condition is satisfied, the trained unnatural image recognition model is obtained. The preset condition may be that, among the first loss values determined for the first sample images in the first sample set, the number smaller than a preset threshold is greater than a set number.
To obtain the trained unnatural image recognition model, in the embodiment of the present application the first loss function is the binary cross-entropy:
l = -[ y_i · log(p_i) + (1 - y_i) · log(1 - p_i) ]
where l is the first loss value, i is the identity of the first sample image, y_i is the labeling probability that the first sample image i is an unnatural image, p_i is the output probability that the first sample image i is an unnatural image, 1 - y_i is the labeling probability that the first sample image i is a natural image, and 1 - p_i is the output probability that the first sample image i is a natural image.
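This is standard binary cross-entropy; an equivalent one-liner in PyTorch (illustrative values):

```python
# Binary cross-entropy, matching l = -[y*log(p) + (1-y)*log(1-p)].
import torch
import torch.nn.functional as F

p = torch.tensor([0.9])   # model's probability that the image is unnatural
y = torch.tensor([1.0])   # label: 1.0 = unnatural, 0.0 = natural
loss = F.binary_cross_entropy(p, y)
```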
Since natural images and unnatural images differ greatly in texture, color and the like, they are easily distinguished by an unnatural image recognition model. Unnatural images are generally designed by their authors according to the authors' intent; their details and colors at different brightness levels are already reasonable, and further enhancement easily destroys the information the author wants the image to convey. The embodiment of the application therefore provides the unnatural image recognition model to distinguish unnatural images, the better to improve the user's visual experience. If the image to be enhanced is an unnatural image, the output probability that it is an unnatural image is 1 and the generated target image is the image to be enhanced; if it is a natural image, the output probability that it is a natural image is 1 and the generated target image is the enhanced image.
Fig. 11 is a schematic diagram of a process for training an unnatural image recognition model according to an embodiment of the present application, where the process includes the following steps:
s1101: any one of the first sample images in the first sample set and the labeling probability that the first sample image is an unnatural image and a natural image are obtained.
S1102: inputting the first sample image into an original recognition model, and obtaining the output probability that the first sample image output by the original recognition model is an unnatural image and a natural image.
S1103: and determining a first loss value corresponding to the first sample image according to the labeling probability, the output probability and the first loss function.
S1104: and training the original recognition model according to the first loss value.
In order to obtain a trained perceptual enhancement model, based on the above embodiments, in an embodiment of the present application, the perceptual enhancement model is trained by:
acquiring any second sample image in a second sample set and the labeled image to be processed corresponding to the second sample image; inputting the second sample image into the content enhancement model, and acquiring the output image to be processed output by the content enhancement model;
inputting the image to be processed into an original perception enhancement model, to obtain the output enhanced image output by the original perception enhancement model;
determining a second loss value corresponding to the second sample image according to a second loss function and the feature maps obtained by passing the labeled image to be processed and the output enhanced image through a VGG network;
and training the original perception enhancement model according to the second loss value determined for each second sample image in the second sample set.
To obtain the perception enhancement model, a second sample set for training is stored in the embodiment of the present application. The second sample images in the second sample set include images acquired in different scenes and at different times, as well as various unnatural images. To facilitate training the original perception enhancement model, the second sample set also stores a labeled image to be processed for each second sample image; the labeled image to be processed corresponding to a second sample image is an image acquired after adjusting the acquisition equipment and having enhanced contrast and color.
After any second sample image in the second sample set and its labeled image to be processed are acquired, the second sample image is input into the content enhancement model, whose output is the output image to be processed. The image to be processed is input into the original perception enhancement model to obtain the output enhanced image. The labeled image to be processed and the output enhanced image can then be input into a VGG network to obtain their feature maps, which may be the feature maps of a certain layer l, and the second loss value corresponding to the second sample image is determined from the two feature maps and the second loss function. The original perception enhancement model is trained according to the second loss value. When a preset condition is satisfied, the trained perception enhancement model is obtained. The preset condition may be that, among the second loss values determined for the second sample images in the second sample set, the number smaller than a preset threshold is greater than a set number.
To obtain a trained perception enhancement model, on the basis of the above embodiments, in the embodiment of the present application the second loss function is (the formula appears only as an image in the original publication; the form below is reconstructed from the accompanying definitions and matches the standard perceptual loss):
L_p = (1 / N_l) · ||φ_l(Î) - φ_l(I_G)||_1
where φ_l(I_G) and φ_l(Î) respectively denote the feature maps obtained by passing the labeled image to be processed and the output enhanced image through the VGG network, N_l denotes the total number of elements in the feature map output by the VGG network, and ||·||_1 denotes the L1 norm; how to compute an L1 norm is known in the art and is not described here again.
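A sketch of this loss under the reconstruction above; the choice of VGG16 and of truncating at relu2_2 is an assumption, since the patent does not name the layer:

```python
# Perceptual loss sketch: L1 distance between VGG feature maps, normalized
# by the number of elements. VGG16 and the relu2_2 cut are assumptions.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:9].eval()
for param in vgg.parameters():
    param.requires_grad_(False)

def perceptual_loss(enhanced: torch.Tensor, labeled: torch.Tensor) -> torch.Tensor:
    """enhanced, labeled: (B, 3, H, W), ImageNet-normalized."""
    f_enh, f_ref = vgg(enhanced), vgg(labeled)
    return torch.abs(f_enh - f_ref).sum() / f_enh.numel()
```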
Typically, a neural-network image enhancement model is trained with content alone as the loss function. In practical applications, however, low-quality images cover rich scenes and categories and their distribution is very complex. The image types in a sample set can hardly cover actual application requirements; if content alone guides training, the enhancement direction cannot adapt to differently distributed inputs, and the model's enhancement effect is monotonous. The embodiment of the application introduces the perception enhancement model, taking the perceptual error as the guiding direction and endowing the preliminarily enhanced image output by the content enhancement model with the specific details required by different scenes and categories, so that the model learns contrast and color enhancement rules closer to the intrinsic ones, the expressive capacity of the model is enriched, and the output enhanced image has a better visual effect for the human eye.
FIG. 12 is a schematic diagram of a process for training a perception enhancement model according to an embodiment of the present application, the process including the following steps:
S1201: acquiring any second sample image in the second sample set and the labeled image to be processed corresponding to the second sample image; inputting the second sample image into the content enhancement model, and acquiring the output image to be processed output by the content enhancement model.
S1202: inputting the image to be processed into the original perception enhancement model to obtain the output enhanced image output by the original perception enhancement model.
S1203: determining a second loss value corresponding to the second sample image according to the second loss function and the feature maps obtained by passing the labeled image to be processed and the output enhanced image through the VGG network.
S1204: training the original perception enhancement model according to the second loss value determined for each second sample image in the second sample set.
The perception enhancement model in the embodiment of the application is formed by cascaded residual modules. A general image enhancement neural model is trained only with a loss based on image content; the embodiment of the application introduces the perception enhancement model, intending to endow the modulated local features of the image to be processed generated by the content enhancement model with perceptual information, so as to capture characteristics such as the scene and category of the current low-quality input. The perception enhancement sub-network can therefore enhance adaptively according to the characteristics of the input image, improving the overall adaptive capacity of the model and giving better performance on the complex inputs of practical applications.
Fig. 13 is a schematic structural diagram of a perception enhancement model according to an embodiment of the present application.
As can be seen from Fig. 13, the perception enhancement model is composed of 3×3 Conv layers and an activation function; the activation function used is ReLU, and the structure of the perception enhancement model is, in order: 3×3 Conv, ReLU, 3×3 Conv, ReLU, 3×3 Conv. The numbers 64 and 3 appearing in Fig. 13 are channel counts.
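Combining Fig. 13 with the cascaded-residual description above, one residual module might look like the sketch below; the placement of the skip connection is an assumption read from the text, not from the figure:

```python
# One residual module of the perception enhancement model; the skip
# connection's placement is an assumption.
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)
```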
In order to improve image quality, based on the above embodiments, in this embodiment of the present application, determining a target pixel value of a corresponding pixel of the pixel in the fused target image includes:
determining a first product of the first pixel value and the corresponding weight and a second product of the second pixel value and the corresponding weight; and determining the sum of the first product and the second product as a target pixel value of a corresponding pixel point of the pixel point in the fused target image.
To improve image quality, the television may determine a first product of the first pixel value and its corresponding weight, which here is the probability that the image to be enhanced is a natural image, and a second product of the second pixel value and its corresponding weight, which here is the probability that the image to be enhanced is an unnatural image. After the first product and the second product are determined, their sum may be taken as the target pixel value of the corresponding pixel in the fused target image.
That is, once the probability p that the image to be enhanced is an unnatural image and the probability 1 - p that it is a natural image are obtained, the enhanced image and the image to be enhanced are fused in the proportions 1 - p and p respectively to obtain the target image.
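This fusion is a per-pixel weighted sum; a two-line sketch (tensor shapes are illustrative):

```python
# Per-pixel fusion: target = p * original + (1 - p) * enhanced, where p is
# the probability that the input is an unnatural image.
import torch

def fuse(original: torch.Tensor, enhanced: torch.Tensor, p: float) -> torch.Tensor:
    return p * original + (1.0 - p) * enhanced
```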
Fig. 14 is a schematic structural diagram of an image enhancement device according to an embodiment of the present application, as shown in fig. 14, the device includes:
the receiving and acquiring module 1401 is configured to acquire an image to be enhanced, input the image to be enhanced into a pre-trained unnatural image recognition model, and acquire the probabilities, output by the unnatural image recognition model, that the image to be enhanced is an unnatural image and a natural image; and acquire an enhanced image corresponding to the image to be enhanced;
the processing module 1402 is configured to, for each pixel in the enhanced image, acquire a first pixel value of the pixel in the enhanced image and a second pixel value of the corresponding pixel in the image to be enhanced, take the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second pixel value and the first pixel value respectively, and determine a target pixel value of the corresponding pixel in the fused target image.
In a possible implementation manner, the receiving and acquiring module 1401 is specifically configured to input the image to be enhanced into a content enhancement model, and acquire a contrast-enhanced and color-enhanced image to be processed output by the content enhancement model; inputting the image to be processed into a perception enhancement model, and obtaining an enhanced image output by the perception enhancement model; the enhanced image is an image for enhancing and collecting scene-related perception information of the image to be enhanced.
In a possible implementation manner, the receiving and acquiring module 1401 is further configured to input the image to be enhanced into a feature extraction network, and acquire an image global feature of the image to be enhanced output by the feature extraction network;
the receiving and acquiring module 1401 is specifically configured to input the image global feature of the image to be enhanced into a pre-trained unnatural image recognition model.
In a possible implementation manner, the processing module 1402 is further configured to obtain any one of the first sample images in the first sample set and a labeling probability that the first sample image is an unnatural image and is a natural image;
inputting the first sample image into an original recognition model, and acquiring the output probability that the first sample image output by the original recognition model is an unnatural image and a natural image; determining a first loss value corresponding to the first sample image according to the labeling probability, the output probability and a first loss function; and training the original recognition model according to the first loss value.
In a possible implementation manner, the processing module 1402 is further configured to obtain any one of the second sample images in the second sample set and a labeling image to be processed corresponding to the second sample image; inputting the second sample image into a content enhancement model, and obtaining an output image to be processed output by the content enhancement model; inputting the image to be processed into an original perception enhancement model to obtain an output enhancement image output by the original perception enhancement model; determining a second loss value corresponding to the second sample image according to the feature images obtained by labeling the image to be processed and the output enhanced image through a VGG network and a second loss function; training the original perceptual enhancement model according to a second loss value determined for each second sample image correspondence in the second sample set.
In a possible implementation manner, the processing module 1402 is specifically configured to determine a first product of the first pixel value and the corresponding weight, and a second product of the second pixel value and the corresponding weight; and determining the sum of the first product and the second product as a target pixel value of a corresponding pixel point of the pixel point in the fused target image.
Based on the same inventive concept, Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application; as shown in Fig. 15, it includes one or more processors 1501 (two are shown) and a communication interface 1502.
Optionally, the electronic device further comprises a memory 1503. The memory 1503 may comprise read-only memory and random access memory, and provides operating instructions and data to the processor 1501. A portion of the memory 1503 may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some embodiments, as shown in fig. 15, the memory 1503 stores elements, execution modules or data structures, or a subset or an extended set thereof. The processor 1501 controls the operation of the electronic device by calling the operation instructions stored in the memory 1503 to perform corresponding operations; the processor 1501 may also be referred to as a central processing unit (central processing unit, CPU).
The components of the electronic device, such as the communication interface 1502 and the memory 1503, are coupled together by a bus system 1504. In addition to a data bus, the bus system 1504 may include a power bus, a control bus, a status signal bus, and the like; for clarity of illustration, the various buses are labeled as the bus system 1504 in fig. 15.
The methods disclosed in some embodiments of the present application may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software.
On the basis of the above embodiments, the embodiments of the present application further provide a computer readable storage medium storing a computer program executable by an electronic device, and when the program runs on the electronic device, the electronic device is caused to implement the methods disclosed in some embodiments of the present application.
Since the principle by which the above computer readable storage medium solves the problem is similar to that of the image enhancement method, for its implementation reference may be made to the embodiments of the method, and the repeated description is omitted.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A method of image enhancement, the method comprising:
acquiring an image to be enhanced, inputting the image to be enhanced into a pre-trained unnatural image recognition model, and acquiring the probabilities, output by the unnatural image recognition model, that the image to be enhanced is an unnatural image and a natural image; acquiring an enhanced image corresponding to the image to be enhanced;
and for each pixel point in the enhanced image, acquiring a first pixel value of the pixel point in the enhanced image and a second pixel value of the corresponding pixel point in the image to be enhanced, taking the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second pixel value and the first pixel value respectively, and determining a target pixel value of the corresponding pixel point in the fused target image.
2. The method of claim 1, wherein the acquiring the enhanced image corresponding to the image to be enhanced comprises:
inputting the image to be enhanced into a content enhancement model, and acquiring the contrast- and color-enhanced image to be processed output by the content enhancement model;
inputting the image to be processed into a perception enhancement model, and acquiring the enhanced image output by the perception enhancement model, wherein the enhanced image is an image in which the perception information related to the acquisition scene of the image to be enhanced is enhanced.
3. The method of claim 1, wherein after the acquiring of the image to be enhanced and before the inputting of the image to be enhanced into the pre-trained unnatural image recognition model, the method further comprises:
inputting the image to be enhanced into a feature extraction network, and acquiring the global image feature of the image to be enhanced output by the feature extraction network;
the inputting of the image to be enhanced into the pre-trained unnatural image recognition model comprises:
inputting the global image feature of the image to be enhanced into the pre-trained unnatural image recognition model.
4. The method of claim 1, wherein the unnatural image recognition model is trained by:
acquiring any first sample image in a first sample set and the labeled probabilities that the first sample image is an unnatural image and a natural image;
inputting the first sample image into an original recognition model, and acquiring the output probabilities, output by the original recognition model, that the first sample image is an unnatural image and a natural image;
determining a first loss value corresponding to the first sample image according to the labeled probabilities, the output probabilities and a first loss function;
and training the original recognition model according to the first loss value.
5. The method of claim 4, wherein the first loss function is:
$l_c = -\left[ y_i \cdot \log(p_i) + (1 - y_i) \cdot \log(1 - p_i) \right]$
wherein $y_i$ denotes the labeled probability that the first sample image $i$ is an unnatural image, $p_i$ denotes the output probability that the first sample image $i$ is an unnatural image, $1-y_i$ denotes the labeled probability that the first sample image $i$ is a natural image, and $1-p_i$ denotes the output probability that the first sample image $i$ is a natural image.
6. The method of claim 2, wherein the perception enhancement model is trained by:
acquiring any second sample image in a second sample set and a labeled image to be processed corresponding to the second sample image; inputting the second sample image into the content enhancement model, and acquiring the output image to be processed output by the content enhancement model;
inputting the output image to be processed into an original perception enhancement model, and acquiring the output enhanced image output by the original perception enhancement model;
determining a second loss value corresponding to the second sample image according to a second loss function and the feature maps obtained by passing the labeled image to be processed and the output enhanced image through a VGG network;
and training the original perception enhancement model according to the second loss values determined for the second sample images in the second sample set.
7. The method of claim 6, wherein the second loss function is:
$l_p = \frac{1}{N_l}\left\| \phi_l(Y) - \phi_l(\hat{Y}) \right\|_2^2$
wherein $\phi_l(Y)$ and $\phi_l(\hat{Y})$ respectively denote the feature maps obtained by passing the labeled image to be processed and the output enhanced image through the VGG network, and $N_l$ denotes the total number of elements in the feature map output by the VGG network.
8. The method of claim 1, wherein the determining a target pixel value of the corresponding pixel point in the fused target image comprises:
determining a first product of the first pixel value and its corresponding weight and a second product of the second pixel value and its corresponding weight; and determining the sum of the first product and the second product as the target pixel value of the corresponding pixel point in the fused target image.
9. An image enhancement device, the device comprising:
the receiving and acquiring module is used for acquiring an image to be enhanced, inputting the image to be enhanced into a pre-trained unnatural image recognition model, and acquiring the probabilities, output by the unnatural image recognition model, that the image to be enhanced is an unnatural image and a natural image; and acquiring an enhanced image corresponding to the image to be enhanced;
the processing module is used for, for each pixel point in the enhanced image, acquiring a first pixel value of the pixel point in the enhanced image and a second pixel value of the corresponding pixel point in the image to be enhanced, taking the probabilities that the image to be enhanced is an unnatural image and a natural image as the weights of the second pixel value and the first pixel value respectively, and determining a target pixel value of the corresponding pixel point in the fused target image.
10. An electronic device, the electronic device comprising:
a processor and a memory;
the memory is configured to store the processor-executable instructions;
the processor is configured to execute the instructions to implement the image enhancement method of any of claims 1-8.
CN202310298922.4A 2023-03-24 2023-03-24 Image enhancement method and device and electronic equipment Pending CN116309172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310298922.4A CN116309172A (en) 2023-03-24 2023-03-24 Image enhancement method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116309172A true CN116309172A (en) 2023-06-23

Family

ID=86802994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310298922.4A Pending CN116309172A (en) 2023-03-24 2023-03-24 Image enhancement method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116309172A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination