CN111383187B - Image processing method and device and intelligent terminal

Info

Publication number
CN111383187B
Authority
CN
China
Prior art keywords
image
neural network
network model
loss function
training
Prior art date
Legal status
Active
Application number
CN201811646524.2A
Other languages
Chinese (zh)
Other versions
CN111383187A (en)
Inventor
潘澄
关婧玮
俞大海
Current Assignee
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd
Priority to CN201811646524.2A
Publication of CN111383187A
Application granted
Publication of CN111383187B
Legal status: Active

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention is applicable to the field of image processing, and provides an image processing method, an image processing device, and an intelligent terminal.

Description

Image processing method and device and intelligent terminal
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image processing method and device and an intelligent terminal.
Background
Image magnification is a classical problem in the fields of computer vision and image processing, with important academic and industrial research value. The goal of image magnification is to obtain, from a given low-resolution image, its corresponding high-resolution image, so that the content information of the image is preserved or even enhanced while giving a better visual impression. Currently, mainstream image super-resolution methods fall into three categories: interpolation-based methods, reconstruction-based methods, and deep-learning-based methods.
However, although mainstream image processing methods meet the basic requirements of image amplification, imperfect design commonly leads to problems such as jagged edges, blurring, and over-smoothing. In particular, deep-learning-based image processing methods perform well on test sets, but their results when amplifying actual images are unsatisfactory, exhibiting image quality problems such as over-smoothed or unnatural edges.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide an image processing method, an image processing device, and an intelligent terminal, so as to solve the image quality problems, such as over-smoothed or unnatural edges, that arise when an image is amplified by existing image processing methods.
A first aspect of an embodiment of the present invention provides an image processing method, including:
training the first neural network model based on a preset loss function;
training the second neural network model based on the preset loss function and the trained first neural network model;
and inputting the image to be processed into the trained second neural network model, and outputting the target image.
A second aspect of an embodiment of the present invention provides an image processing apparatus including:
The first neural network model training unit is used for training the first neural network model based on a preset loss function;
the second neural network model training unit is used for training the second neural network model based on a preset loss function and the trained first neural network model;
And the image processing unit is used for inputting the image to be processed into the trained second neural network model and outputting the target image.
A third aspect of an embodiment of the present invention provides an intelligent terminal, including:
The intelligent terminal comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image processing method provided in the first aspect of the embodiments of the present invention when executing the computer program.
Wherein the computer program comprises:
The first neural network model training unit is used for training the first neural network model based on a preset loss function;
the second neural network model training unit is used for training the second neural network model based on a preset loss function and the trained first neural network model;
And the image processing unit is used for inputting the image to be processed into the trained second neural network model and outputting the target image.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the image processing method provided in the first aspect of the embodiments of the present invention.
Wherein the computer program comprises:
The first neural network model training unit is used for training the first neural network model based on a preset loss function;
the second neural network model training unit is used for training the second neural network model based on a preset loss function and the trained first neural network model;
And the image processing unit is used for inputting the image to be processed into the trained second neural network model and outputting the target image.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: the first neural network model is trained based on a preset loss function; the second neural network model is trained based on the preset loss function and the trained first neural network model; and after the second neural network model has been trained, an image to be processed is input into it to output a target image. Because the second neural network model is trained with the preset loss function and the trained first neural network model, the image it outputs has better quality: image noise is removed, and the image after amplification is neither too smooth nor afflicted by unnatural edges.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an implementation of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific implementation of a method for performing image preprocessing on a training sample image according to an embodiment of the present invention;
FIG. 3 is a flowchart of a specific implementation of a method for training a first neural network according to an embodiment of the present invention;
FIG. 4 is a flowchart of a specific implementation of a method for adjusting parameters of a first neural network model according to an embodiment of the present invention;
FIG. 5 is a flowchart of a specific implementation of a method for fine-tuning parameters of a first neural network model according to an embodiment of the present invention;
FIG. 6 is a flowchart of a specific implementation of a method for adjusting parameters of a second neural network according to an embodiment of the present invention;
FIG. 7 is a flowchart of a specific implementation of a method for adjusting parameters of a second neural network model according to an embodiment of the present invention;
FIG. 8 is a flowchart of a specific implementation of a method for fine-tuning parameters of a second neural network model according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples. Referring to fig. 1, fig. 1 shows an implementation flow of an image processing method according to an embodiment of the present invention, which is described in detail below:
in step S101, a first neural network model is trained based on a preset loss function.
In the embodiment of the present invention, the preset loss function is a multi-stage loss function composed of a pixel loss function, a feature loss function, and a generative adversarial network loss function (GAN loss).
Here, the pixel loss function measures the difference between two images at the pixel level. Functions that compute pixel loss include, but are not limited to, the L1 norm, the L2 norm, and the MSE loss. In the embodiment of the present invention, the MSE loss function is taken as the pixel loss function, with the following formula:

$$\ell_{pixel}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(I_{x,y}^{ref}-G_{\theta_G}(I^{in})_{x,y}\right)^{2}$$

In the second neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the first image HR' and the fifth image HR, where the fifth image HR is the high-resolution image output after the fourth image LR is amplified by the second neural network model; in the first neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the third image LR' and the fourth image LR, where the fourth image LR is the low-resolution image output after the first image HR' is reduced by the first neural network model. $W$ and $H$ represent the width and height of the pixel extraction matrix, and $x$ and $y$ represent the position coordinates at the corresponding width and height.
Here, the difference between the two images is measured at the pixel level by averaging the squared differences of corresponding pixels in the two images.
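To make the computation concrete, the following is a minimal PyTorch sketch of such an MSE pixel loss; the function name and tensor shapes are assumptions for illustration, not part of the embodiment.

```python
import torch

def pixel_loss(output: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """MSE pixel loss: the mean of squared per-pixel differences.

    `output` plays the role of G_theta_G(I_in) (e.g. the fourth image LR or the
    fifth image HR) and `reference` the role of I_ref (e.g. LR* or HR').
    Both tensors have shape (N, C, H, W).
    """
    return torch.mean((reference - output) ** 2)
```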
In the embodiment of the invention, the feature loss function extracts and compares the content features of the images. Feature information meeting the conditions is obtained using a deep neural network or by other means; after the feature information is converted into corresponding feature vectors, the difference between the two images is compared on the basis of the converted feature vectors. Ways of extracting the content features of an image include, but are not limited to, obtaining feature information such as brightness, contrast, and structure using a VGG network, an SSIM-based measure, and the like. Denoting the feature extraction network by $\Phi$, the feature loss function is:

$$\ell_{feature}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(\Phi(I^{ref})_{x,y}-\Phi(G_{\theta_G}(I^{in}))_{x,y}\right)^{2}$$

In the second neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the first image HR' and the fifth image HR, where the fifth image HR is the high-resolution image output after the fourth image LR is amplified by the second neural network model; in the first neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the third image LR' and the fourth image LR, where the fourth image LR is the low-resolution image output after the first image HR' is reduced by the first neural network model. $W$ and $H$ represent the width and height of the feature extraction matrix, $x$ and $y$ represent the position coordinates at the corresponding width and height, and $\Phi$ represents the neural network used to extract the content features of the image.
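A hedged sketch of such a feature loss is shown below, with a truncated pretrained VGG-19 standing in for the feature extraction network $\Phi$; the particular layer cut-off and the use of torchvision weights (torchvision 0.13 or later) are assumptions, since the embodiment only requires some network that extracts content features.

```python
import torch
import torch.nn as nn
from torchvision import models

class FeatureLoss(nn.Module):
    """Feature loss: MSE between Phi(reference) and Phi(output)."""

    def __init__(self):
        super().__init__()
        # Truncated pretrained VGG-19 acts as Phi (assumed choice of extractor).
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.phi = nn.Sequential(*vgg.features[:16]).eval()
        for p in self.phi.parameters():
            p.requires_grad = False  # Phi stays fixed; only the generator trains

    def forward(self, output: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        return torch.mean((self.phi(reference) - self.phi(output)) ** 2)
```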
The generative adversarial network loss function (GAN loss) helps the first or second neural network model generate images whose distribution resembles that of the target images, based on the idea of adversarial training. During training of the first or second neural network model, a discrimination loss is calculated from the preset label value and the real label value, and the parameters of the model are adjusted and optimized so that the discrimination loss reaches its minimum or a preset value. The output image is then an image whose distribution resembles that of the target image, so that a better training effect is achieved on the first or second neural network model; finally, the second neural network model can output an image close to the training target, that is, an image closest to the real image.
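As an illustration only, the following sketch shows one common way to compute such adversarial losses, using the binary cross-entropy (non-saturating) formulation; the description does not fix a specific GAN variant, so this choice is an assumption. The helper is reused in the training sketches further below.

```python
import torch
import torch.nn.functional as F

def gan_losses(disc, real_imgs, fake_imgs):
    """Adversarial losses for a discriminator `disc` that returns real/fake logits.

    Returns (discriminator loss, generator loss). The discriminator is trained to
    tell real_imgs from fake_imgs; the generator is trained so that its output is
    labelled real, i.e. so that the two distributions become hard to distinguish.
    """
    real_logits = disc(real_imgs)
    fake_logits = disc(fake_imgs.detach())  # no gradients into the generator here
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    gen_logits = disc(fake_imgs)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```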
Here, the first neural network model is a neural network model that reduces an original high-resolution image, its purpose being to provide reduced images with realistic noise; that is, a reduced image output by the first neural network model is a low-resolution image carrying real noise. To this end, the image input to the first neural network model is a noisy high-resolution image, and during training each parameter of the first neural network model is adjusted so that the output reduced image carries the same noise as a real image. This further improves the training of the second neural network model, so that the image output by the second neural network model has higher quality and is closer to a real image.
Optionally, before step S101, the method further includes the following steps:
Acquiring a training sample image HR*;
And performing image preprocessing on the training sample image HR* to obtain at least two training sample image copies of the training sample image HR*, wherein the at least two training sample image copies form an image group.
Optionally, the training sample image copy is a first image HR', a second image LR*, or a third image LR'. Referring to fig. 2, fig. 2 shows a specific implementation flow of a method for performing image preprocessing on a training sample image according to an embodiment of the present invention, described in detail below:
In step S201, the training sample image HR* is subjected to noise reduction processing, so as to obtain a corresponding first image HR'.
In the embodiment of the present invention, the purpose of performing noise reduction on the training sample image HR* is to obtain a noise-free high-resolution image; that is, the first image HR' is specifically a noise-free high-resolution image. It can be obtained by denoising the image with a single-frame denoising algorithm such as BM3D, or by taking any one of several images shot in the same scene, each containing different noise, as a reference frame, aligning the remaining images with the reference frame, stacking them together, and then performing a spatial convolution operation to output the high-resolution image.
In step S202, an image downsampling process is performed on the training sample image HR* to obtain a corresponding second image LR*.
In the embodiment of the present invention, the second image LR* is specifically a noisy low-resolution image, obtained by performing image downsampling processing on the training sample image HR*.
In step S203, the training sample image HR* is subjected to random clipping processing, so as to obtain a corresponding third image LR'.
In the embodiment of the present invention, the third image LR' is also a noisy low-resolution image, obtained by performing random clipping processing on the training sample image HR*.
It is understood that steps S201 to S203 need not be performed in the order described; they may also be performed simultaneously.
In an embodiment of the invention, the first image HR', the second image LR*, and the third image LR' form a group of images for training the first neural network model and the second neural network model. Training the first or second neural network model with the images of this image group enables the trained second neural network model to output amplified images of better quality and higher resolution.
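The three preprocessing steps could be sketched as follows; the bicubic downsampling filter, the crop size, and the pluggable `denoise` callable (standing in for BM3D or the multi-frame fusion described above) are assumptions for illustration.

```python
import random
import torch
import torch.nn.functional as F

def make_image_group(hr_star: torch.Tensor, denoise, scale: int = 4, crop: int = 48):
    """Build the image group (HR', LR*, LR') from a training sample image HR*.

    `hr_star`: noisy high-resolution sample of shape (1, C, H, W) with H, W > crop.
    `denoise`: a single-image denoiser supplied by the caller (e.g. a BM3D wrapper).
    """
    hr_prime = denoise(hr_star)                                   # S201: noise-free HR'
    lr_star = F.interpolate(hr_star, scale_factor=1.0 / scale,
                            mode='bicubic', align_corners=False)  # S202: downsampled LR*
    _, _, h, w = hr_star.shape
    top = random.randint(0, h - crop)                             # S203: random crop LR'
    left = random.randint(0, w - crop)
    lr_prime = hr_star[..., top:top + crop, left:left + crop]
    return hr_prime, lr_star, lr_prime
```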
Optionally, referring to fig. 3, fig. 3 shows a specific implementation flow of a method for training a first neural network according to an embodiment of the present invention, which is described in detail below:
In step S301, the image group is input into the first neural network model for pre-training based on the pixel loss function and the feature loss function, so as to adjust each parameter in the first neural network model.
In the embodiment of the invention, a problem to be addressed is that the data distribution of the images used to train a neural network model can be far from that of actual images, so that the model's applicability to actual images is low. For example, an image B is commonly obtained from an image A by image downsampling, so that each pixel in image B is a fusion of several pixels of image A; that is, image B carries far more information than its size suggests, with much of image A's information hidden inside it. If a neural network is trained on a set of such images B, it can fully mine the information hidden in image B when modeling the output image C, thereby achieving a good super-resolution effect on test data, yet its effect when applied to actual images is very poor.
Here, the primary purpose of the first neural network model is to provide a low resolution image that resembles a real image.
Optionally, referring to fig. 4, fig. 4 shows a specific implementation flow of a method for adjusting parameters of a first neural network model according to an embodiment of the present invention, which is described in detail below:
in step S401, the first image HR' is input to the first neural network model for processing, and then a fourth image LR is obtained.
In the embodiment of the invention, the first neural network model serves to improve the authenticity of the training sample images, avoiding the drawback that manually adding noise produces effects in practical applications that do not match expectations and cannot yield image sample data close to real noise, which would leave the training effect insignificant.
Here, in the process of training the first neural network model, the noisy high-resolution image, i.e., the first image HR', is first input into the first neural network model to perform the reduction processing, and then a corresponding noisy low-resolution image, i.e., the fourth image LR, is obtained.
In step S402, a first value of the fourth image LR and the second image LR* is calculated based on the pixel loss function and the feature loss function.
In the embodiment of the invention, after the fourth image LR is obtained from the first neural network model, a loss value, namely the first value, between the fourth image LR and the second image LR* (obtained by downsampling the training sample image HR*) is first calculated based on the pixel loss function and the feature loss function. The parameters in the first neural network model are then adjusted according to the first value so that the first value reaches its minimum or a preset value, thereby optimizing the first neural network model so that the image it outputs is a low-resolution image closer to reality.
In step S403, respective parameters in the first neural network model are adjusted based on the first numerical value.
In the embodiment of the invention, the parameters of the first neural network model are adjusted by repeatedly computing the first value from the fourth image LR and the second image LR*, so that the parameters of the first neural network model become optimal and the model can output a low-resolution image that more closely resembles a real image.
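Put together, steps S401 to S403 amount to a training loop of the following shape; the optimizer, learning rate, epoch count, and the weighting between the two loss terms are assumptions.

```python
import torch

def pretrain_first_model(model, loader, feature_loss, epochs=10, lr=1e-4, w_feat=1.0):
    """Pre-train the first (reducing) network: LR = model(HR'), and the first
    value is the pixel loss plus feature loss between LR and LR* (S401-S403)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for hr_prime, lr_star, _lr_prime in loader:
            lr_out = model(hr_prime)                               # S401: fourth image LR
            first_value = (torch.mean((lr_star - lr_out) ** 2)     # S402: pixel term
                           + w_feat * feature_loss(lr_out, lr_star))  # feature term
            opt.zero_grad()
            first_value.backward()                                 # S403: adjust parameters
            opt.step()
```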
In step S302, the pre-trained first neural network model is trained based on the generative adversarial network loss function GAN loss, so as to achieve fine-tuning of each parameter in the first neural network model.
In the embodiment of the invention, GAN loss is based on the adversarial idea and helps the neural network output images similar to the target data distribution. It is calculated from the output of a discriminating network: the harder it is for the discriminating network to correctly distinguish the image output by the neural network from the reference image, the smaller the GAN loss. During training, the discriminating network is updated iteratively as well, so that each parameter of the neural network becomes optimal, that is, the GAN loss reaches its minimum.
Optionally, referring to fig. 5, fig. 5 shows a specific implementation flow of a method for fine tuning parameters of a first neural network model according to an embodiment of the present invention, which is described in detail below:
In step S501, a second value of the fourth image LR and the third image LR' is calculated based on the generative adversarial network loss function GAN loss.
In the embodiment of the present invention, the fourth image LR and the third image LR' are input into the discriminating network to calculate the GAN loss, i.e., the second value of the fourth image LR and the third image LR'.
In step S502, respective parameters in the first neural network model are fine-tuned based on the second values.
In the embodiment of the invention, because GAN loss is highly abstract, it is difficult to obtain an ideal training effect if GAN loss alone is used as the loss function of a neural network model. Therefore, when GAN loss is used to adjust the parameters of the first or second neural network model, the parameters are first adjusted based on the pixel loss function and the feature loss function, and then fine-tuned based on GAN loss, so that the first or second neural network model achieves a better training effect.
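Under those caveats, the fine-tuning stage of steps S501 and S502 might look like the sketch below, which reuses the gan_losses helper from earlier; the discriminating network `disc` is assumed to accept inputs of differing sizes (for example by ending in global pooling), since the fourth image LR and the cropped third image LR' need not match exactly.

```python
import torch

def finetune_first_model(model, disc, loader, gan_losses, epochs=5, lr=1e-5):
    """Fine-tune the pre-trained first network with GAN loss: the second value is
    computed by the discriminating network from the fourth image LR (fake) and
    the third image LR' (real noisy crops), per steps S501-S502."""
    g_opt = torch.optim.Adam(model.parameters(), lr=lr)
    d_opt = torch.optim.Adam(disc.parameters(), lr=lr)
    for _ in range(epochs):
        for hr_prime, _lr_star, lr_prime in loader:
            lr_out = model(hr_prime)
            d_loss, second_value = gan_losses(disc, real_imgs=lr_prime, fake_imgs=lr_out)
            g_opt.zero_grad(); second_value.backward(); g_opt.step()  # fine-tune generator
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()        # keep disc competitive
```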
In step S102, the second neural network model is trained according to the preset loss function and the trained first neural network model.
In the embodiment of the invention, the second neural network model is a network model for amplifying a low-resolution image; its aim is to output an image with better image quality, so that the output meets people's expectations and looks more real. To enable the second neural network model to better simulate noisy images and output higher-quality images that meet expectations, a noisy low-resolution image, namely the image output by the first neural network model, is input into the second neural network model, and each parameter of the second neural network model is adjusted based on the preset loss function, so that the second neural network model can output an image with higher image quality that is closer to a real image.
Here, the first neural network model and the second neural network model implement a conversion of image resolution, which may be realized with convolutional neural networks, residual networks, or other networks; as long as the functional requirement of resolution conversion is met, the internal structure is not limited. For example, an existing super-resolution network structure can be modified, such as by adding or removing BatchNorm layers and adding or reducing skip connections, so that the network is easier to train and the training goal is reached.
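For concreteness, the sketch below shows a minimal residual upscaling network of the kind alluded to (global and local skip connections, no BatchNorm, sub-pixel upsampling); the depth, channel width, and x2 scale factor are assumptions and not mandated by the embodiment.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # local skip connection eases training

class UpscaleNet(nn.Module):
    """Illustrative second neural network model: low-resolution in, x2 resolution out."""

    def __init__(self, ch: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.up = nn.Sequential(nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2))
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.body(feat)  # global skip connection
        return self.tail(self.up(feat))
```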
Here, step S102 specifically includes:
Training the second neural network model according to the preset loss function and the fourth image LR output by the trained first neural network model.
Optionally, referring to fig. 6, fig. 6 shows a specific implementation flow of a method for adjusting parameters of a second neural network according to an embodiment of the present invention, which is described in detail below:
In step S601, the image group and the fourth image LR output by the trained first neural network model are input into the second neural network model for training based on the pixel loss function and the feature loss function, so as to implement adjustment of each parameter in the second neural network model.
In the embodiment of the invention, after the first neural network model is trained and the expected effect is achieved, the fourth image LR output by the first neural network model is input into the second neural network model for training, so that the second neural network model can output a noisy high-resolution image, namely a high-resolution image close to a real image after training.
During training of the second neural network model, a loss value between the noisy high-resolution image output by the second neural network model, namely the fifth image HR, and the first image HR' is calculated based on the pixel loss function and the feature loss function, and each parameter in the second neural network model is adjusted according to this loss value until it reaches its minimum, ensuring that the second neural network model better simulates the noise environment of a real image and outputs a high-resolution image closer to the real image.
Optionally, referring to fig. 7, fig. 7 shows a specific implementation flow of a method for adjusting parameters of a second neural network model according to an embodiment of the present invention, which is described in detail below:
In step S701, the fourth image LR output from the trained first neural network model is input to the second neural network model and processed, and a fifth image HR is obtained.
In step S702, third values of the fifth image HR and the first image HR' are calculated based on the pixel loss function and the feature loss function.
In an embodiment of the invention, the fifth image HR is the noisy high-resolution image output by the second neural network model. The pixel loss function and the feature loss function are computed between the fifth image HR and the noise-free high-resolution image, i.e., the first image HR', to give a loss value, namely the third value, which determines whether the parameters of the second neural network model need adjusting.
In step S703, respective parameters in the second neural network model are adjusted based on the third numerical value.
In the embodiment of the invention, when the third value has not reached its minimum or a preset value, each parameter in the second neural network model is further adjusted so that the third value reaches the minimum or preset value, thereby optimizing the second neural network model so that it outputs a high-resolution image closer to reality.
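Steps S701 to S703 mirror the first network's pre-training; a sketch, with the same assumed optimizer and loss weighting, follows.

```python
import torch

def train_second_model(g2, g1, loader, feature_loss, epochs=10, lr=1e-4, w_feat=1.0):
    """Train the second (amplifying) network g2 on the fourth images LR produced
    by the trained first network g1; the third value compares HR = g2(LR) with HR'."""
    g1.eval()
    opt = torch.optim.Adam(g2.parameters(), lr=lr)
    for _ in range(epochs):
        for hr_prime, _lr_star, _lr_prime in loader:
            with torch.no_grad():
                lr = g1(hr_prime)                                  # S701: fourth image LR
            hr = g2(lr)                                            # fifth image HR
            third_value = (torch.mean((hr_prime - hr) ** 2)        # S702: pixel term
                           + w_feat * feature_loss(hr, hr_prime))  # feature term
            opt.zero_grad()
            third_value.backward()                                 # S703: adjust parameters
            opt.step()
```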
In step S602, the trained second neural network model is trained based on the generative adversarial network loss function GAN loss, so as to achieve fine-tuning of each parameter in the second neural network model.
In the embodiment of the invention, because GAN loss is highly abstract, it is difficult to obtain an ideal training effect if GAN loss alone is used as the loss function of a neural network model. Therefore, when GAN loss is used to adjust the parameters of the first or second neural network model, the parameters are first adjusted based on the pixel loss function and the feature loss function, and then fine-tuned based on GAN loss, so that a better training effect is achieved.
Optionally, referring to fig. 8, fig. 8 shows a specific implementation flow of a method for fine tuning parameters of a second neural network model according to an embodiment of the present invention, which is described in detail below:
In step S801, a fourth value of the fifth image HR and the first image HR' is calculated based on the generative adversarial network loss function GAN loss.
In step S802, respective parameters in the second neural network model are adjusted based on the fourth numerical value.
In the embodiment of the invention, the second neural network model is the network model that finally outputs the amplified image. After each parameter in the second neural network model has been adjusted using the loss values calculated from the fifth image HR and the first image HR', the second neural network model can output a high-quality high-resolution image that meets user expectations and is closer to a real image.
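The fine-tuning of steps S801 and S802 parallels that of the first network; a sketch reusing the gan_losses helper, with the same caveat that the hyperparameters are assumptions:

```python
import torch

def finetune_second_model(g2, g1, disc, loader, gan_losses, epochs=5, lr=1e-5):
    """Fine-tune the second network with GAN loss: the fourth value is computed
    by the discriminating network from the fifth image HR (fake) and the
    noise-free first image HR' (real), per steps S801-S802."""
    g1.eval()
    g_opt = torch.optim.Adam(g2.parameters(), lr=lr)
    d_opt = torch.optim.Adam(disc.parameters(), lr=lr)
    for _ in range(epochs):
        for hr_prime, _lr_star, _lr_prime in loader:
            with torch.no_grad():
                lr = g1(hr_prime)
            hr = g2(lr)
            d_loss, fourth_value = gan_losses(disc, real_imgs=hr_prime, fake_imgs=hr)
            g_opt.zero_grad(); fourth_value.backward(); g_opt.step()
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```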
In step S103, the image to be processed is input into the trained second neural network model, and the target image is output.
In the embodiment of the invention, after the second neural network model has been trained, that is, after each of its parameters has been adjusted or fine-tuned by the pixel loss function, the feature loss function, and the generative adversarial network loss function, the second neural network model can simulate the data distribution of real images, so that its algorithm covers real application scenarios such as denoising and deblurring. Once an image to be processed is input into the trained second neural network model, an image with better image quality can be output, suitable for image restoration or amplification in different practical applications.
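At inference time the whole pipeline collapses to a single forward pass through the trained second network; a minimal sketch:

```python
import torch

def process_image(model, image: torch.Tensor) -> torch.Tensor:
    """Step S103: input the image to be processed into the trained second neural
    network model and return the target image. `image` has shape (1, C, H, W)."""
    model.eval()
    with torch.no_grad():
        return model(image)
```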
In the embodiment of the invention, the first neural network model is trained based on the preset loss function, and the second neural network model is trained based on the preset loss function and the trained first neural network model; after the second neural network model has been trained, the image to be processed is input into it to output the target image. Because the second neural network model is trained with the preset loss function and the trained first neural network model, the image it outputs has better quality: image noise is removed, and the image after amplification is neither too smooth nor afflicted by unnatural edges.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes is determined by their functions and internal logic, and does not limit the implementation of the embodiments of the present invention in any way.
Corresponding to an image processing method described in the above embodiments, fig. 9 shows a schematic diagram of an image processing apparatus provided in an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown.
Referring to fig. 9, the apparatus includes:
a first neural network model training unit 91, configured to train the first neural network model based on a preset loss function;
A second neural network model training unit 92, configured to train the second neural network model based on the preset loss function and the trained first neural network model;
The image processing unit 93 is configured to input the image to be processed into the trained second neural network model, and output the target image.
Optionally, the apparatus further includes:
the training sample image acquisition unit is used for acquiring training sample images;
The image preprocessing unit is used for carrying out image preprocessing on the training sample images to obtain at least two training sample image copies of the training sample images, and the at least two training sample image copies form an image group.
Optionally, the training sample image copy is a first image HR', a second image LR*, or a third image LR', and the image preprocessing unit includes:
The first image preprocessing subunit is used for performing noise reduction processing on the training sample image HR* to obtain a corresponding first image HR';
The second image preprocessing subunit is used for performing image downsampling processing on the training sample image HR* to obtain a corresponding second image LR*;
And the third image preprocessing subunit is used for performing random clipping processing on the training sample image HR* to obtain a corresponding third image LR'.
Optionally, the preset loss function is a multi-stage loss function composed of a pixel loss function, a feature loss function, and a generative adversarial network loss function GAN loss, wherein:
The pixel loss function is:

$$\ell_{pixel}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(I_{x,y}^{ref}-G_{\theta_G}(I^{in})_{x,y}\right)^{2}$$

wherein, in the second neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the first image HR' and the fifth image HR, the fifth image HR being the high-resolution image output after the fourth image LR is amplified by the second neural network model; in the first neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the third image LR' and the fourth image LR, the fourth image LR being the low-resolution image output after the first image HR' is reduced by the first neural network model; $W$ and $H$ represent the width and height of the pixel extraction matrix, and $x$ and $y$ represent the position coordinates at the corresponding width and height;
The feature loss function is:

$$\ell_{feature}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(\Phi(I^{ref})_{x,y}-\Phi(G_{\theta_G}(I^{in}))_{x,y}\right)^{2}$$

wherein the correspondences of $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ are as above; $W$ and $H$ represent the width and height of the feature extraction matrix, $x$ and $y$ represent the position coordinates at the corresponding width and height, and $\Phi$ represents the neural network used to extract the content features of the image.
Optionally, the first neural network model training unit 91 includes:
The first parameter adjustment subunit is used for inputting the image group into the first neural network model for pre-training based on the pixel loss function and the feature loss function, so as to adjust each parameter in the first neural network model;
And the first parameter fine-tuning subunit is used for training the pre-trained first neural network model based on the generative adversarial network loss function GAN loss, so as to fine-tune each parameter in the first neural network model.
Optionally, the first parameter adjustment subunit is specifically configured to:
inputting the first image HR' into a first neural network model for processing to obtain a fourth image LR;
Calculating a first value of the fourth image LR and the second image LR* based on the pixel loss function and the feature loss function;
and adjusting each parameter in the first neural network model based on the first numerical value.
Optionally, the first parameter fine-tuning subunit is specifically configured to:
calculate a second value of the fourth image LR and the third image LR' based on the generative adversarial network loss function GAN loss;
and fine-tune each parameter in the first neural network model based on the second value.
Optionally, the second neural network model training unit 92 includes:
The second parameter adjustment subunit is used for inputting the fourth image LR output by the first neural network model into the second neural network model for training based on the pixel loss function and the feature loss function, so as to adjust each parameter in the second neural network model;
And the second parameter fine-tuning subunit is used for training the trained second neural network model based on the generative adversarial network loss function GAN loss, so as to fine-tune each parameter in the second neural network model.
Optionally, the second parameter adjustment subunit is specifically configured to:
input the fourth image LR output by the trained first neural network model into the second neural network model for processing to obtain a fifth image HR;
calculate a third value of the fifth image HR and the first image HR' based on the pixel loss function and the feature loss function;
and adjust each parameter in the second neural network model based on the third value.
Optionally, the second parameter fine-tuning subunit is specifically configured to:
calculate a fourth value of the fifth image HR and the first image HR' based on the generative adversarial network loss function GAN loss;
and fine-tune each parameter in the second neural network model based on the fourth value.
In the embodiment of the invention, the first neural network model is trained based on the preset loss function, and the second neural network model is trained based on the preset loss function and the trained first neural network model; after the second neural network model has been trained, the image to be processed is input into it to output the target image. Because the second neural network model is trained with the preset loss function and the trained first neural network model, the image it outputs has better quality: image noise is removed, and the image after amplification is neither too smooth nor afflicted by unnatural edges.
Fig. 10 is a schematic diagram of an intelligent terminal according to an embodiment of the present invention. As shown in fig. 10, the intelligent terminal 10 of this embodiment includes: a processor 100, a memory 101, and a computer program 102 stored in the memory 101 and executable on the processor 100. When executing the computer program 102, the processor 100 implements the steps in the image processing method embodiments described above, for example steps S101 to S103 shown in fig. 1; alternatively, the processor 100 performs the functions of the units in the embodiments described above, for example the functions of the modules 91 to 93 shown in fig. 9.
Illustratively, the computer program 102 may be divided into one or more units, which are stored in the memory 101 and executed by the processor 100 to carry out the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments describe the execution of the computer program 102 in the intelligent terminal 10. For example, the computer program 102 may be divided into a first neural network model training unit 91, a second neural network model training unit 92, and an image processing unit 93, each unit functioning as follows:
a first neural network model training unit 91, configured to train the first neural network model based on a preset loss function;
A second neural network model training unit 92, configured to train the second neural network model based on the preset loss function and the trained first neural network model;
The image processing unit 93 is configured to input the image to be processed into the trained second neural network model, and output the target image.
Alternatively, the computer program 102 may be divided into a training sample image acquisition unit and an image preprocessing unit, where the specific functions of the units are as follows:
the training sample image acquisition unit is used for acquiring training sample images;
The image preprocessing unit is used for carrying out image preprocessing on the training sample images to obtain at least two training sample image copies of the training sample images, and the at least two training sample image copies form an image group.
Optionally, the training sample image copy is a first image HR', a second image LR*, or a third image LR', and the image preprocessing unit includes:
The first image preprocessing subunit is used for performing noise reduction processing on the training sample image HR* to obtain a corresponding first image HR';
The second image preprocessing subunit is used for performing image downsampling processing on the training sample image HR* to obtain a corresponding second image LR*;
And the third image preprocessing subunit is used for performing random clipping processing on the training sample image HR* to obtain a corresponding third image LR'.
Optionally, the preset loss function is a multi-stage loss function composed of a pixel loss function, a feature loss function, and a generative adversarial network loss function GAN loss, wherein:
The pixel loss function is:

$$\ell_{pixel}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(I_{x,y}^{ref}-G_{\theta_G}(I^{in})_{x,y}\right)^{2}$$

wherein, in the second neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the first image HR' and the fifth image HR, the fifth image HR being the high-resolution image output after the fourth image LR is amplified by the second neural network model; in the first neural network model, $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ correspond respectively to the third image LR' and the fourth image LR, the fourth image LR being the low-resolution image output after the first image HR' is reduced by the first neural network model; $W$ and $H$ represent the width and height of the pixel extraction matrix, and $x$ and $y$ represent the position coordinates at the corresponding width and height;
The feature loss function is:

$$\ell_{feature}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(\Phi(I^{ref})_{x,y}-\Phi(G_{\theta_G}(I^{in}))_{x,y}\right)^{2}$$

wherein the correspondences of $I^{ref}$ and $G_{\theta_G}(I^{in})_{x,y}$ are as above; $W$ and $H$ represent the width and height of the feature extraction matrix, $x$ and $y$ represent the position coordinates at the corresponding width and height, and $\Phi$ represents the neural network used to extract the content features of the image.
Optionally, the first neural network model training unit 91 includes:
The first parameter adjustment subunit is used for inputting the image group into the first neural network model for pre-training based on the pixel loss function and the feature loss function, so as to adjust each parameter in the first neural network model;
And the first parameter fine-tuning subunit is used for training the pre-trained first neural network model based on the generative adversarial network loss function GAN loss, so as to fine-tune each parameter in the first neural network model.
Optionally, the first parameter adjustment subunit is specifically configured to:
inputting the first image HR' into a first neural network model for processing to obtain a fourth image LR;
Calculating a first value of the fourth image LR and the second image LR* based on the pixel loss function and the feature loss function;
and adjusting each parameter in the first neural network model based on the first numerical value.
Optionally, the first parameter fine-tuning subunit is specifically configured to:
calculate a second value of the fourth image LR and the third image LR' based on the generative adversarial network loss function GAN loss;
and fine-tune each parameter in the first neural network model based on the second value.
Optionally, the second neural network model training unit 92 includes:
The second parameter adjustment subunit is used for inputting the fourth image LR output by the first neural network model into the second neural network model for training based on the pixel loss function and the feature loss function, so as to adjust each parameter in the second neural network model;
And the second parameter fine-tuning subunit is used for training the trained second neural network model based on the generative adversarial network loss function GAN loss, so as to fine-tune each parameter in the second neural network model.
Optionally, the second parameter adjustment subunit is specifically configured to:
input the fourth image LR output by the trained first neural network model into the second neural network model for processing to obtain a fifth image HR;
calculate a third value of the fifth image HR and the first image HR' based on the pixel loss function and the feature loss function;
and adjust each parameter in the second neural network model based on the third value.
Optionally, the second parameter fine-tuning subunit is specifically configured to:
calculate a fourth value of the fifth image HR and the first image HR' based on the generative adversarial network loss function GAN loss;
and fine-tune each parameter in the second neural network model based on the fourth value.
The intelligent terminal 10 may be a desktop computer, a notebook computer, a server, a mainframe computer, or the like. The intelligent terminal 10 may include, but is not limited to, the processor 100 and the memory 101. It will be appreciated by those skilled in the art that fig. 10 is merely an example of the intelligent terminal 10 and does not limit it; the terminal may include more or fewer components than shown, combine certain components, or use different components, and may for example further include input/output devices, network access devices, buses, etc.
The processor 100 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the intelligent terminal 10, for example a hard disk or memory of the intelligent terminal 10. The memory 101 may also be an external storage device of the intelligent terminal 10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the intelligent terminal 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the intelligent terminal 10. The memory 101 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the system is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system/intelligent terminal and method may be implemented in other manners. For example, the system/intelligent terminal embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, systems or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the method of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or system capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium can be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdictions; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (13)

1. An image processing method, the method comprising:
training a first neural network model based on a preset loss function, wherein the first neural network model is a neural network model for performing reduction processing on an original high-resolution image, and the first neural network model outputs a low-resolution image with real noise;
training a second neural network model based on a preset loss function and the trained first neural network model, wherein the second neural network model is a neural network model for amplifying a low-resolution image, so that it better simulates the noise environment of a real image and outputs a high-resolution image closer to the real image;
inputting the image to be processed into the trained second neural network model, and outputting a target image;
wherein the preset loss function is constructed based on one or more of a pixel loss function, a feature loss function, and a generative adversarial network loss function GAN loss.
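For illustration only, the following is a minimal sketch, assuming a PyTorch implementation, of the inference step of claim 1: the image to be processed is passed through the trained second (amplifying) neural network model to produce the target image. The module name LowToHighNet, the layer sizes, and the 2x scale factor are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of claim 1's inference step; architecture is assumed.
import torch
import torch.nn as nn

class LowToHighNet(nn.Module):
    """Stand-in for the trained second neural network model (low-res -> high-res)."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # produce scale*scale sub-pixel channels, then rearrange them
            nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

model = LowToHighNet(scale=2)   # in practice, load trained weights here
model.eval()
with torch.no_grad():
    image_to_process = torch.rand(1, 3, 64, 64)  # dummy low-res input
    target_image = model(image_to_process)       # amplified output
print(target_image.shape)  # torch.Size([1, 3, 128, 128])
```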
2. The method of claim 1, further comprising, prior to the step of training the first neural network model based on a preset loss function:
acquiring a training sample image I;
performing image preprocessing on the training sample image I to obtain at least two training sample image copies of the training sample image I, the at least two training sample image copies constituting an image group.
3. The method of claim 2, wherein the training sample image copy is a first image I1, a second image I2, or a third image I3, and the step of performing image preprocessing on the training sample image I to obtain at least two training sample image copies of the training sample image I comprises:
performing noise reduction processing on the training sample image I to obtain a corresponding first image I1;
performing image downsampling processing on the training sample image I to obtain a corresponding second image I2;
performing random cropping processing on the training sample image I to obtain a corresponding third image I3.
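A minimal sketch, assuming PyTorch, of the three preprocessing operations named in claim 3, producing the first image I1 (noise reduction), the second image I2 (downsampling), and the third image I3 (random cropping) from a training sample image I. The concrete operators here — a 3x3 box filter for noise reduction, 2x average pooling for downsampling, and a 32x32 crop — are assumptions; the claim specifies only the operations, not their implementations.

```python
# Sketch of claim 3's preprocessing; the concrete operators are assumed.
import torch
import torch.nn.functional as F

def preprocess(I: torch.Tensor, crop: int = 32, scale: int = 2):
    # I: (1, 3, H, W) training sample image with values in [0, 1]
    # first image I1: noise-reduced copy (per-channel 3x3 box filter)
    kernel = torch.full((3, 1, 3, 3), 1.0 / 9.0)
    I1 = F.conv2d(F.pad(I, (1, 1, 1, 1), mode="reflect"), kernel, groups=3)
    # second image I2: downsampled copy
    I2 = F.avg_pool2d(I, kernel_size=scale)
    # third image I3: randomly cropped copy
    _, _, H, W = I.shape
    top = torch.randint(0, H - crop + 1, (1,)).item()
    left = torch.randint(0, W - crop + 1, (1,)).item()
    I3 = I[:, :, top:top + crop, left:left + crop]
    return I1, I2, I3

I = torch.rand(1, 3, 64, 64)
I1, I2, I3 = preprocess(I)  # the image group of claim 2
```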
4. The method of claim 3, wherein,
the pixel loss function is:

$$L_{pixel} = \frac{1}{W_p H_p} \sum_{i=1}^{W_p} \sum_{j=1}^{H_p} \left( X_{i,j} - Y_{i,j} \right)^2$$

wherein X and Y correspond, in the second neural network model, to the first image I1 and the fifth image I5 respectively, the fifth image I5 being the high-resolution image output after the fourth image I4 is amplified via the second neural network model; X and Y correspond, in the first neural network model, to the third image I3 and the fourth image I4 respectively, the fourth image I4 being the low-resolution image output after the first image I1 is reduced via the first neural network model; $W_p$ and $H_p$ represent the width and height of the pixel extraction matrix, respectively, and $i$ and $j$ represent the position coordinates along the width and height, respectively;

the feature loss function is:

$$L_{feature} = \frac{1}{W_f H_f} \sum_{i=1}^{W_f} \sum_{j=1}^{H_f} \left( \phi(X)_{i,j} - \phi(Y)_{i,j} \right)^2$$

wherein X and Y are as defined for the pixel loss function; $W_f$ and $H_f$ represent the width and height of the feature extraction matrix, respectively; $i$ and $j$ represent the position coordinates along the width and height, respectively; and $\phi$ is a neural network used to extract the content features of the image.
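Under the reconstruction above, both losses average an element-wise difference over a W x H extraction matrix — over raw pixels for the pixel loss, over extracted features for the feature loss. A sketch follows, assuming the squared-difference form (the original formula images are not recoverable) and using a small random convolutional stack as a stand-in for the feature extraction network phi; in practice something like a pretrained VGG slice would be used.

```python
# Sketch of claim 4's two losses; the squared form and phi are assumptions.
import torch
import torch.nn as nn

# Stand-in for phi, the network that extracts content features.
phi = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
)

def pixel_loss(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    # mean over the pixel extraction matrix of (X_ij - Y_ij)^2
    return ((X - Y) ** 2).mean()

def feature_loss(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    # same form, computed on phi(X) and phi(Y)
    return ((phi(X) - phi(Y)) ** 2).mean()

X, Y = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
total = pixel_loss(X, Y) + feature_loss(X, Y)
```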
5. The method of claim 4, wherein training the first neural network model based on the preset loss function comprises:
inputting the image group into the first neural network model for pre-training based on the pixel loss function and the feature loss function, so as to adjust each parameter in the first neural network model;
training the pre-trained first neural network model based on the generative adversarial network loss function GAN loss, so as to adjust each parameter in the first neural network model.
6. The method of claim 5, wherein the step of inputting the image group into the first neural network model for pre-training based on the pixel loss function and the feature loss function comprises:
inputting the first image I1 into the first neural network model for processing to obtain a fourth image I4;
calculating a first value between the fourth image I4 and the second image I2 based on the pixel loss function and the feature loss function;
and adjusting each parameter in the first neural network model based on the first value.
7. The method of claim 6, wherein training the pre-trained first neural network model based on the generative adversarial network loss function GAN loss comprises:
calculating a second value between the fourth image I4 and the third image I3 based on the generative adversarial network loss function GAN loss;
and adjusting each parameter in the first neural network model based on the second value.
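A sketch of the GAN-loss stage of claims 5 and 7, assuming PyTorch: a discriminator D scores the fourth image I4 (the first model's reduced output) against the third image I3 (a real cropped patch), and the resulting second value updates the first model's parameters. The discriminator architecture, the binary-cross-entropy GAN formulation, and the dummy reducer are all assumptions; the sizes are chosen so that I4 and I3 have matching shapes.

```python
# Sketch of claim 7's GAN-loss step; discriminator and loss form are assumed.
import torch
import torch.nn as nn

# Stand-in discriminator over low-resolution patches.
D = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(32, 1, kernel_size=3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),  # one real/fake logit per image
)
bce = nn.BCEWithLogitsLoss()

def gan_step(first_model, opt_g, opt_d, I1, I3):
    I4 = first_model(I1)  # reduced low-resolution output
    # discriminator update: I3 is "real", I4 is "fake"
    d_loss = bce(D(I3), torch.ones(I3.size(0), 1)) + \
             bce(D(I4.detach()), torch.zeros(I4.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: the "second value" between I4 and I3
    second_value = bce(D(I4), torch.ones(I4.size(0), 1))
    opt_g.zero_grad(); second_value.backward(); opt_g.step()
    return second_value.item()

first_model = nn.Conv2d(3, 3, kernel_size=3, stride=2, padding=1)  # dummy reducer
opt_g = torch.optim.Adam(first_model.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
I1, I3 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 32, 32)
gan_step(first_model, opt_g, opt_d, I1, I3)
```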
8. The method of claim 4, wherein training the second neural network model based on the preset loss function and the trained first neural network model comprises:
inputting, based on the pixel loss function and the feature loss function, the image group and a fourth image I4 output by the trained first neural network model into the second neural network model for training, so as to adjust each parameter in the second neural network model;
training the trained second neural network model based on the generative adversarial network loss function GAN loss, so as to adjust each parameter in the second neural network model.
9. The method of claim 8, wherein the step of inputting, based on the pixel loss function and the feature loss function, the image group and the fourth image I4 output by the trained first neural network model into the second neural network model for training comprises:
inputting the fourth image I4 output by the trained first neural network model into the second neural network model for processing to obtain a fifth image I5;
calculating a third value between the fifth image I5 and the first image I1 based on the pixel loss function and the feature loss function;
and adjusting each parameter in the second neural network model based on the third value.
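A sketch of the claim 9 step, reusing pixel_loss and feature_loss from the sketch after claim 4 and LowToHighNet from the sketch after claim 1: the trained first model's output I4 is amplified to I5, the third value against I1 is computed, and the second model's parameters are adjusted. Treating the first model as fixed at this stage and weighting the two loss terms equally are both assumptions.

```python
# Sketch of claim 9's pre-training step for the second model; the frozen
# first model and the equal loss weighting are assumptions.
import torch

def second_model_step(second_model, opt, first_model, I1):
    with torch.no_grad():
        I4 = first_model(I1)    # trained first model, held fixed here
    I5 = second_model(I4)       # amplified high-resolution output
    # the "third value" between the fifth image I5 and the first image I1
    third_value = pixel_loss(I5, I1) + feature_loss(I5, I1)
    opt.zero_grad(); third_value.backward(); opt.step()
    return third_value.item()

second_model = LowToHighNet(scale=2)  # from the sketch after claim 1
opt = torch.optim.Adam(second_model.parameters(), lr=1e-4)
I1 = torch.rand(4, 3, 64, 64)
second_model_step(second_model, opt, first_model, I1)
```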
10. The method of claim 8, wherein the step of training the trained second neural network model based on the generative adversarial network loss function GAN loss comprises:
calculating a fourth value between the fifth image I5 and the first image I1 based on the generative adversarial network loss function GAN loss;
and adjusting each parameter in the second neural network model based on the fourth value.
11. An image processing apparatus, characterized in that the apparatus comprises:
The first neural network model training unit is used for training a first neural network model based on a preset loss function, wherein the first neural network model is a neural network model for performing reduction processing on an original high-resolution image, and the first neural network model outputs a low-resolution image with real noise;
the second neural network model training unit is used for training the second neural network model based on a preset loss function and the trained first neural network model, wherein the second neural network model is a neural network model for amplifying a low-resolution image, so that it better simulates the noise environment of a real image and outputs a high-resolution image closer to the real image;
The image processing unit is used for inputting the image to be processed into the trained second neural network model and outputting a target image;
wherein the preset loss function is constructed based on one or more of a pixel loss function, a feature loss function, and a generative adversarial network loss function GAN loss.
12. A smart terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the image processing method according to any one of claims 1 to 10.
13. A computer-readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the image processing method according to any one of claims 1 to 10.
CN201811646524.2A 2018-12-29 2018-12-29 Image processing method and device and intelligent terminal Active CN111383187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811646524.2A CN111383187B (en) 2018-12-29 2018-12-29 Image processing method and device and intelligent terminal

Publications (2)

Publication Number Publication Date
CN111383187A (en) 2020-07-07
CN111383187B (en) 2024-04-26

Family

ID=71218347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811646524.2A Active CN111383187B (en) 2018-12-29 2018-12-29 Image processing method and device and intelligent terminal

Country Status (1)

Country Link
CN (1) CN111383187B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222815A (en) * 2021-04-26 2021-08-06 北京奇艺世纪科技有限公司 Image adjusting method and device, electronic equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778867A (en) * 2016-12-15 2017-05-31 北京旷视科技有限公司 Object detection method and device, neural network training method and device
CN108280811A (en) * 2018-01-23 2018-07-13 哈尔滨工业大学深圳研究生院 A kind of image de-noising method and system based on neural network
CN108376387A (en) * 2018-01-04 2018-08-07 复旦大学 Image deblurring method based on polymerization expansion convolutional network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11024009B2 (en) * 2016-09-15 2021-06-01 Twitter, Inc. Super resolution using a generative adversarial network

Also Published As

Publication number Publication date
CN111383187A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN111275626B (en) Video deblurring method, device and equipment based on ambiguity
US20210248715A1 (en) Method and system for end-to-end image processing
CN108122197B (en) Image super-resolution reconstruction method based on deep learning
US10360664B2 (en) Image processing apparatus and method using machine learning
CN107403421B (en) Image defogging method, storage medium and terminal equipment
JP6961139B2 (en) An image processing system for reducing an image using a perceptual reduction method
US10198801B2 (en) Image enhancement using self-examples and external examples
CN110675336A (en) Low-illumination image enhancement method and device
CN110533607B (en) Image processing method and device based on deep learning and electronic equipment
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
US11663707B2 (en) Method and system for image enhancement
CN108600783B (en) Frame rate adjusting method and device and terminal equipment
CN111784570A (en) Video image super-resolution reconstruction method and device
CN111951172A (en) Image optimization method, device, equipment and storage medium
CN110728626A (en) Image deblurring method and apparatus and training thereof
Guan et al. Srdgan: learning the noise prior for super resolution with dual generative adversarial networks
CN112801904A (en) Hybrid degraded image enhancement method based on convolutional neural network
CN113592776A (en) Image processing method and device, electronic device and storage medium
CN114511449A (en) Image enhancement method, device and computer readable storage medium
Ren et al. Multiscale structure guided diffusion for image deblurring
CN111383187B (en) Image processing method and device and intelligent terminal
CN110766153A (en) Neural network model training method and device and terminal equipment
CN111383172B (en) Training method and device of neural network model and intelligent terminal
CN111583124A (en) Method, device, system and storage medium for deblurring images
JP5562812B2 (en) Transmission / reception switching circuit, radio apparatus, and transmission / reception switching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China
Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province
Applicant after: TCL Technology Group Co.,Ltd.
Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District
Applicant before: TCL Corp.
Country or region before: China

GR01 Patent grant