CN111489289B - Image processing method, image processing device and terminal equipment


Info

Publication number
CN111489289B
Authority
CN
China
Prior art keywords
model
image
current
generation
loss function
Prior art date
Legal status
Active
Application number
CN201910260259.2A
Other languages
Chinese (zh)
Other versions
CN111489289A (en)
Inventor
史方
王标
黄梓琪
樊强
Current Assignee
Changxin Intelligent Control Network Technology Co ltd
Original Assignee
Changxin Intelligent Control Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Changxin Intelligent Control Network Technology Co ltd
Priority to CN201910260259.2A
Publication of CN111489289A
Application granted
Publication of CN111489289B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, an image processing device and terminal equipment, wherein the method comprises the following steps: acquiring an image to be processed to be reconstructed in super resolution; performing super-resolution reconstruction on the image to be processed by using the trained first model; wherein the training process of the first model comprises: acquiring respective first sample images from the real world; for each first sample image, inputting the first sample image into a trained second model to obtain an output sample image which is output by the second model and corresponds to the first sample image, wherein the second model is used for reducing the resolution of the input image, and the image output by the second model is an image which simulates the real world; and training the first model based on each first sample image and each output sample image to obtain a trained first model. The application can solve the technical problem that the existing image super-resolution reconstruction method has low accuracy in information recovery.

Description

Image processing method, image processing device and terminal equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer readable storage medium.
Background
Image super-resolution reconstruction is a technique for recovering a low-resolution image into a high-resolution image carrying richer information, and it is widely applied in areas such as security monitoring and lesion analysis.
However, when current image super-resolution reconstruction methods are used to reconstruct low-resolution images from the real world (such as images acquired by a low-definition camera), the accuracy of information recovery is not high, which is unfavorable for subsequent security monitoring, lesion analysis, and the like.
Disclosure of Invention
In view of the above, the present application provides an image processing method, an image processing apparatus, a terminal device, and a computer readable storage medium, which can, to a certain extent, solve the technical problem that existing image super-resolution reconstruction methods have low accuracy in information recovery.
The first aspect of the present application provides an image processing method, including:
acquiring an image to be processed to be reconstructed in super resolution;
performing super-resolution reconstruction on the image to be processed by using a trained first model to obtain a super-resolution reconstructed image, wherein the first model is a model for super-resolution reconstruction;
the training process of the first model comprises the following steps:
Acquiring respective first sample images from the real world;
for each first sample image, inputting the first sample image into a trained second model to obtain an output sample image which is output by the second model and corresponds to the first sample image, wherein the second model is used for reducing the resolution of the image input into the second model, and the image output by the second model is an image simulating an image from the real world;
and training the initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model.
A second aspect of the present application provides an image processing apparatus including:
the image acquisition module to be processed is used for acquiring an image to be processed to be reconstructed in super resolution;
the super-resolution reconstruction module is used for performing super-resolution reconstruction on the image to be processed by using a first trained model to obtain a super-resolution reconstructed image, wherein the first model is a model for super-resolution reconstruction;
wherein the first model is trained by a first model training module, the first model training module including:
A first sample acquisition unit configured to acquire respective first sample images from the real world;
a low resolution sample generating unit, configured to input, for each first sample image, the first sample image to a trained second model, and obtain an output sample image output by the second model, where the output sample image corresponds to the first sample image, the second model is used to reduce resolution of an image input to the second model, and the image output by the second model is an image that simulates a real world;
the first model training unit is used for training the initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model.
A third aspect of the present application provides a terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, said processor implementing the steps of the method of the first aspect as described above when said computer program is executed.
A fourth aspect of the application provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
A fifth aspect of the application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect as described above.
From the above, the present application provides an image processing method: and performing super-resolution reconstruction on the image to be processed to be subjected to super-resolution reconstruction by using the trained first model to obtain a super-resolution reconstructed image, wherein the first model is a model for super-resolution reconstruction. The training process of the first model in the application comprises the following steps: first, acquiring respective first sample images from the real world; secondly, for each first sample image, reducing the resolution of the first sample image by using a trained second model to obtain an output sample image corresponding to the first sample image, wherein the image output by the second model is an image simulating the real world, and thus, each output sample image output by the second model is a low resolution image simulating the real world; then, based on each first sample image and each output sample image, an initial first model is trained, resulting in a trained first model.
Currently, the training process for a model for super-resolution reconstruction (for convenience of the following description, this model is called X) is generally as follows: firstly, acquiring high-resolution sample images from the real world, and downsampling each high-resolution sample image to obtain the corresponding low-resolution sample image; then training the model X by using the high-resolution sample images and the low-resolution sample images to obtain a trained model X for super-resolution reconstruction. Since in this conventional training method the low-resolution sample images used to train the model X are obtained by downsampling, that is, they are generated from real-world high-resolution sample images and are not themselves from the real world, the trained model X obtained in this way recovers information from real-world low-resolution images with low accuracy. In the present application, by contrast, the low-resolution sample images used for training are generated by the trained second model, which simulates the characteristics of low-resolution images from the real world; compared with the conventional approach, the application can therefore, to a certain extent, solve the technical problem that existing image super-resolution reconstruction methods have low accuracy in information recovery.
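By way of illustration only (this sketch is not part of the patent text, and all names in it, including second_model, are hypothetical), the following minimal Python/PyTorch example contrasts the conventional construction of low-resolution/high-resolution training pairs by downsampling with the construction described above, where a trained second model produces the low-resolution sample:

```python
import torch
import torch.nn.functional as F

def conventional_pair(hr: torch.Tensor, scale: int = 4):
    # Conventional training: the low-resolution sample is synthesized from the
    # real-world high-resolution sample by plain downsampling, so it is not
    # itself an image from the real world.
    lr = F.interpolate(hr, scale_factor=1.0 / scale,
                       mode="bicubic", align_corners=False)
    return lr, hr

def learned_pair(hr: torch.Tensor, second_model: torch.nn.Module):
    # Approach of the application: the low-resolution sample is produced by a
    # trained second model that simulates the characteristics of
    # low-resolution images from the real world.
    with torch.no_grad():  # the second model is already trained and frozen
        lr = second_model(hr)
    return lr, hr

# Usage (hr: a batch of real-world high-resolution images, NCHW, float):
# lr_a, hr_a = conventional_pair(hr)
# lr_b, hr_b = learned_pair(hr, trained_second_model)
```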
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an implementation of an image processing method according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a training process of a first model according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of a training method of a second model according to a first embodiment of the present application;
FIG. 4 is a specific implementation of step S203 provided in the first embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiment of the application can be applied to a terminal device, and the terminal device includes, but is not limited to: smart phones, tablet computers, smart wearable devices, desktop computers, etc.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
In order to illustrate the above technical solution of the present application, the following description will be made by specific examples.
Example 1
Referring to fig. 1, an image processing method according to a first embodiment of the present application is described below, and includes:
in step S101, a to-be-processed image to be reconstructed with super resolution is acquired;
in the embodiment of the present application, the image to be processed may be a photograph taken by a user through a camera, for example, a photograph of a building, a landscape, a person, or food taken by the user with a camera APP in a smart phone; or an image received by the user through another application program, for example, an image sent by a WeChat contact and received by the user in WeChat; or an image downloaded by the user from the Internet, such as an image downloaded by the user in a browser over a carrier network; or a frame of a video, such as a frame from an animation or a television show the user is watching. The source of the image to be processed is not limited here.
In step S102, performing super-resolution reconstruction on the image to be processed by using a trained first model to obtain a super-resolution reconstructed image, where the first model is a model for super-resolution reconstruction;
before the above step S102 is performed, a first model for super-resolution reconstruction needs to be trained in advance. For example, if steps S101 to S102 are performed by the smart phone X, the trained first model may be stored in the smart phone X before the smart phone X leaves the factory.
In the first embodiment of the present application, if the image to be processed in step S101 is an image containing a face, the method further includes, after step S102, the following step: performing face recognition on the super-resolution reconstructed image obtained in step S102 to determine identity information of the face contained in the image to be processed. (In order to further improve the accuracy of this subsequent face recognition, the first sample images and the second sample images described later are preferably face images; each face image may be partially occluded, but preferably shows the face as completely as possible.)
The training method of the first model according to the embodiment of the present application is shown in fig. 2, and includes the following steps:
In step S201, respective first sample images from the real world are acquired;
in an embodiment of the present application, the first sample image is a high resolution image from the real world. The high-resolution image from the real world may be a high-resolution image directly acquired by a camera; or an image obtained by compressing and then decompressing a high-resolution image acquired by the camera; or an image obtained by adding some noise (such as salt-and-pepper noise) to a high-resolution image acquired by the camera.
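As a purely illustrative sketch of the sample-preparation options just listed (the JPEG quality setting and the noise ratio are assumed values, and OpenCV is used only as one possible tool), the following shows a compress-decompress round trip and the addition of salt-and-pepper noise:

```python
import cv2
import numpy as np

def jpeg_roundtrip(img: np.ndarray, quality: int = 75) -> np.ndarray:
    # Compress then decompress the captured high-resolution image
    # (one of the listed ways to obtain a first sample image).
    ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    assert ok, "JPEG encoding failed"
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def add_salt_pepper(img: np.ndarray, ratio: float = 0.01) -> np.ndarray:
    # Add salt-and-pepper noise (assumed ratio) to the captured image.
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < ratio / 2] = 0          # pepper: black pixels
    noisy[mask > 1 - ratio / 2] = 255    # salt: white pixels
    return noisy
```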
In step S202, for each first sample image, inputting the first sample image to a trained second model, to obtain an output sample image corresponding to the first sample image output by the second model, where the second model is used to reduce the resolution of the image input to the second model, and the image output by the second model is an image simulating an image from the real world;
in an embodiment of the present application, the output sample image is a low resolution image simulating a real-world image. Each first sample image corresponds to an output sample image; that is, if the first sample image A corresponds to an output sample image B, the image content of the first sample image A is approximately the same as that of the output sample image B, but the resolution of the first sample image A is greater than that of the output sample image B.
The training method of the second model described in this step S202 is shown in the following fig. 3.
In step S203, training an initial first model based on each first sample image and each output sample image output by the trained second model, to obtain a trained first model;
according to the embodiment of the application, the initial first model is trained by utilizing each paired image pair (each image pair comprises a first sample image and an output sample image corresponding to the first sample image), so that a trained first model capable of realizing super-resolution reconstruction is obtained. The specific implementation process of this step S203 may be referred to later fig. 4.
One method of training the second model is discussed below with reference to fig. 3, but those skilled in the art will appreciate that the method of training the second model is not limited to fig. 3. In the example shown in fig. 3, the second model is a second generation model in a second GAN model, where the second GAN model includes a second generation model and a second discriminant model, and as shown in fig. 3, the training method of the second model may include the following steps:
in step S301, respective second sample images from the real world are acquired, the resolution of any one of the first sample images being greater than the resolution of any one of the second sample images;
In the example shown in fig. 3, the second model is trained with respective first sample images from the real world and respective second sample images from the real world, wherein the resolution of the first sample images is greater than the resolution of the second sample images, e.g. 500 x 1000 for each first sample image and 20 x 20 for each second sample image, i.e. the second sample images are low resolution images from the real world. The low-resolution image from the real world can be a low-resolution image directly acquired by a camera; or, the image can be an image obtained by compressing and decompressing the low-resolution image acquired by the camera; alternatively, an image obtained by adding some noise (such as salt and pepper noise) to the low-resolution image acquired by the camera may be used.
Further, the second sample images need not correspond one-to-one with the first sample images; that is, assuming that there are two first sample images, namely image A and image B, whose image contents are the face of Xiao Zhao and the face of Xiao Sun respectively, there may nevertheless be three second sample images, namely image C, image D and image E, whose image contents may be the face of Xiao Li, the face of Xiao Zhou and the face of Xiao Wu respectively.
In step S302, for each first sample image, inputting the first sample image to a second generation model in the current second GAN model, to obtain a second generation image with reduced resolution output by the current second generation model;
in step S303, training a current second discrimination model based on each second generated image output by the current second generated model and each second sample image from the real world to obtain a second discrimination model after the current training, wherein the second discrimination model is used for judging whether the image input to the second discrimination model is an image from the real world;
the above steps S302 to S303 can be implemented: and keeping the current second generation model unchanged, and continuously training the second judgment model by using the current second generation model to obtain a second judgment model after the training.
In step S304, according to the discrimination result of the second discrimination model after the current training on each second generated image output by the current second generated model, determining a first loss function value of the current second generated model, where the first loss function value is used to describe the simulation similarity of the current second generated model to the image from the real world, and the first loss function value is inversely related to the simulation similarity of the current second generated model to the image from the real world;
In step S305, a current loss function value of the second generation model is determined based on the first loss function value;
in step S306, continuously adjusting each parameter of the current second generation model until the loss function value of the current second generation model is smaller than the second preset loss value, to obtain a second generation model after the current training;
the above steps S304 to S306 can be implemented: and continuously adjusting each parameter of the current second generation model by using the second judgment model after the current training to obtain the second generation model after the current training, so that the second generation model after the current training can simulate an image from the real world.
The specific determination manner of "determine the first loss function value of the current second generation model" in the above step S304 may be:
based on a Hinge Loss function (Hinge Loss), determining a first loss function value of the current second generation model according to the discrimination results of the second discrimination model after the current training on each second generated image output by the current second generation model, where the Hinge Loss is specifically:

$$L_1 = \mathbb{E}\big[\max(0,\, 1 - d_2(x))\big] + \mathbb{E}\big[\max(0,\, 1 + d_2(G_2))\big]$$

where $d_2(G_2)$ is the discrimination result of the second discrimination model after the current training on a second generated image, and $d_2(x)$ is the discrimination result of the second discrimination model after the current training on a second sample image. The Hinge Loss is a loss function commonly used in neural network training, so the way the first loss function value of the current second generation model is determined from the formula given above is not repeated here.
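Rendered as code, this hinge term may be sketched as follows (a minimal illustration under the assumption that d_real and d_fake are batches of scalar scores produced by the second discrimination model for second sample images and second generated images, respectively):

```python
import torch

def hinge_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Hinge loss over discriminator scores: real-world samples are pushed
    # above +1 and generated samples below -1, averaged over the batch.
    real_term = torch.clamp(1.0 - d_real, min=0.0).mean()
    fake_term = torch.clamp(1.0 + d_fake, min=0.0).mean()
    return real_term + fake_term
```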
The step S305 may specifically be: the first loss function value is directly determined as the current loss function value of the second generation model.
In step S307, it is determined whether the number of times of returning to step S302 reaches the second preset number of times, if yes, step S308 is executed, and if not, step S302 is executed again;
in step S308, the current second generation model is used as the trained second model;
through this step S307, loop iterative training of the second generation model and the second discrimination model in the second GAN model may be implemented, so that the finally obtained second model may better simulate a low resolution image from the real world.
In addition, in the process of training the second model shown in fig. 3, the following step may further be included between step S303 and step S305: determining a second loss function value of the current second generation model according to each second generated image and the second sample images, wherein the second loss function value is used to describe the degree to which the current second generation model has learned the second sample images from the real world, and the second loss function value is inversely related to that learning degree. Accordingly, step S305 specifically includes: determining a current loss function value of the second generation model based on the first loss function value and the second loss function value.
Wherein the determining the second loss function value of the current second generation model according to each second generation image and the second sample image may include:
for each second generated image, the mean square error of the second generated image and one second sample image (any one may be used) is calculated, and the mean square error calculation formula is as follows:

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(G_2(i,j) - S_2(i,j)\big)^2$$

where $G_2(i,j)$ is the pixel value of the pixel point at position $(i,j)$ in the second generated image, $S_2(i,j)$ is the pixel value of the pixel point at position $(i,j)$ in the second sample image, and the image size of both the second generated image and the second sample image is M×N;
and determining a second loss function value of the current second generation model according to the mean square error corresponding to each second generation image (the second loss function value can be an average value of the mean square error corresponding to each second generation image).
That is, the degree of learning of the low-resolution sample image from the real world by the current second generation model can be evaluated by calculating the mean square error of the second generation image and a certain second sample image.
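A corresponding sketch of the mean-square-error term (assuming the two images are supplied as equally sized floating-point tensors):

```python
import torch

def mse(generated: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
    # Pixel-wise mean square error between a second generated image G2 and a
    # second sample image S2, both of size M x N; the second loss function
    # value can then be taken as the average over all second generated images.
    assert generated.shape == sample.shape
    return ((generated - sample) ** 2).mean()
```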
The "determining a current loss function value of the second generation model based on the first loss function value and the second loss function value" includes:
Determining a current loss function value of the second generation model based on the first loss function value and the second loss function value according to a loss function calculation formula, wherein the loss function calculation formula is as follows:

$$L = \lambda_0 \times L_1 + \lambda_1 \times L_2$$

where $\lambda_0$ and $\lambda_1$ are hyper-parameters greater than 0, $L_1$ is the first loss function value, and $L_2$ is the second loss function value; in general, $\lambda_0 \times L_1 > \lambda_1 \times L_2$.
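Combining the two terms, one parameter update of the second generation model could look as sketched below. This is an assumption-laden illustration rather than the patent's implementation: gen2, disc2 and opt_gen2 are hypothetical PyTorch modules/optimizers, lr_batch is assumed to be a batch of second sample images of matching size, and the adversarial term uses the common generator-side form of the hinge loss:

```python
import torch

def second_generator_step(gen2, disc2, opt_gen2,
                          hr_batch, lr_batch,
                          lam0: float = 1.0, lam1: float = 10.0):
    # One parameter update of the second generation model under
    # L = lam0 * L1 (adversarial term) + lam1 * L2 (mean-square-error term).
    fake_lr = gen2(hr_batch)                 # second generated images
    l1 = -disc2(fake_lr).mean()              # generator-side hinge term (assumed form)
    l2 = ((fake_lr - lr_batch) ** 2).mean()  # learning of real-world LR statistics
    loss = lam0 * l1 + lam1 * l2
    opt_gen2.zero_grad()
    loss.backward()
    opt_gen2.step()
    return loss.item()
```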
One specific implementation of step S203 in fig. 2 is described below using fig. 4. In fig. 4, the first model according to the present application is a first generation model in a first GAN model, where the first GAN includes the first generation model and a first discriminant model. As shown in fig. 4, this step S203 may include:
in step S401, for each output sample image, inputting the output sample image to a first generation model in the current first GAN model, to obtain a first generation image with improved resolution output by the current first generation model;
in step S402, training a current first discrimination model based on each first generated image output by the current first generated model and each first sample image from the real world to obtain a first discrimination model after the current training, wherein the first discrimination model is used for judging whether an image input to the first discrimination model is an image from the real world;
In step S403, determining a first loss function value of the current first generation model according to each first generation image and each first sample image, where the first loss function value is used to describe a loss degree of the current first generation model on the image content in the super-resolution reconstruction process, and the first loss function value is positively related to the loss degree;
in step S404, determining a second loss function value of the current first generation model according to the discrimination result of the first discrimination model after the current training on each first generation image output by the current first generation model, where the second loss function value is used to describe the simulation similarity degree of the current first generation model on the image from the real world, and the second loss function value is inversely related to the simulation similarity degree;
in step S405, determining a current loss function value of the first generation model based on the current first loss function value of the first generation model and the current second loss function value of the first generation model;
in step S406, continuously adjusting each parameter of the current first generation model until the loss function value of the current first generation model is smaller than the first preset loss value, to obtain a first generation model after the current training;
In step S407, it is determined whether the number of times of returning to step S401 reaches the first preset number of times, if yes, step S408 is executed, and if no, step S401 is executed again;
in step S408, the current first generation model is used as the trained first model.
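For completeness, one round of the loop of fig. 4 may be sketched as follows (again purely illustrative, with hypothetical module names: gen1 performs the resolution-raising generation, disc1 the real-versus-generated discrimination, and the default weights follow the convention given later that the adversarial term is weighted above the content term):

```python
import torch

def first_gan_round(gen1, disc1, opt_g, opt_d,
                    lr_batch, hr_batch,
                    lam2: float = 1.0, lam3: float = 10.0):
    # Steps S401-S402: train the first discrimination model on first
    # generated images versus real-world first sample images.
    with torch.no_grad():
        fake_hr = gen1(lr_batch)             # first generated images
    d_loss = (torch.clamp(1.0 - disc1(hr_batch), min=0.0).mean()
              + torch.clamp(1.0 + disc1(fake_hr), min=0.0).mean())
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Steps S403-S406: update the first generation model under
    # L = lam2 * L1' (content MSE term) + lam3 * L2' (adversarial term).
    fake_hr = gen1(lr_batch)
    l1 = ((fake_hr - hr_batch) ** 2).mean()  # content loss vs. paired HR sample
    l2 = -disc1(fake_hr).mean()              # adversarial term (assumed form)
    g_loss = lam2 * l1 + lam3 * l2
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```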
The training process for the first model shown in fig. 4 is substantially the same as the training process for the second model shown in fig. 3. A specific calculation method of the first loss function value of the first generation model in step S403 of fig. 4 is discussed below:
for each first generated image, the mean square error of the first generated image and the corresponding first sample image (the first generated image and the corresponding first sample image have substantially the same image content) is calculated, and the mean square error calculation formula is as follows:

$$\mathrm{MSE} = \frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q}\big(G_1(i,j) - S_1(i,j)\big)^2$$

where $G_1(i,j)$ is the pixel value of the pixel point at position $(i,j)$ in the first generated image, $S_1(i,j)$ is the pixel value of the pixel point at position $(i,j)$ in the first sample image, and the image size of both the first generated image and the first sample image is P×Q;

the first loss function value of the current first generation model is then determined according to the mean square error corresponding to each first generated image (the first loss function value may be an average of the mean square errors corresponding to the first generated images).
It will be readily apparent to those skilled in the art that the specific implementation of this step S403 is substantially the same as the calculation of the "second loss function value" in fig. 3.
The following discusses a specific calculation method of the second loss function value of the first generation model in step S404 of fig. 4:
based on a Hinge Loss function (Hinge Loss), determining a second loss function value of the current first generation model according to the discrimination results of the first discrimination model after the current training on each first generated image output by the current first generation model, where the Hinge Loss is specifically:

$$L_2' = \mathbb{E}\big[\max(0,\, 1 - d_1(x))\big] + \mathbb{E}\big[\max(0,\, 1 + d_1(G_1))\big]$$

where $d_1(G_1)$ is the discrimination result of the first discrimination model after the current training on a first generated image, and $d_1(x)$ is the discrimination result of the first discrimination model after the current training on a first sample image.
It will be readily apparent to those skilled in the art that the specific implementation of this step S404 is substantially the same as the calculation of the "first loss function value" in fig. 3.
In addition, the manner of calculating the loss function value of the first generation model in step S405 is also substantially the same as the manner of calculating the loss function value of the second generation model in fig. 3; the current loss function value of the first generation model may be determined in step S405 based on the following loss function calculation formula:

$$L = \lambda_2 \times L_1' + \lambda_3 \times L_2'$$

where $\lambda_2$ and $\lambda_3$ are hyper-parameters greater than 0, $L_1'$ is the first loss function value described in fig. 4, and $L_2'$ is the second loss function value described in fig. 4; generally, $\lambda_3 \times L_2' > \lambda_2 \times L_1'$.
It will be readily understood by those skilled in the art that the training procedure shown in fig. 4 is not the only possible one for the first model used for super-resolution reconstruction; the prior art also discloses specific implementations of step S203 other than that of fig. 4, and such other specific implementations of step S203 are also within the scope of the present application.
Currently, the training process for a model for super-resolution reconstruction (for convenience of the following description, this model is called X) is generally as follows: acquiring high-resolution sample images from the real world, and downsampling each high-resolution sample image to obtain the corresponding low-resolution sample image; then training the model X by using the high-resolution sample images and the low-resolution sample images to obtain a trained model X for super-resolution reconstruction. Since in this conventional training method the low-resolution sample images used to train the model X are obtained by downsampling, that is, they are generated from real-world high-resolution sample images and are not themselves from the real world, the trained model X obtained in this way recovers information from real-world low-resolution images with low accuracy. In the present application, by contrast, the low-resolution sample images used for training are generated by the trained second model, which simulates the characteristics of low-resolution images from the real world; compared with the conventional approach, the application can therefore, to a certain extent, solve the technical problem that existing image super-resolution reconstruction methods have low accuracy in information recovery.
It should be understood that, the sequence numbers of the steps in the above method embodiments do not mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present application.
Example two
A second embodiment of the present application provides an image processing apparatus; for convenience of description, only the portions related to the present application are shown. As shown in fig. 5, the image processing apparatus 500 includes:
the image to be processed acquisition module 501 is configured to acquire an image to be processed to be reconstructed in super resolution;
the super-resolution reconstruction module 502 is configured to perform super-resolution reconstruction on the image to be processed by using a first trained model to obtain a super-resolution reconstructed image, where the first model is a model for super-resolution reconstruction;
wherein the first model is trained by a first model training module, the first model training module including:
a first sample acquisition unit configured to acquire respective first sample images from the real world;
a low resolution sample generating unit, configured to input, for each first sample image, the first sample image to a trained second model, and obtain an output sample image output by the second model, where the output sample image corresponds to the first sample image, the second model is used to reduce resolution of an image input to the second model, and the image output by the second model is an image that simulates a real world;
The first model training unit is used for training the initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model.
Optionally, the image to be processed is an image including a human face;
accordingly, the image processing apparatus 500 further includes:
and the face recognition module is used for performing face recognition on the super-resolution reconstructed image and determining identity information of the face contained in the image to be processed.
Optionally, the first model is a first generation model in a first generative adversarial network (GAN) model, where the first GAN model includes a first generation model and a first discrimination model;
correspondingly, the first model training unit comprises:
the first discriminant model training subunit is used for inputting each output sample image into a first generation model in the current first GAN model to obtain a first generation image with improved resolution output by the current first generation model;
training a current first discrimination model based on each first generated image output by the current first generation model and each first sample image from the real world to obtain a first discrimination model after the current training, wherein the first discrimination model is used for judging whether the image input to the first discrimination model is an image from the real world;
The first generation model training subunit is used for determining a first loss function value of the current first generation model according to each first generation image and each first sample image, wherein the first loss function value is used for describing the loss degree of the current first generation model on the image content in the super-resolution reconstruction process, and the first loss function value is positively correlated with the loss degree of the image content;
determining a second loss function value of the current first generation model according to the discrimination results of the first discrimination model after the current training on each first generation image output by the current first generation model, wherein the second loss function value is used for describing the simulation similarity degree of the current first generation model on the image from the real world, and the second loss function value is inversely related to the simulation similarity degree of the current first generation model on the image from the real world;
determining a current loss function value of the first generation model based on the first loss function value of the current first generation model and the second loss function value of the current first generation model;
continuously adjusting each parameter of the current first generation model until the loss function value of the current first generation model is smaller than a first preset loss value to obtain a first generation model after the training;
the circulation subunit is used for triggering the first discriminant model training subunit to continue to run after the first generation model after the current training is obtained, until the number of returns reaches a first preset number of times;
and the first model determining subunit is used for taking the current first generation model as the trained first model after the first preset number of times is reached.
Optionally, the second model is a second generation model in a second GAN model, where the second GAN model includes a second generation model and a second discrimination model;
accordingly, training the second model by using a second model training module, the second model training module comprising:
a second discriminant model training unit for acquiring respective second sample images from the real world, the resolution of any one of the first sample images being greater than the resolution of any one of the second sample images;
for each first sample image, inputting the first sample image into a second generation model in the current second GAN model to obtain a second generation image with reduced resolution output by the current second generation model;
training a current second discrimination model based on each second generated image output by the current second generated model and each second sample image from the real world to obtain a second discrimination model after the current training, wherein the second discrimination model is used for judging whether the image input to the second discrimination model is the image from the real world;
The second generation model training unit is used for determining a first loss function value of the current second generation model according to the discrimination result of the second discrimination model after the current training on each second generation image output by the current second generation model, wherein the first loss function value is used for describing the simulation similarity degree of the current second generation model on the image from the real world, and the first loss function value is inversely related to the simulation similarity degree of the current second generation model on the image from the real world;
determining a current loss function value of the second generation model based on the first loss function value of the current second generation model;
continuously adjusting each parameter of the current second generation model until the loss function value of the current second generation model is smaller than a second preset loss value to obtain a second generation model after the training;
the circulating unit is used for triggering the second discriminant model training unit to continue to run after the second generation model after the current training is obtained, until the number of returns reaches a second preset number of times;
and the second model determining unit is used for taking the current second generation model as the trained second model after the second preset number of times is reached.
Optionally, the second generation model training unit is further configured to determine a second loss function value of the current second generation model according to each second generation image and the second sample image, where the second loss function value is used to describe a learning degree of the current second generation model on the second sample image from the real world, and the second loss function value is inversely related to the learning degree of the current second generation model on the second sample image from the real world;
accordingly, the determining a current loss function value of the second generation model based on the first loss function value includes:
and determining a current loss function value of a second generation model based on the first loss function value and the second loss function value.
Optionally, the determining the first loss function value of the current second generation model according to the discrimination result of the second discrimination model after the current training on each second generation image output by the current second generation model includes:
based on a Hinge Loss function (Hinge Loss), determining a first loss function value of the current second generation model according to the discrimination results of the second discrimination model after the current training on each second generated image output by the current second generation model, where the Hinge Loss is specifically:

$$L_1 = \mathbb{E}\big[\max(0,\, 1 - d_2(x))\big] + \mathbb{E}\big[\max(0,\, 1 + d_2(G_2))\big]$$

where $d_2(G_2)$ is the discrimination result of the second discrimination model after the current training on a second generated image, and $d_2(x)$ is the discrimination result of the second discrimination model after the current training on a second sample image;
the determining a second loss function value of the current second generation model according to each second generation image and the second sample image includes:
for each second generated image, calculating the mean square error of the second generated image and one second sample image, where the mean square error calculation formula is:

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(G_2(i,j) - S_2(i,j)\big)^2$$

where $G_2(i,j)$ is the pixel value of the pixel point at position $(i,j)$ in the second generated image, $S_2(i,j)$ is the pixel value of the pixel point at position $(i,j)$ in the second sample image, and the image size of both the second generated image and the second sample image is M×N;
determining a second loss function value of the current second generation model according to the mean square error corresponding to each second generation image;
the determining a current loss function value of the second generation model based on the first loss function value and the second loss function value includes:
determining a current loss function value of the second generation model based on the first loss function value and the second loss function value according to a loss function calculation formula, wherein the loss function calculation formula is:

$$L = \lambda_0 \times L_1 + \lambda_1 \times L_2$$

where $\lambda_0$ and $\lambda_1$ are hyper-parameters, $L_1$ is the first loss function value, and $L_2$ is the second loss function value.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
Example three
Fig. 6 is a schematic diagram of a terminal device according to a third embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60. The steps of the various method embodiments described above, such as steps S101 through S102 shown in fig. 1, are implemented when the processor 60 executes the computer program 62. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules/units of the apparatus embodiments, such as the functions of the modules 501 to 502 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into an image to be processed acquisition module and a super-resolution reconstruction module, whose specific functions are as follows:
Acquiring an image to be processed to be reconstructed in super resolution;
performing super-resolution reconstruction on the image to be processed by using a trained first model to obtain a super-resolution reconstructed image, wherein the first model is a model for super-resolution reconstruction;
the training process of the first model comprises the following steps:
acquiring respective first sample images from the real world;
for each first sample image, inputting the first sample image into a trained second model to obtain an output sample image which is output by the second model and corresponds to the first sample image, wherein the second model is used for reducing the resolution of the image input into the second model, and the image output by the second model is an image simulating an image from the real world;
and training the initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model.
The terminal device 6 may include, but is not limited to, a processor 60, a memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the terminal device 6 and is not limiting of the terminal device 6, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the terminal device may further include an input-output device, a network access device, a bus, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, for example, a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the terminal device 6. Further, the memory 61 may include both the internal storage unit and the external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units described above is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of each method embodiment may be implemented. The computer program comprises computer program code, and the computer program code can be in a source code form, an object code form, an executable file or some intermediate form and the like. The computer readable medium may include: any entity or device capable of carrying the computer program code described above, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium described above can be appropriately increased or decreased according to the requirements of the jurisdiction's legislation and the patent practice, for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals according to the legislation and the patent practice.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. An image processing method, comprising:
acquiring an image to be processed to be reconstructed in super resolution;
performing super-resolution reconstruction on the image to be processed by using a trained first model to obtain a super-resolution reconstructed image, wherein the first model is a model for super-resolution reconstruction;
wherein the training process of the first model comprises:
acquiring respective first sample images from the real world;
for each first sample image, inputting the first sample image into a trained second model to obtain an output sample image which is output by the second model and corresponds to the first sample image, wherein the second model is used for reducing the resolution of the image input into the second model, and the image output by the second model is an image simulating an image from the real world;
training an initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model;
the second model is a second generation model in a second GAN model, and the second GAN model comprises a second generation model and a second discrimination model;
accordingly, the training process of the second model is as follows:
acquiring respective second sample images from the real world, the resolution of any one of the first sample images being greater than the resolution of any one of the second sample images;
for each first sample image, inputting the first sample image into a second generation model in the current second GAN model to obtain a second generation image with reduced resolution output by the current second generation model;
training a current second discrimination model based on each second generated image output by the current second generated model and each second sample image from the real world to obtain a second discrimination model after the current training, wherein the second discrimination model is used for judging whether the image input to the second discrimination model is the image from the real world;
determining a first loss function value of the current second generation model according to the discrimination results of the second discrimination model after the current training on each second generated image output by the current second generation model, wherein the first loss function value is used for describing the simulation similarity degree of the current second generation model on the image from the real world, and the first loss function value is inversely related to the simulation similarity degree of the current second generation model on the image from the real world;
determining a loss function value of the current second generation model based on the first loss function value of the current second generation model;
continuously adjusting each parameter of the current second generation model until the loss function value of the current second generation model is smaller than a second preset loss value to obtain a second generation model after the training;
after the second generation model after the training is obtained, returning, for each first sample image, to the step of inputting the first sample image into the second generation model in the current second GAN model to obtain a second generated image with reduced resolution output by the current second generation model, and to the subsequent steps, until the number of returns reaches a second preset number of times;
and after the second preset number of times is reached, taking the current second generation model as the trained second model.
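As an illustration of the two-stage pipeline recited in claim 1, the following is a minimal sketch assuming PyTorch; the architectures and the names DownscaleGenerator, SRGenerator and build_training_pairs are assumptions for illustration, not the patented implementation.

```python
# A minimal sketch of claim 1's two-stage pipeline, assuming PyTorch.
# DownscaleGenerator, SRGenerator and build_training_pairs are
# illustrative names, not identifiers from the patent.
import torch
import torch.nn as nn

class DownscaleGenerator(nn.Module):
    """Hypothetical 'second model': reduces the resolution of its input so
    the result resembles a real-world low-resolution capture."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, stride=scale, padding=1),  # resolution drop
        )

    def forward(self, x):
        return self.net(x)

class SRGenerator(nn.Module):
    """Hypothetical 'first model': super-resolution via sub-pixel upscaling."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into resolution
        )

    def forward(self, x):
        return self.net(x)

def build_training_pairs(second_model, first_samples):
    """Degrade each real high-resolution first sample image with the trained
    second model, yielding the (low-res, high-res) pairs on which the
    initial first model is trained."""
    second_model.eval()
    with torch.no_grad():
        return [(second_model(hr.unsqueeze(0)).squeeze(0), hr)
                for hr in first_samples]
```

The design point this makes concrete is that the low-resolution training inputs come from a learned degradation model rather than from a fixed operation such as bicubic downsampling, so the first model is trained on inputs that resemble real-world captures.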
2. The image processing method according to claim 1, wherein the image to be processed is an image containing a human face;
accordingly, after the step of obtaining the super-resolution reconstructed image, the image processing method further includes:
carrying out face recognition on the super-resolution reconstructed image, and determining identity information of the face contained in the image to be processed.
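As an illustration of claim 2's chain of super-resolution followed by recognition, the sketch below assumes PyTorch; face_recognizer stands in as a hypothetical identity model, with no specific library implied.

```python
import torch

def recognize_from_low_res(first_model, face_recognizer, low_res_face):
    """Claim 2's chain: super-resolve a low-resolution face image, then
    determine identity information from the reconstructed image."""
    first_model.eval()
    with torch.no_grad():
        sr_face = first_model(low_res_face.unsqueeze(0))  # super-resolution
        identity = face_recognizer(sr_face)  # hypothetical recognition model
    return identity
```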
3. The image processing method of claim 1, wherein the first model is a first generation model in a first generative adversarial network (GAN) model, the first GAN model comprising a first generation model and a first discrimination model;
correspondingly, the training the initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model includes:
for each output sample image, inputting the output sample image into a first generation model in the current first GAN model to obtain a first generation image with improved resolution output by the current first generation model;
training a current first discrimination model based on each first generated image output by the current first generation model and each first sample image from the real world to obtain a first discrimination model after the current training, wherein the first discrimination model is used for judging whether the image input to the first discrimination model is an image from the real world;
determining a first loss function value of the current first generation model according to each first generation image and each first sample image, wherein the first loss function value is used for describing the loss degree of the current first generation model on the image content in the super-resolution reconstruction process, and the first loss function value is positively related to the loss degree of the current first generation model on the image content;
determining a second loss function value of the current first generation model according to the discrimination results of the first discrimination model after the current training on each first generation image output by the current first generation model, wherein the second loss function value is used for describing the simulation similarity degree of the current first generation model on the image from the real world, and the second loss function value is inversely related to the simulation similarity degree of the current first generation model on the image from the real world;
determining a loss function value of the current first generation model based on the first loss function value of the current first generation model and the second loss function value of the current first generation model;
continuously adjusting each parameter of the current first generation model until the loss function value of the current first generation model is smaller than a first preset loss value to obtain a first generation model after the training;
after the first generation model after the training is obtained, returning, for each output sample image, to the step of inputting the output sample image into the first generation model in the current first GAN model to obtain a first generated image with improved resolution output by the current first generation model, and to the subsequent steps, until the number of returns reaches a first preset number of times;
and after the first preset number of times is reached, taking the current first generation model as the trained first model.
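One possible reading of claim 3's alternating loop is sketched below, assuming PyTorch and equally sized image batches; the MSE content loss, the 1e-3 adversarial weight, the 0.05 threshold and the iteration caps are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn.functional as F

def train_first_gan(gen, disc, lr_batch, hr_batch, g_opt, d_opt,
                    preset_loss=0.05, preset_rounds=100, max_steps=1000):
    """Sketch of claim 3: alternately train the first discrimination model
    and adjust the first generation model until its combined loss falls
    below a preset value, repeated for a preset number of rounds."""
    for _ in range(preset_rounds):  # the "first preset number of times"
        # Step 1: train the discriminator to separate real first sample
        # images from the generator's current super-resolved outputs.
        sr_batch = gen(lr_batch)
        d_opt.zero_grad()
        real_logits = disc(hr_batch)
        fake_logits = disc(sr_batch.detach())
        d_loss = (F.binary_cross_entropy_with_logits(
                      real_logits, torch.ones_like(real_logits))
                  + F.binary_cross_entropy_with_logits(
                      fake_logits, torch.zeros_like(fake_logits)))
        d_loss.backward()
        d_opt.step()

        # Step 2: adjust the generator until the combined content +
        # adversarial loss drops below the preset loss value
        # (capped at max_steps for safety).
        for _ in range(max_steps):
            g_opt.zero_grad()
            sr_batch = gen(lr_batch)
            content_loss = F.mse_loss(sr_batch, hr_batch)  # "first loss"
            adv_loss = -disc(sr_batch).mean()              # "second loss"
            g_loss = content_loss + 1e-3 * adv_loss
            g_loss.backward()
            g_opt.step()
            if g_loss.item() < preset_loss:
                break
    return gen
```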
4. The image processing method according to claim 1, wherein after the step of obtaining the second discrimination model after the current training and before the step of determining the loss function value of the current second generation model based on the first loss function value, the training process of the second model further includes:
determining a second loss function value of the current second generation model according to each second generated image and the second sample image, wherein the second loss function value is used for describing the learning degree of the current second generation model on the second sample image from the real world, and the second loss function value is inversely related to the learning degree of the current second generation model on the second sample image from the real world;
accordingly, the determining a current loss function value of the second generation model based on the first loss function value includes:
determining the loss function value of the current second generation model based on the first loss function value and the second loss function value.
5. The method of claim 4, wherein determining the first loss function value of the current second generation model according to the discrimination result of the second discrimination model after the current training for each second generation image output by the current second generation model, comprises:
based on a hinge loss function (Hinge Loss), determining the first loss function value of the current second generation model according to the discrimination results of the second discrimination model after the current training on each second generated image output by the current second generation model, wherein the hinge loss is specifically:

L1 = max(0, 1 − D(x)) + max(0, 1 + D(G(y)))

wherein D(G(y)) is the discrimination result of the second discrimination model after the current training on the second generated image, and D(x) is the discrimination result of the second discrimination model after the current training on the second sample image;
the determining a second loss function value of the current second generation model according to each second generation image and the second sample image includes:
for each second generated image, calculating the mean square error of the second generated image and a second sample image, wherein the mean square error calculation formula is:

MSE = (1 / (M × N)) × Σ(i=1..M) Σ(j=1..N) (X(i, j) − Y(i, j))²

wherein X(i, j) is the pixel value of the pixel point at position (i, j) in the second generated image, Y(i, j) is the pixel value of the pixel point at position (i, j) in the second sample image, and the image sizes of the second generated image and the second sample image are both M × N;
determining a second loss function value of the current second generation model according to the mean square error corresponding to each second generation image;
the determining a current loss function value of the second generation model based on the first loss function value and the second loss function value includes:
determining the loss function value of the current second generation model based on the first loss function value and the second loss function value according to a loss function calculation formula, wherein the loss function calculation formula is:

L = λ0 × L1 + λ1 × L2

wherein λ0 and λ1 are hyperparameters, L1 is the first loss function value, and L2 is the second loss function value.
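The loss terms of claims 4 and 5 can be combined in a few lines; the sketch below assumes PyTorch and a discriminator that outputs raw logits, and the names d_fake, d_real, lambda0 and lambda1 are chosen for illustration.

```python
import torch.nn.functional as F

def second_model_loss(d_fake, d_real, gen_img, sample_img,
                      lambda0=1.0, lambda1=1.0):
    """Combined loss L = lambda0 * L1 + lambda1 * L2 from claim 5."""
    # L1: hinge loss over the trained discriminator's outputs on the
    # second generated images (d_fake) and second sample images (d_real).
    l1 = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
    # L2: mean square error between the second generated image and a
    # second sample image of the same M x N size.
    l2 = F.mse_loss(gen_img, sample_img)
    return lambda0 * l1 + lambda1 * l2
```

In use, something like `loss = second_model_loss(disc(fake), disc(real), fake, real)` followed by `loss.backward()` would fold both terms into a single optimizer step for the second generation model.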
6. An image processing apparatus, comprising:
the image acquisition module to be processed is used for acquiring an image to be processed to be reconstructed in super resolution;
the super-resolution reconstruction module is used for performing super-resolution reconstruction on the image to be processed by using the trained first model to obtain a super-resolution reconstructed image, wherein the first model is a model for super-resolution reconstruction;
wherein the first model is trained by a first model training module, the first model training module comprising:
a first sample acquisition unit configured to acquire respective first sample images from the real world;
a low resolution sample generating unit, configured to input, for each first sample image, the first sample image to a trained second model, and obtain an output sample image output by the second model, where the output sample image corresponds to the first sample image, the second model is used to reduce the resolution of an image input to the second model, and the image output by the second model is an image that simulates the real world;
the first model training unit is used for training an initial first model based on each first sample image and each output sample image output by the trained second model to obtain a trained first model;
the second model is a second generation model in a second GAN model, and the second GAN model comprises a second generation model and a second discrimination model;
accordingly, the training process of the second model is as follows:
acquiring respective second sample images from the real world, the resolution of any one of the first sample images being greater than the resolution of any one of the second sample images;
for each first sample image, inputting the first sample image into a second generation model in the current second GAN model to obtain a second generation image with reduced resolution output by the current second generation model;
training a current second discrimination model based on each second generated image output by the current second generated model and each second sample image from the real world to obtain a second discrimination model after the current training, wherein the second discrimination model is used for judging whether the image input to the second discrimination model is the image from the real world;
determining a first loss function value of the current second generation model according to the discrimination results of the second discrimination model after the current training on each second generated image output by the current second generation model, wherein the first loss function value is used for describing the simulation similarity degree of the current second generation model on the image from the real world, and the first loss function value is inversely related to the simulation similarity degree of the current second generation model on the image from the real world;
determining a loss function value of the current second generation model based on the first loss function value of the current second generation model;
continuously adjusting each parameter of the current second generation model until the loss function value of the current second generation model is smaller than a second preset loss value to obtain a second generation model after the training;
after the second generation model after the training is obtained, returning, for each first sample image, to the step of inputting the first sample image into the second generation model in the current second GAN model to obtain a second generated image with reduced resolution output by the current second generation model, and to the subsequent steps, until the number of returns reaches a second preset number of times;
and after the second preset number of times is reached, taking the current second generation model as the trained second model.
7. The image processing apparatus according to claim 6, wherein the image to be processed is an image containing a human face;
accordingly, the image processing apparatus further includes:
the face recognition module is used for carrying out face recognition on the super-resolution reconstructed image and determining identity information of the face contained in the image to be processed.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 5.
CN201910260259.2A 2019-04-02 2019-04-02 Image processing method, image processing device and terminal equipment Active CN111489289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910260259.2A CN111489289B (en) 2019-04-02 2019-04-02 Image processing method, image processing device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910260259.2A CN111489289B (en) 2019-04-02 2019-04-02 Image processing method, image processing device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111489289A CN111489289A (en) 2020-08-04
CN111489289B true CN111489289B (en) 2023-09-12

Family

ID=71794299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910260259.2A Active CN111489289B (en) 2019-04-02 2019-04-02 Image processing method, image processing device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111489289B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781310A (en) * 2021-09-17 2021-12-10 北京金山云网络技术有限公司 Image processing method, and training method and device of image processing model
CN116188276A (en) * 2023-05-04 2023-05-30 深圳赛陆医疗科技有限公司 Image processing method, image processing apparatus, and storage medium for gene samples

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN107633218A (en) * 2017-09-08 2018-01-26 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN108491809A (en) * 2018-03-28 2018-09-04 百度在线网络技术(北京)有限公司 The method and apparatus for generating model for generating near-infrared image
CN108898549A (en) * 2018-05-29 2018-11-27 Oppo广东移动通信有限公司 Image processing method, picture processing unit and terminal device
CN109509152A (en) * 2018-12-29 2019-03-22 大连海事大学 A kind of image super-resolution rebuilding method of the generation confrontation network based on Fusion Features

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491771A (en) * 2017-09-21 2017-12-19 百度在线网络技术(北京)有限公司 Method for detecting human face and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN107633218A (en) * 2017-09-08 2018-01-26 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN108491809A (en) * 2018-03-28 2018-09-04 百度在线网络技术(北京)有限公司 The method and apparatus for generating model for generating near-infrared image
CN108898549A (en) * 2018-05-29 2018-11-27 Oppo广东移动通信有限公司 Image processing method, picture processing unit and terminal device
CN109509152A (en) * 2018-12-29 2019-03-22 大连海事大学 A kind of image super-resolution rebuilding method of the generation confrontation network based on Fusion Features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image Super-Resolution Reconstruction Algorithm Based on Deep Learning; Wu Keyong; China Masters' Theses Full-text Database (Information Science and Technology); 20190215; Chapters 2-4 *

Also Published As

Publication number Publication date
CN111489289A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN108805047B (en) Living body detection method and device, electronic equipment and computer readable medium
CN110675336A (en) Low-illumination image enhancement method and device
CN112102204B (en) Image enhancement method and device and electronic equipment
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN108805265B (en) Neural network model processing method and device, image processing method and mobile terminal
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN111783146B (en) Image processing method and device based on privacy protection and electronic equipment
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
WO2021082819A1 (en) Image generation method and apparatus, and electronic device
CN104902143B (en) A kind of image de-noising method and device based on resolution ratio
CN113658065B (en) Image noise reduction method and device, computer readable medium and electronic equipment
CN108447040A (en) histogram equalization method, device and terminal device
CN110910326B (en) Image processing method and device, processor, electronic equipment and storage medium
CN110766153A (en) Neural network model training method and device and terminal equipment
CN111489289B (en) Image processing method, image processing device and terminal equipment
CN105979283A (en) Video transcoding method and device
CN110619334A (en) Portrait segmentation method based on deep learning, architecture and related device
CN112241934B (en) Image processing method and related equipment
CN110717864A (en) Image enhancement method and device, terminal equipment and computer readable medium
CN112633218B (en) Face detection method, face detection device, terminal equipment and computer readable storage medium
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN112561826A (en) Image deblurring method, device and equipment based on artificial intelligence and storage medium
CN111754412A (en) Method and device for constructing data pairs and terminal equipment
CN114119377B (en) Image processing method and device
CN115311152A (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211201

Address after: 241000 12th floor, advertising creative complex building, Wuhu advertising industrial park, middle Beijing Road, Jiujiang District, Wuhu City, Anhui Province

Applicant after: CHANGXIN INTELLIGENT CONTROL NETWORK TECHNOLOGY CO.,LTD.

Address before: 518000 room 1002, phase II, international student entrepreneurship building, No. 29, South Ring Road, gaoxinyuan, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: TONGGUAN TECHNOLOGY (SHENZHEN) CO.,LTD.

GR01 Patent grant