CN113344832A - Image processing method and device, electronic equipment and storage medium


Info

Publication number: CN113344832A
Application number: CN202110593707.8A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 徐青松, 李青
Current assignee: Hangzhou Ruisheng Software Co Ltd
Original assignee: Hangzhou Ruisheng Software Co Ltd
Prior art keywords: image, processing, processed, neural network, network model
Events: application filed by Hangzhou Ruisheng Software Co Ltd; priority to CN202110593707.8A; publication of CN113344832A; priority to PCT/CN2022/093586 (WO2022247702A1)

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06T 5/70: Denoising; smoothing
    • G06T 5/92: Dynamic range modification of images or parts thereof based on global image properties
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, an image processing apparatus, an electronic device, and a storage medium. The image processing method comprises the following steps: acquiring an image to be processed, wherein the image to be processed comprises a target area; performing first sharpening processing on the image to be processed through a first neural network model to obtain a first intermediate image corresponding to the image to be processed, wherein the sharpness of the first intermediate image is greater than that of the image to be processed; performing second sharpening processing on an intermediate target area corresponding to the target area in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target area; and synthesizing the first intermediate image and the second intermediate image to obtain a composite image corresponding to the image to be processed. The image processing method specially optimizes the target area and synthesizes the optimized target area with the first intermediate image, so that the definition of the composite image is improved and an image with high definition and richer details is obtained.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to an image processing method, an image processing apparatus, an electronic device, and a non-transitory computer-readable storage medium.
Background
Owing to the rapid development of hardware, mobile phones have gone through several generations in just a few years, and photos taken by older phones appear blurred on today's large-resolution screens. In addition, old photos were limited by the shooting technology of their time: their definition is low and their details are sparse. In such scenarios, the original image with lower definition needs to be subjected to sharpening processing to obtain an image with higher definition.
Disclosure of Invention
At least one embodiment of the present disclosure provides an image processing method, including: acquiring an image to be processed, wherein the image to be processed comprises a target area; performing first sharpening processing on the image to be processed through a first neural network model to obtain a first intermediate image corresponding to the image to be processed, wherein the sharpness of the first intermediate image is greater than that of the image to be processed; performing second sharpening processing on an intermediate target area corresponding to the target area in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target area; and synthesizing the first intermediate image and the second intermediate image to obtain a synthesized image corresponding to the image to be processed.
For example, in an image processing method provided by at least one embodiment of the present disclosure, before performing a second sharpening process on an intermediate target region corresponding to the target region in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target region, the image processing method further includes: and carrying out recognition processing on the first intermediate image through a third neural network model to obtain the intermediate target area corresponding to the target area in the first intermediate image.
For example, in an image processing method provided by at least one embodiment of the present disclosure, the definition of the second intermediate image is greater than the definition of the intermediate target region.
For example, in an image processing method provided by at least one embodiment of the present disclosure, synthesizing the first intermediate image and the second intermediate image to obtain a composite image corresponding to the image to be processed includes: performing tone processing on the second intermediate image based on the tone of the first intermediate image to obtain a third intermediate image, wherein the tone of the third intermediate image approaches the tone of the first intermediate image; and performing image merging processing on the first intermediate image and the third intermediate image to obtain the composite image.
For example, in an image processing method provided by at least one embodiment of the present disclosure, the target region is a human face region.
For example, in an image processing method provided in at least one embodiment of the present disclosure, the first neural network model is different from the second neural network model.
For example, in an image processing method provided by at least one embodiment of the present disclosure, before performing the first sharpening processing on the image to be processed by using the first neural network model, the image processing method further includes: acquiring a sample image; performing blurring processing on the sample image to obtain an image to be trained, wherein the definition of the image to be trained is lower than that of the sample image; and training a first neural network model to be trained and a second neural network model to be trained on the basis of the sample image and the image to be trained, so as to obtain the first neural network model and the second neural network model.
For example, in an image processing method provided by at least one embodiment of the present disclosure, blurring the sample image to obtain an image to be trained includes: obtaining a texture slice, wherein the size of the texture slice is the same as the size of the sample image; performing first blurring processing on the sample image to obtain a first blurred image, wherein the definition of the first blurred image is lower than that of the sample image; performing color mixing processing on the first blurred image and the texture slice to obtain a second blurred image; and performing second blurring processing on the second blurred image to obtain the image to be trained.
For example, in an image processing method provided by at least one embodiment of the present disclosure, acquiring a texture slice includes: acquiring at least one preset texture image; randomly selecting a preset texture image from the at least one preset texture image as a target texture image; in response to the size of the target texture image being the same as the size of the sample image, treating the target texture image as the texture slice; and in response to the size of the target texture image being larger than that of the sample image, randomly cutting the target texture image based on the size of the sample image to obtain a slice region with the same size as that of the sample image, and taking the slice region as the texture slice.
For example, in an image processing method provided by at least one embodiment of the present disclosure, the first blurring processing includes a Gaussian blur process, a noise addition process, or a combination of any number of Gaussian blur and noise addition processes performed in any order; the second blurring processing likewise includes a Gaussian blur process, a noise addition process, or such a combination.
For example, in an image processing method provided by at least one embodiment of the present disclosure, performing a first blurring process on the sample image to obtain a first blurred image includes: performing the Gaussian blur processing on the sample image to obtain the first blurred image; performing second blurring processing on the second blurred image to obtain the image to be trained, including: performing the noise addition processing on the second blurred image to obtain an intermediate blurred image; and performing the Gaussian blur processing on the intermediate blurred image to obtain the image to be trained.
For example, in an image processing method provided by at least one embodiment of the present disclosure, performing color mixing processing on the first blurred image and the texture slice to obtain a second blurred image includes: performing color filtering processing on the first blurred image and the texture slice to obtain the second blurred image.
For example, in an image processing method provided by at least one embodiment of the present disclosure, performing color mixing processing on the first blurred image and the texture slice to obtain a second blurred image includes: performing lighten processing on the texture slice and the first blurred image to obtain the second blurred image.
At least one embodiment of the present disclosure provides an image processing apparatus including: an image acquisition unit configured to acquire an image to be processed, wherein the image to be processed includes a target area; the first processing unit is configured to perform first sharpening processing on the image to be processed through a first neural network model to obtain a first intermediate image corresponding to the image to be processed, wherein the sharpness of the first intermediate image is greater than that of the image to be processed; the second processing unit is configured to perform second sharpening processing on an intermediate target region corresponding to the target region in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target region; a synthesizing unit configured to perform synthesizing processing on the first intermediate image and the second intermediate image to obtain a synthesized image corresponding to the image to be processed.
For example, in an image processing apparatus provided by at least one embodiment of the present disclosure, the synthesizing unit includes a tone processing module and an image merging processing module; the tone processing module is configured to perform tone processing on the second intermediate image based on the tone of the first intermediate image to obtain a third intermediate image whose tone approaches that of the first intermediate image; the image merging processing module is configured to perform image merging processing on the first intermediate image and the third intermediate image to obtain the composite image.
At least one embodiment of the present disclosure provides an electronic device, including: a memory non-transiently storing computer executable instructions; a processor configured to execute the computer-executable instructions, wherein the computer-executable instructions, when executed by the processor, implement the image processing method according to any embodiment of the present disclosure.
At least one embodiment of the present disclosure provides a non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement an image processing method according to any one of the embodiments of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
Fig. 1 is a schematic flow chart of an image processing method according to at least one embodiment of the present disclosure;
fig. 2 is a schematic diagram of an image to be processed according to at least one embodiment of the disclosure;
fig. 3 is a schematic diagram of a first intermediate image provided by at least one embodiment of the present disclosure;
FIG. 4A is a schematic illustration of an intermediate target area provided by at least one embodiment of the present disclosure;
fig. 4B is a schematic diagram of a second intermediate image provided by at least one embodiment of the present disclosure;
fig. 5A is a schematic diagram of a third intermediate image provided by at least one embodiment of the present disclosure;
fig. 5B is a schematic diagram of a composite image according to an embodiment of the disclosure;
FIG. 6A illustrates a schematic flow diagram of a blur process provided by at least one embodiment of the present disclosure;
fig. 6B is a schematic diagram of a texture slice according to at least one embodiment of the present disclosure;
FIG. 7A is a sample image provided by at least one embodiment of the present disclosure;
FIG. 7B is an image to be trained provided by at least one embodiment of the present disclosure;
fig. 8 is a schematic block diagram of an image processing apparatus according to at least one embodiment of the present disclosure;
fig. 9 is a schematic diagram of an electronic device according to at least one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a non-transitory computer-readable storage medium provided in at least one embodiment of the present disclosure;
fig. 11 is a schematic diagram of a hardware environment according to at least one embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises" and the like means that the element or item listed before the word covers the elements or items listed after the word and their equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, which may change accordingly when the absolute position of the object being described changes. To keep the following description of the embodiments of the present disclosure clear and concise, a detailed description of some known functions and components has been omitted from the present disclosure.
For old photos stored for years, or other images with insufficient definition, sharpening processing can make the details of the images vivid. The advent of deep learning has made semantic-level operations on images possible, so a convolutional neural network can be used to operate on an image at the semantic level and reproduce its details in high definition.
Unlike images of landscapes or objects, a face image has very rich details, such as facial texture features. Consequently, when sharpening an image that includes a face, the face image obtained by conventional approaches is often not sharp enough, its texture features are not clear enough, and image noise frequently appears.
At least one embodiment of the present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a non-transitory computer-readable storage medium, the image processing method including: acquiring an image to be processed, wherein the image to be processed comprises a target area; performing first sharpening processing on the image to be processed through a first neural network model to obtain a first intermediate image corresponding to the image to be processed, wherein the sharpness of the first intermediate image is greater than that of the image to be processed; performing second sharpening processing on an intermediate target area corresponding to the target area in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target area; and synthesizing the first intermediate image and the second intermediate image to obtain a composite image corresponding to the image to be processed.
According to this image processing method, after the first sharpening processing is performed on the image to be processed, the second sharpening processing is performed on the target area, so the target area is specially optimized, and the optimized target area is synthesized with the first intermediate image. The definition of the composite image is thereby improved, yielding an image with high definition and richer details.
The image processing method provided by the embodiments of the present disclosure can be applied to a mobile terminal (for example, a mobile phone or a tablet computer); it improves the definition of the composite image while improving the processing speed, and can also sharpen images acquired by the mobile terminal in real time.
It should be noted that the image processing method provided by the embodiment of the present disclosure is applicable to the image processing apparatus provided by the embodiment of the present disclosure, and the image processing apparatus may be configured on an electronic device. The electronic device may be a personal computer, a mobile terminal, and the like, and the mobile terminal may be a hardware device having various operating systems, such as a mobile phone and a tablet computer.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings, but the present disclosure is not limited to these specific embodiments.
Fig. 1 is a schematic flowchart of an image processing method according to at least one embodiment of the present disclosure. Fig. 2 is a schematic diagram of an image to be processed according to at least one embodiment of the present disclosure.
As shown in fig. 1, an image processing method according to at least one embodiment of the present disclosure includes steps S10 to S40.
In step S10, an image to be processed is acquired.
For example, the image to be processed includes a target region.
Step S20, performing a first sharpening process on the image to be processed through the first neural network model to obtain a first intermediate image corresponding to the image to be processed.
For example, the sharpness of the first intermediate image is greater than the sharpness of the image to be processed.
Step S30, performing second sharpening processing on the intermediate target area corresponding to the target area in the first intermediate image through the second neural network model to obtain a second intermediate image corresponding to the intermediate target area.
In step S40, the first intermediate image and the second intermediate image are subjected to synthesis processing to obtain a synthesized image corresponding to the image to be processed.
For example, the image to be processed acquired in step S10 may be any of various types of images, such as a landscape image, a person image, or an object image. A landscape image may include objects such as mountains, rivers, plants, animals, and the sky; a person image is an image including a person (e.g., a human face); an object image may include objects such as vehicles and houses. Of course, a person image may include regions corresponding to landscape objects or other objects in addition to the face region. For example, in some embodiments the image to be processed may be a person image, such as an identification photo; in other embodiments it may be a person image that also contains landscape objects or other objects.
For example, the shape of the image to be processed may be rectangular or the like. The shape, size and the like of the image to be processed can be set by the user according to the actual situation.
For example, the image to be processed may be a blurred image with low sharpness, such as an image captured by an image acquisition device (e.g., a digital camera or a mobile phone) that appears blurry on a large-resolution screen. The image to be processed may also be obtained by scanning or photographing, for example by scanning or photographing an old photo of considerable age. As another example, the image to be processed may be an image obtained by compressing a high-definition image to facilitate transmission.
The image to be processed can be a grayscale image or a color image. For example, to avoid the influence of data quality, data imbalance, and the like on the image processing, the image processing method provided by at least one embodiment of the present disclosure may further include an operation of preprocessing the image to be processed before it is processed. The preprocessing may include, for example, cropping, gamma correction, or noise-reduction filtering of the image to be processed. The preprocessing can eliminate irrelevant or noisy information in the image to be processed, so that the image can be better processed subsequently.
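As an illustrative sketch only (not part of the original disclosure), such preprocessing might look as follows in Python with OpenCV; the crop width, gamma value, and filter parameters are assumed demonstration values:

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """Hypothetical preprocessing: crop, gamma correction, noise-reduction filtering."""
    # Crop a thin border (assumed 2 pixels) to drop scan-edge artifacts.
    image = image[2:-2, 2:-2]
    # Gamma correction via a 256-entry lookup table applied to each channel.
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                     dtype=np.uint8)
    image = cv2.LUT(image, table)
    # Edge-preserving noise-reduction filtering.
    return cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
```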
For example, the target region may be a region including a target, and the target may be a human face, so that the target region may be a human face region. It should be noted that other objects may be selected as the target according to the image processing requirement, for example, an animal, a vehicle, or the like may be selected as the target, and in this case, the target area is an area including an animal (for example, a cat) or an area including a vehicle, which is not limited by the present disclosure.
For example, as shown in fig. 2, the image to be processed may be a person image including a human face, the target region being the face region of that image. The person image was obtained by scanning or photographing an old photo of considerable age; as can be seen from fig. 2, the image to be processed has low definition, missing image details, and image noise.
For example, in step S20, the first sharpening processing is performed on the image to be processed through the trained first neural network model to obtain a first intermediate image with higher sharpness; that is, the sharpness of the first intermediate image is greater than that of the image to be processed.
For example, the first neural network model may adopt a pix2pixHD (pixel-to-pixel HD) model. The pix2pixHD model performs the first sharpening processing on the image to be processed using a coarse-to-fine generator and a multi-scale discriminator, and generates a high-resolution, high-definition first intermediate image. The generator of the pix2pixHD model comprises a global generator network and a local enhancement network: the global generator network adopts a U-Net structure, the features it outputs are fused with the features extracted by the local enhancement network and serve as input to the rest of the local enhancement network, and the local enhancement network outputs the high-resolution, high-definition image based on the fused information.
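The following is a drastically simplified PyTorch sketch of the fusion idea just described; it is a toy stand-in, not the actual pix2pixHD architecture. A coarse branch processes a downsampled copy of the input, and its features are summed with the features extracted by the full-resolution local branch before upsampling:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalGenerator(nn.Module):
    """Coarse branch operating on a downsampled copy of the input (toy version)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class LocalEnhancer(nn.Module):
    """Full-resolution branch; its features are fused (summed) with the coarse
    branch's output before the final upsampling. Even height/width are assumed."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.coarse = GlobalGenerator(ch)
        self.down = nn.Sequential(
            nn.Conv2d(3, ch, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        half = F.avg_pool2d(x, 2)                 # half-resolution input for the coarse branch
        fused = self.down(x) + self.coarse(half)  # fuse local and global features
        return self.up(fused)                     # high-resolution output in [-1, 1]
```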
The training process for the first neural network model is described later, and is not described in detail here.
Fig. 3 is a schematic diagram of a first intermediate image obtained after performing the first sharpening processing on the image to be processed shown in fig. 2, according to at least one embodiment of the present disclosure. As shown in fig. 3, compared with the image to be processed shown in fig. 2, the sharpness of the first intermediate image is greatly improved. However, this sharpening is global: it cannot specifically optimize a target region such as a face region, for example it cannot supply high-definition details of the target region, and the resulting first intermediate image may still contain image noise such as mottled lines.
For example, other properties (e.g., size, etc.) of the first intermediate image and the image to be processed are all completely or substantially the same except for the difference in sharpness.
For example, in step S30, the intermediate target region is a region in the first intermediate image that corresponds to the target region. The size of the intermediate target area is the same as the size of the target area, and the relative position of the intermediate target area in the first intermediate image is completely or substantially the same as the relative position of the target area in the image to be processed.
For example, in step S30, the second sharpening process is performed on the intermediate target region corresponding to the target region obtained from the first intermediate image, so as to further enrich the image details of the intermediate target region based on the first sharpening process, improve the sharpness of the intermediate target region, eliminate the image noise existing in the intermediate target region, and obtain a second intermediate image with higher sharpness and richer image details.
For example, the sharpness of the second intermediate image is greater than the sharpness of the intermediate target region. For example, the second intermediate image has no image noise such as mottled lines and noise points, and the texture, lines and the like of the second intermediate image are clearer and richer than those of the intermediate target area.
For example, in some embodiments, an intermediate target region is extracted via a second neural network model and subjected to a second sharpening process to obtain a second intermediate image.
For example, in other embodiments, the position of the target region in the image to be processed is relatively fixed, for example, the image to be processed is a certificate photo, the target region is a human face region, and the human face region is generally located at the center position of the certificate photo, so that the intermediate target region in the first intermediate image can be extracted according to the position information of the target region in the image to be processed, and the second sharpening process is performed on the intermediate target region through the second neural network model to obtain the second intermediate image.
For example, in other embodiments, before step S30, an image processing method provided by at least one embodiment of the present disclosure may further include: performing recognition processing on the first intermediate image through a third neural network model to obtain an intermediate target area corresponding to the target area in the first intermediate image.
For example, the target region may be a face region, the third neural network model may be a face recognition model, and the third neural network model may be trained to recognize the face region in the first intermediate image to obtain an intermediate target region, that is, a region including a face portion in the first intermediate image. It should be noted that, when the target area is another object, for example, a vehicle, the third neural network model may be trained to recognize the object (i.e., the vehicle) in the image to be recognized, so that the first intermediate image may be subjected to a recognition process by the third neural network model to obtain an intermediate target area including the object (i.e., the vehicle), which is not limited by the present disclosure.
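The disclosure does not specify the third neural network model's architecture. Purely for illustration, the sketch below substitutes OpenCV's classical Haar-cascade face detector for the trained recognition model to cut the intermediate target region out of the first intermediate image; the function name and return convention are assumptions:

```python
import cv2
import numpy as np

# OpenCV's bundled frontal-face Haar cascade, standing in for the patent's
# trained third neural network model.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_intermediate_target(first_intermediate: np.ndarray):
    """Return the face crop and its bounding box, or None if no face is found."""
    gray = cv2.cvtColor(first_intermediate, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                        # take the first detected face
    return first_intermediate[y:y + h, x:x + w], (x, y, w, h)
```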
For example, in other embodiments, the intermediate target region may also be extracted by means of manual extraction or the like, and the second intermediate image is obtained by performing a second sharpening process on the intermediate target region through the second neural network model, which is not limited in this disclosure.
For example, the first neural network model and the second neural network model may be the same, or they may be different. For example, the second neural network model may be a SPADE (Spatially-Adaptive Normalization) model, which addresses the problem that information in the input semantic image is easily lost in conventional normalization layers.
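For reference, a minimal SPADE normalization block can be sketched as follows; this is a generic rendition of the published SPADE design, not code from this disclosure. The per-pixel modulation parameters are predicted from the semantic map, so spatial layout information survives the (parameter-free) normalization:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Minimal SPADE block: the semantic map predicts per-pixel gamma/beta."""
    def __init__(self, feat_ch: int, seg_ch: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_ch, affine=False)  # parameter-free normalization
        self.shared = nn.Sequential(
            nn.Conv2d(seg_ch, hidden, 3, padding=1), nn.ReLU(inplace=True))
        self.gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)

    def forward(self, x: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        # Resize the semantic map to the feature resolution, then modulate
        # the normalized activations per pixel.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```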
The training process for the second neural network model is described later, and is not described in detail here.
For example, one or more of the first, second, and third neural network models may be convolutional neural network models.
Fig. 4A is a schematic diagram of an intermediate target area provided in at least one embodiment of the present disclosure, and fig. 4B is a schematic diagram of a second intermediate image provided in at least one embodiment of the present disclosure.
For example, the first intermediate image shown in fig. 3 is subjected to recognition processing by a third neural network model to obtain an intermediate target region shown in fig. 4A; next, the intermediate target region shown in fig. 4A is subjected to a second sharpening process by the second neural network model to obtain a second intermediate image shown in fig. 4B. As shown in fig. 4A and 4B, the texture features of the second intermediate image after the second sharpening process are richer and the sharpness is higher, and the black lines from the nose to the mouth of the face in the intermediate target region are removed. For example, as shown in fig. 4B, in the second intermediate image, details such as wrinkles originally existing on the face are reflected, so that the face more conforms to the features of the real face.
For example, the tone of the first intermediate image obtained by the first sharpening processing and the tone of the second intermediate image obtained by the second sharpening processing may not be uniform. If the first intermediate image and the second intermediate image were synthesized directly, the resulting composite image could contain multiple tones. Therefore, tone processing is performed first so that the tones of the two images become consistent, and the image merging processing then yields a composite image with a uniform tone.
For example, step S40 may include: performing tone processing on the second intermediate image based on the tone of the first intermediate image to obtain a third intermediate image, where the tone of the third intermediate image approaches that of the first intermediate image; and performing image merging processing on the first intermediate image and the third intermediate image to obtain a composite image.
For example, any algorithm or tool that can implement the tone adjustment may be used to perform the tone processing on the second intermediate image based on the tone of the first intermediate image, which is not limited by the present disclosure.
It should be noted that the above description adjusts the tone of the second intermediate image to match that of the first intermediate image, but the present disclosure is not limited thereto; it suffices that the tones of the two images are made consistent. For example, in other embodiments step S40 may include: performing tone processing on the first intermediate image based on the tone of the second intermediate image to obtain a fourth intermediate image whose tone approaches that of the second intermediate image; and performing image merging processing on the second intermediate image and the fourth intermediate image to obtain the composite image.
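Since the disclosure leaves the tone-adjustment algorithm open, the sketch below illustrates one common choice, matching the mean and standard deviation of each LAB channel of the second intermediate image to those of the first intermediate image; all names are illustrative:

```python
import cv2
import numpy as np

def match_tone(second_intermediate: np.ndarray,
               first_intermediate: np.ndarray) -> np.ndarray:
    """Shift the mean/std of each LAB channel of the face crop toward those of
    the whole image, so the crop's tone approaches the first intermediate image."""
    src = cv2.cvtColor(second_intermediate, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(first_intermediate, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    src = np.clip(src, 0, 255).astype(np.uint8)
    return cv2.cvtColor(src, cv2.COLOR_LAB2BGR)   # the third intermediate image
```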
For example, in some embodiments, all pixels in the composite image are arranged in n rows and m columns, and performing image merging processing on the first intermediate image and the third intermediate image to obtain the composite image in step S40 may include, for the pixel in row t1 and column t2 of the first intermediate image: in response to that pixel not being located in the intermediate target region, taking its pixel value as the pixel value of the pixel in row t1 and column t2 of the composite image; and in response to that pixel being located in the intermediate target region, taking the pixel value of the corresponding pixel in the third intermediate image as the pixel value of the pixel in row t1 and column t2 of the composite image. Here n, m, t1, and t2 are positive integers, t1 ≤ n, and t2 ≤ m. A sketch of this merge is given after the notes below.
For example, when the second intermediate image and the fourth intermediate image are subjected to image merging processing to obtain a composite image, the image merging processing procedure is the same as the above-described procedure, and is not described herein again.
It should be noted that the image merging process may also adopt other merging manners, and the disclosure does not limit this.
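As referenced above, a minimal sketch of the pixel-wise merge, assuming the intermediate target region is a rectangular bounding box (the `box` parameter is an illustrative assumption):

```python
import numpy as np

def merge_images(first_intermediate: np.ndarray, third_intermediate: np.ndarray,
                 box: tuple) -> np.ndarray:
    """Pixels inside the intermediate target region come from the third
    intermediate image; all others come from the first intermediate image.
    `box` is an assumed (x, y, w, h) bounding box of the target region."""
    x, y, w, h = box
    composite = first_intermediate.copy()
    composite[y:y + h, x:x + w] = third_intermediate  # paste the processed face crop
    return composite
```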
For example, the composite image may be a color image, e.g., the pixel values of the pixels in the color image may include a set of RGB pixel values, or the composite image may be a monochrome image, e.g., the pixel values of the pixels of the monochrome image may be the pixel values of one color channel.
Fig. 5A is a schematic diagram of a third intermediate image according to at least one embodiment of the present disclosure, and fig. 5B is a schematic diagram of a composite image according to an embodiment of the present disclosure, for example, fig. 5B is a composite image obtained by performing an image processing method according to at least one embodiment of the present disclosure on the image to be processed shown in fig. 2.
As shown in fig. 5A, the tone of the third intermediate image after the tone processing is consistent with the tone of the first intermediate image shown in fig. 3.
As shown in fig. 5B, the image details of the synthesized image are richer, the definition is higher, and only one color tone exists in the synthesized image, compared to the image to be processed.
For example, before performing the first sharpening processing on the image to be processed by the first neural network model, an image processing method provided by at least one embodiment of the present disclosure further includes: acquiring a sample image; performing blurring processing on the sample image to obtain an image to be trained, where the definition of the image to be trained is lower than that of the sample image; and training the first neural network model to be trained and the second neural network model to be trained on the basis of the sample image and the image to be trained, so as to obtain the first neural network model and the second neural network model.
For example, the sample image may be an image with a definition greater than a definition threshold, and the definition threshold may be set by a user according to actual conditions. For example, the sample image includes a sample target region, e.g., a face region. For example, when a first neural network model to be trained and a second neural network model to be trained are trained, an image to be trained may be used as an input of the neural network model, a sample image may be used as a target output of the neural network model, and the first neural network model to be trained and the second neural network model to be trained are trained.
For example, the training process of the neural network model may include: processing an image to be trained by using a neural network model to be trained to obtain a training output image; calculating a loss value of the neural network model to be trained through a loss function corresponding to the neural network model to be trained based on the training output image and the sample image; correcting parameters of the neural network model to be trained based on the loss value; and when the loss function corresponding to the neural network model to be trained does not meet the preset condition, continuously inputting the image to be trained to repeatedly execute the training process. Here, the neural network model to be trained may be the first neural network model to be trained or the second neural network model to be trained described above.
For example, in one example, the predetermined condition is that the loss function of the neural network model to be trained converges to a minimum over a number of input images to be trained. In another example, the predetermined condition is that the number of training iterations or training epochs reaches a predetermined number; that number may be in the millions, provided the number of images to be trained is sufficiently large.
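The training procedure above can be sketched as a standard supervised loop. Note that the patent's models are generative (pix2pixHD / SPADE) and would normally be trained adversarially; the plain L1 reconstruction loss below is only a stand-in for the unspecified loss functions:

```python
import torch
import torch.nn as nn

def train_model(model: nn.Module, loader, epochs: int = 10, lr: float = 2e-4):
    """Supervised loop: the blurred image is the input, the clean sample image
    is the target output. L1 loss is an assumed stand-in."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):                      # training periods (epochs)
        for blurred, sample in loader:           # (image to be trained, sample image)
            output = model(blurred)              # training output image
            loss = loss_fn(output, sample)       # loss value of the model
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                     # correct the model parameters
```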
For example, the first neural network model and the second neural network model may be separately trained using the above-mentioned training process, in this case, the sample image corresponding to the second neural network model needs to include the sample target region, and the sample image corresponding to the first neural network model may not include the sample target region.
For example, the first neural network model and the second neural network model may be trained simultaneously based on the same sample image and the image to be trained, where the sample image needs to include the sample target region. For example, at this time, the first neural network model and the second neural network model adopt different structures, for example, the first neural network model is a pix2pixHD model, the second neural network model is a SPADE model, the first neural network model is trained based on the whole of the sample image, and the second neural network model is trained based on only the sample target region in the sample image, so that the first neural network model can perform first sharpening on the whole of the image to be processed, and the second neural network model can perform second sharpening on the target region.
Fig. 6A illustrates a schematic flow diagram of a blur process provided by at least one embodiment of the present disclosure. As shown in fig. 6A, the blurring process may include steps S501 to S504.
Step S501, a texture slice is acquired.
For example, the size of the texture slice is the same as the size of the sample image.
Step S502, a first blurring process is performed on the sample image to obtain a first blurred image.
For example, the sharpness of the first blurred image is smaller than the sharpness of the sample image.
In step S503, the first blurred image and the texture slice are color-mixed to obtain a second blurred image.
Step S504, performing second blurring processing on the second blurred image to obtain an image to be trained.
For example, step S501 may include: acquiring at least one preset texture image; randomly selecting a preset texture image from the at least one preset texture image as a target texture image; in response to the size of the target texture image being the same as the size of the sample image, taking the target texture image as the texture slice; and in response to the size of the target texture image being larger than that of the sample image, randomly cutting the target texture image based on the size of the sample image to obtain a slice region with the same size as the sample image, and taking the slice region as the texture slice.
Fig. 6B is a schematic diagram of a texture slice according to at least one embodiment of the present disclosure. As shown in fig. 6B, the texture slice has mottled spots simulating photographic noise (e.g., film grain) and mottled lines simulating scratches, which may be randomly generated or preset, but the disclosure is not limited thereto.
For example, a plurality of preset texture images may be generated in advance, each preset texture image has randomly distributed mottled spots and mottled lines, the size of each preset texture image may be set to be larger than that of the sample image, and when a texture slice is obtained, one preset texture image is selected from the plurality of preset texture images to be used as a target texture image, and the target texture image is randomly cut to obtain a slice region with the same size as that of the sample image to be used as the texture slice. In this way, the state of the image with lower definition can be simulated more realistically.
It should be noted that, in other embodiments, the size of the target texture image may also be smaller than that of the sample image, and then based on the size of the sample image, the target texture image is enlarged so that the size of the target texture image is the same as that of the sample image, and the enlarged target texture image is the texture slice.
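A sketch of step S501 as just described, covering the equal-size, larger (random crop), and smaller (enlarge) cases; function and variable names are illustrative:

```python
import random
import cv2
import numpy as np

def get_texture_slice(preset_textures: list, sample_h: int, sample_w: int) -> np.ndarray:
    """Pick a preset texture image at random and cut (or enlarge) it to the
    sample image's size."""
    tex = random.choice(preset_textures)          # random target texture image
    th, tw = tex.shape[:2]
    if (th, tw) == (sample_h, sample_w):
        return tex                                # same size: use it directly
    if th >= sample_h and tw >= sample_w:
        # Larger: randomly cut a slice region of the sample image's size.
        y = random.randint(0, th - sample_h)
        x = random.randint(0, tw - sample_w)
        return tex[y:y + sample_h, x:x + sample_w]
    # Smaller: enlarge to the sample size (the variant noted above).
    return cv2.resize(tex, (sample_w, sample_h))
```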
For example, the first blurring processing includes a Gaussian blur process, a noise addition process, or a combination of any number of Gaussian blur and noise addition processes performed in any order; the same holds for the second blurring processing.
It should be noted that the Gaussian blur (Gaussian Blur) processes involved may use the same or different blur parameters, and the noise addition processes may use the same or different noise parameters. Within a combination, the blur parameters of the individual Gaussian blur steps may be the same or different and may be set according to actual needs; the same applies to the noise parameters of the individual noise addition steps. The present disclosure does not limit this.
For example, a Gaussian blur process adjusts the pixel values of pixels according to a Gaussian curve to blur the image, while a noise addition process generates image noise, such as Gaussian white noise, that is combined with the image to blur it. The specific implementations of the Gaussian blur and noise addition processes may adopt any relevant techniques in image processing; the present disclosure does not limit this.
For example, step S502 may specifically include: performing Gaussian blur processing on the sample image to obtain the first blurred image.
For example, step S504 may specifically include: performing the second blurring processing on the second blurred image to obtain the image to be trained, where the second blurring processing includes: performing noise addition processing on the second blurred image to obtain an intermediate blurred image; and performing Gaussian blur processing on the intermediate blurred image to obtain the image to be trained.
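Putting steps S502 to S504 together under the specific choices just listed (Gaussian blur, then color mixing, then noise addition, then another Gaussian blur), a sketch might read as follows; the sigma and noise parameters are assumed values, and the Screen blend anticipates Formula 1 below:

```python
import cv2
import numpy as np

def make_image_to_be_trained(sample: np.ndarray, texture_slice: np.ndarray,
                             sigma: float = 2.0, noise_std: float = 8.0) -> np.ndarray:
    """Steps S502-S504 with assumed demonstration parameters."""
    # S502, first blurring processing: Gaussian blur of the sample image.
    first = cv2.GaussianBlur(sample, (0, 0), sigma)
    # S503, color mixing: Screen blend with the texture slice (Formula 1 below).
    a = first.astype(np.float32)
    b = texture_slice.astype(np.float32)
    second = 255 - (255 - a) * (255 - b) / 255
    # S504, second blurring processing: noise addition, then another Gaussian blur.
    noisy = second + np.random.normal(0.0, noise_std, second.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    return cv2.GaussianBlur(noisy, (0, 0), sigma)  # the image to be trained
```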
For example, the color mixing processing includes one or more of a color filtering (Screen) process, a layer addition (Addition) process, a lighten (Lighten Only) process, and the like.
For example, in some embodiments, step S503 may include: performing color filtering (Screen) processing on the first blurred image and the texture slice to obtain the second blurred image.
For example, the pixels of the first blurred image, of the texture slice, and of the second blurred image are each arranged in p rows and q columns, where p and q are positive integers. For example, pixel values are 8-bit, that is, the value of each channel of a pixel ranges from 0 to 255.
For example, when color filtering (Screen) processing is performed on the first blurred image and the texture slice, the pixel value of the pixel in row t3 and column t4 of the second blurred image is calculated as follows:
Result_pix = 255 - (255 - fig1_pix) × (255 - slice_pix) / 255 (Formula 1)
where Result_pix is the pixel value of the pixel in row t3 and column t4 of the second blurred image, fig1_pix is the pixel value of the pixel in row t3 and column t4 of the first blurred image, and slice_pix is the pixel value of the pixel in row t3 and column t4 of the texture slice.
For example, in other embodiments, step S503 may include: performing lighten (Lighten Only) processing on the texture slice and the first blurred image to obtain the second blurred image.
For example, when lighten (Lighten Only) processing is performed on the first blurred image and the texture slice, the pixel value of the pixel in row t3 and column t4 of the second blurred image is calculated as follows:
Result_pix = max(fig1_pix, slice_pix) (Formula 2)
where max(x, y) denotes the maximum of x and y; the other parameters have the same meanings as in Formula 1 and are not repeated here.
For example, in other embodiments, step S503 may include: performing layer addition (Addition) processing on the texture slice and the first blurred image to obtain the second blurred image.
For example, when layer addition (Addition) processing is performed on the first blurred image and the texture slice, the pixel value of the pixel in row t3 and column t4 of the second blurred image is calculated as follows:
Result_pix = fig1_pix + slice_pix (Formula 3)
The parameters have the same meanings as in Formula 1 and are not repeated here.
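The three blend modes of Formulas 1 to 3 can be applied element-wise to whole 8-bit images, as in this sketch; the clipping in the Addition case is an assumption to keep results in the 8-bit range, which the formulas above leave implicit:

```python
import numpy as np

def blend(fig1_pix: np.ndarray, slice_pix: np.ndarray, mode: str) -> np.ndarray:
    """Formulas 1-3, applied element-wise to whole 8-bit images."""
    a = fig1_pix.astype(np.float32)
    b = slice_pix.astype(np.float32)
    if mode == "screen":          # Formula 1
        out = 255 - (255 - a) * (255 - b) / 255
    elif mode == "lighten":       # Formula 2
        out = np.maximum(a, b)
    elif mode == "addition":      # Formula 3 (clipped to stay within 8 bits)
        out = a + b
    else:
        raise ValueError(f"unknown blend mode: {mode}")
    return np.clip(out, 0, 255).astype(np.uint8)
```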
It should be noted that the color mixing process may also adopt other blending modes (Blend Mode) as required, and the disclosure is not limited thereto.
Fig. 7A is a sample image provided by at least one embodiment of the present disclosure, and fig. 7B is an image to be trained provided by at least one embodiment of the present disclosure, for example, the image to be trained shown in fig. 7B is an image obtained after the foregoing first blurring process, color mixing process, and second blurring process are performed on the sample image shown in fig. 7A.
As shown in fig. 7A, the sample image is a high-definition image. The corresponding image to be trained, obtained after the first blurring processing, the color mixing processing, and the second blurring processing of the foregoing steps, is shown in fig. 7B: its definition is lower than that of the sample image, and it carries simulated noise and scratches.
At least one embodiment of the present disclosure further provides an image processing apparatus, and fig. 8 is a schematic block diagram of an image processing apparatus provided in at least one embodiment of the present disclosure.
As shown in fig. 8, the image processing apparatus 800 may include: an image acquisition unit 801, a first processing unit 802, a second processing unit 803, and a synthesis unit 804.
For example, the units may be implemented as hardware (e.g., circuit) modules, software modules, or any combination of the two; the same applies to the following embodiments and will not be repeated. These units may be implemented, for example, by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit having data processing and/or instruction execution capabilities, together with corresponding computer instructions.
For example, the image acquisition unit 801 is configured to acquire an image to be processed, wherein the image to be processed includes a target region.
For example, the first processing unit 802 is configured to perform a first sharpening process on the image to be processed through the first neural network model to obtain a first intermediate image corresponding to the image to be processed, where a sharpness of the first intermediate image is greater than a sharpness of the image to be processed.
For example, the second processing unit 803 is configured to perform a second sharpening process on an intermediate target region corresponding to the target region in the first intermediate image through the second neural network model to obtain a second intermediate image corresponding to the intermediate target region.
For example, the synthesizing unit 804 is configured to perform synthesizing processing on the first intermediate image and the second intermediate image to obtain a synthesized image corresponding to the image to be processed.
For example, the image acquisition unit 801, the first processing unit 802, the second processing unit 803, and the synthesis unit 804 may include code and programs stored in a memory, which a processor may execute to implement some or all of the functions of these units as described above. Alternatively, they may be dedicated hardware devices implementing some or all of those functions, or one circuit board or a combination of circuit boards realizing them. In an embodiment of the present application, the circuit board or combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory memories connected to the processors; and (3) processor-executable firmware stored in the memories.
It should be noted that the image acquiring unit 801 may be configured to implement step S10 shown in fig. 1, the first processing unit 802 may be configured to implement step S20 shown in fig. 1, the second processing unit 803 may be configured to implement step S30 shown in fig. 1, and the synthesizing unit 804 may be configured to implement step S40 shown in fig. 1. Therefore, for specific descriptions of functions that can be realized by the image obtaining unit 801, the first processing unit 802, the second processing unit 803, and the synthesizing unit 804, reference may be made to the description of step S10 to step S40 in the above embodiment of the image processing method, and repeated descriptions are omitted. In addition, the image processing apparatus 800 can achieve similar technical effects to the image processing method described above, and will not be described herein again.
It should be noted that, in the embodiments of the present disclosure, the image processing apparatus 800 may include more or fewer circuits or units, and the connection relationships between the circuits or units are not limited and may be determined according to actual requirements. The specific configuration of each circuit or unit is not limited either, and each may be formed of analog devices, digital chips, or other suitable components according to circuit principles.
At least one embodiment of the present disclosure further provides an electronic device, and fig. 9 is a schematic diagram of an electronic device provided in at least one embodiment of the present disclosure.
For example, as shown in fig. 9, the electronic device includes a processor 901, a communication interface 902, a memory 903, and a communication bus 904. The processor 901, the communication interface 902, and the memory 903 communicate with one another via the communication bus 904, and may also communicate with one another via a network connection; the present disclosure does not limit the type and function of the network. It should be noted that the components of the electronic device shown in fig. 9 are exemplary rather than limiting, and the electronic device may have other components according to the actual application.
For example, the memory 903 is configured to store computer-readable instructions non-transitorily. The processor 901 is configured to implement the image processing method according to any one of the above embodiments when executing the computer-readable instructions. For the specific implementation of each step of the image processing method and related explanations, reference may be made to the above embodiments of the image processing method, which are not repeated here.
For example, other implementation manners of the image processing method implemented by the processor 901 executing the computer readable instructions stored in the memory 903 are the same as the implementation manners mentioned in the foregoing method embodiment, and are not described herein again.
For example, the communication bus 904 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
For example, communication interface 902 is used to enable communication between an electronic device and other devices.
For example, the processor 901 and the memory 903 may be located on a server side (or cloud side).
For example, the processor 901 may control other components in the electronic device to perform desired functions. The processor 901 may be a device having data processing capability and/or program execution capability, such as a Central Processing Unit (CPU), a Network Processor (NP), a Tensor Processing Unit (TPU), or a Graphics Processing Unit (GPU); it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The Central Processing Unit (CPU) may adopt an X86 or ARM architecture, for example.
For example, the memory 903 may comprise any combination of one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory, or the like. Non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer-readable instructions may be stored on the computer-readable storage medium, and the processor 901 may execute the instructions to implement various functions of the electronic device. Various application programs, various data, and the like may also be stored in the storage medium.
For example, in some embodiments, the electronic device may also include an image acquisition component. The image acquisition component is used for acquiring images. The memory 903 is also used to store the acquired images.
For example, the image acquisition component may be a camera of a smartphone, a camera of a tablet computer, a camera of a personal computer, a lens of a digital camera, or even a webcam.
For example, the detailed description of the process of executing the image processing by the electronic device may refer to the related description in the embodiment of the image processing method, and repeated descriptions are omitted.
Fig. 10 is a schematic diagram of a non-transitory computer-readable storage medium according to at least one embodiment of the present disclosure. For example, as shown in fig. 10, the storage medium 1000 may be a non-transitory computer-readable storage medium, and one or more computer-readable instructions 1001 may be stored on the storage medium 1000 non-transitorily. For example, the computer-readable instructions 1001, when executed by a processor, may perform one or more steps of the image processing method described above.
For example, the storage medium 1000 may be applied to the electronic device described above, and for example, the storage medium 1000 may include a memory in the electronic device.
For example, the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a flash memory, or any combination of the above, as well as other suitable storage media.
For example, the description of the storage medium 1000 may refer to the description of the memory in the embodiment of the electronic device, and repeated descriptions are omitted.
FIG. 11 is a schematic diagram of a hardware environment provided by at least one embodiment of the present disclosure. The electronic device provided by the present disclosure may be applied to an Internet system.
The functions of the image processing apparatus and/or the electronic device described in the present disclosure may be implemented by the computer system shown in FIG. 11. Such a computer system may include a personal computer, a laptop, a tablet, a mobile phone, a personal digital assistant, smart glasses, a smart watch, a smart ring, a smart helmet, or any other smart portable or wearable device. The particular system in this embodiment uses a functional block diagram to illustrate a hardware platform containing a user interface. Such a computer device may be a general-purpose computer device or a special-purpose computer device, and either may be used to implement the image processing apparatus and/or the electronic device in this embodiment. The computer system may include any component needed to implement the image processing described herein. For example, the computer system can be implemented by the computer device through its hardware devices, software programs, firmware, and combinations thereof. For convenience, only one computer device is depicted in FIG. 11, but the computer functions required for the image processing described in this embodiment may also be implemented in a distributed manner by a set of similar platforms, thereby distributing the processing load of the computer system.
As shown in FIG. 11, the computer system may include a communication port 250 coupled to a network that enables data communication; for example, the computer system may send and receive information and data via the communication port 250, i.e., the communication port 250 enables the computer system to communicate wirelessly or by wire with other electronic devices to exchange data. The computer system may also include a processor complex 220 (i.e., the processor described above) for executing program instructions; the processor complex 220 may be composed of at least one processor (e.g., a CPU). The computer system may include an internal communication bus 210. The computer system may include various forms of program storage units and data storage units (i.e., the memory or storage medium described above), such as a hard disk 270, a Read Only Memory (ROM) 230, and a Random Access Memory (RAM) 240, which can be used to store various data files used in computer processing and/or communication, as well as program instructions executed by the processor complex 220. The computer system may also include an input/output component 260 for implementing input/output data flow between the computer system and other components (e.g., the user interface 280).
Generally, the following devices may be connected to the input/output component 260: input devices including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices including, for example, a magnetic tape, a hard disk, and the like; and a communication interface.
While fig. 11 illustrates a computer system having various devices, it is to be understood that a computer system is not required to have all of the devices illustrated and that a computer system may alternatively have more or fewer devices.
For the present disclosure, there are also the following points to be explained:
(1) the drawings of the embodiments of the disclosure only relate to the structures related to the embodiments of the disclosure, and other structures can refer to the common design.
(2) Thicknesses and dimensions of layers or structures may be exaggerated in the drawings used to describe the embodiments of the present disclosure for clarity. It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" or "under" another element, it can be "directly on" or "directly under" the other element, or intervening elements may be present.
(3) Without conflict, embodiments of the present disclosure and features of the embodiments may be combined with each other to arrive at new embodiments.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and the scope of the present disclosure should be subject to the scope of the claims.

Claims (17)

1. An image processing method comprising:
acquiring an image to be processed, wherein the image to be processed comprises a target region;
performing first sharpening processing on the image to be processed through a first neural network model to obtain a first intermediate image corresponding to the image to be processed, wherein the sharpness of the first intermediate image is greater than that of the image to be processed;
performing second sharpening processing on an intermediate target region corresponding to the target region in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target region;
and synthesizing the first intermediate image and the second intermediate image to obtain a synthesized image corresponding to the image to be processed.
2. The image processing method according to claim 1, wherein before the second sharpening processing is performed on the intermediate target region corresponding to the target region in the first intermediate image through the second neural network model to obtain the second intermediate image corresponding to the intermediate target region, the image processing method further comprises:
performing recognition processing on the first intermediate image through a third neural network model to obtain the intermediate target region corresponding to the target region in the first intermediate image.
3. The image processing method according to claim 1, wherein the sharpness of the second intermediate image is greater than the sharpness of the intermediate target region.
4. The image processing method according to claim 1, wherein synthesizing the first intermediate image and the second intermediate image to obtain the synthesized image corresponding to the image to be processed comprises:
performing tone processing on the second intermediate image based on the tone of the first intermediate image to obtain a third intermediate image, wherein the tone of the third intermediate image approaches the tone of the first intermediate image;
and performing image merging processing on the first intermediate image and the third intermediate image to obtain the synthesized image.
5. The image processing method according to any one of claims 1 to 4, wherein the target region is a human face region.
6. The image processing method according to any one of claims 1 to 4, wherein the first neural network model is different from the second neural network model.
7. The image processing method according to any one of claims 1 to 4, further comprising, before the first sharpening processing is performed on the image to be processed through the first neural network model:
acquiring a sample image;
performing blurring processing on the sample image to obtain an image to be trained, wherein the sharpness of the image to be trained is less than the sharpness of the sample image;
and training a first neural network model to be trained and a second neural network model to be trained on the basis of the sample image and the image to be trained so as to obtain the first neural network model and the second neural network model.
8. The image processing method according to claim 7, wherein performing blurring processing on the sample image to obtain the image to be trained comprises:
obtaining a texture slice, wherein the size of the texture slice is the same as the size of the sample image;
performing first blurring processing on the sample image to obtain a first blurred image, wherein the sharpness of the first blurred image is less than the sharpness of the sample image;
performing color mixing processing on the first blurred image and the texture slice to obtain a second blurred image;
and performing second blurring processing on the second blurred image to obtain the image to be trained.
9. The image processing method according to claim 8, wherein obtaining the texture slice comprises:
acquiring at least one preset texture image;
randomly selecting a preset texture image from the at least one preset texture image as a target texture image;
in response to the size of the target texture image being the same as the size of the sample image, taking the target texture image as the texture slice;
and in response to the size of the target texture image being larger than the size of the sample image, randomly cropping the target texture image based on the size of the sample image to obtain a slice region having the same size as the sample image, and taking the slice region as the texture slice.
10. The image processing method according to claim 8, wherein the first blurring processing comprises a Gaussian blurring processing, a noise addition processing, or a combined processing composed of the Gaussian blurring processing and the noise addition processing in any order and in any number;
and the second blurring processing comprises the Gaussian blurring processing, the noise addition processing, or a combined processing composed of the Gaussian blurring processing and the noise addition processing in any order and in any number.
11. The image processing method according to claim 10, wherein performing first blurring processing on the sample image to obtain the first blurred image comprises: performing the Gaussian blurring processing on the sample image to obtain the first blurred image;
and performing second blurring processing on the second blurred image to obtain the image to be trained comprises: performing the noise addition processing on the second blurred image to obtain an intermediate blurred image; and performing the Gaussian blurring processing on the intermediate blurred image to obtain the image to be trained.
12. The image processing method according to claim 8, wherein performing color mixing processing on the first blurred image and the texture slice to obtain the second blurred image comprises:
performing color filtering processing on the first blurred image and the texture slice to obtain the second blurred image.
13. The image processing method according to claim 8, wherein performing color mixing processing on the first blurred image and the texture slice to obtain the second blurred image comprises:
performing highlighting processing on the first blurred image and the texture slice to obtain the second blurred image.
14. An image processing apparatus comprising:
an image acquisition unit configured to acquire an image to be processed, wherein the image to be processed comprises a target region;
a first processing unit configured to perform first sharpening processing on the image to be processed through a first neural network model to obtain a first intermediate image corresponding to the image to be processed, wherein the sharpness of the first intermediate image is greater than that of the image to be processed;
a second processing unit configured to perform second sharpening processing on an intermediate target region corresponding to the target region in the first intermediate image through a second neural network model to obtain a second intermediate image corresponding to the intermediate target region;
and a synthesizing unit configured to perform synthesizing processing on the first intermediate image and the second intermediate image to obtain a synthesized image corresponding to the image to be processed.
15. The image processing apparatus according to claim 14, wherein the synthesizing unit includes a tone processing module and an image merging processing module,
the tone processing module is configured to perform tone processing on the second intermediate image based on the tone of the first intermediate image to obtain a third intermediate image, wherein the tone of the third intermediate image approaches the tone of the first intermediate image;
and the image merging processing module is configured to perform image merging processing on the first intermediate image and the third intermediate image to obtain the synthesized image.
16. An electronic device, comprising:
a memory non-transiently storing computer executable instructions;
a processor configured to execute the computer-executable instructions,
wherein the computer-executable instructions, when executed by the processor, implement the image processing method of any of claims 1-13.
17. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer-executable instructions that, when executed by a processor, implement the image processing method according to any one of claims 1-13.
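For example, the processing recited in several of the above claims may be illustrated by short Python sketches; they are explanatory only and form no part of the claimed subject matter. The tone processing and image merging processing of claims 4 and 15 may be sketched as follows, assuming that the tone of an 8-bit RGB image is approximated by its per-channel mean and standard deviation; this reading of the translated term "tone" is an assumption rather than a requirement of the claims.

```python
# A minimal sketch, assuming "tone" means per-channel mean/std statistics.
import numpy as np

def tone_process(second_intermediate, first_intermediate_region):
    src = second_intermediate.astype(np.float64)
    ref = first_intermediate_region.astype(np.float64)
    third_intermediate = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        # shift and scale each channel toward the reference tone statistics
        third_intermediate[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    return np.clip(third_intermediate, 0, 255).astype(np.uint8)

def image_merge(first_intermediate, third_intermediate, box):
    # image merging processing: paste the tone-processed region back
    x0, y0, x1, y1 = box
    composite = first_intermediate.copy()
    composite[y0:y1, x0:x1] = third_intermediate
    return composite
```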
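For example, the training flow of claim 7 may be sketched as follows, assuming PyTorch-style models and an L1 reconstruction loss; the claims specify neither the network architectures nor the loss function, so both are assumptions.

```python
# A framework-style sketch of one training step under the stated assumptions.
import torch
import torch.nn.functional as F

def train_step(model_to_train, optimizer, image_to_train, sample_image):
    # image_to_train: blurred network input; sample_image: sharp ground truth
    optimizer.zero_grad()
    restored = model_to_train(image_to_train)  # the model tries to undo the blur
    loss = F.l1_loss(restored, sample_image)   # compare with the sharp sample
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same pairs of blurred inputs and sharp sample images can be used to train both the first neural network model to be trained and the second neural network model to be trained.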
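For example, the texture slice acquisition of claim 9 translates almost directly into code; here preset_texture_images is assumed to be a non-empty list of NumPy arrays, each no smaller than the sample image.

```python
# A direct rendering of claim 9 under the stated assumptions.
import random

def get_texture_slice(preset_texture_images, sample_image):
    target = random.choice(preset_texture_images)  # random target texture image
    th, tw = target.shape[:2]
    sh, sw = sample_image.shape[:2]
    if (th, tw) == (sh, sw):
        return target  # same size as the sample image: use it directly
    # larger than the sample image: take a random slice region of matching size
    y0 = random.randint(0, th - sh)
    x0 = random.randint(0, tw - sw)
    return target[y0:y0 + sh, x0:x0 + sw]
```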
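For example, the blurring sequence of claim 11 may be sketched as follows, with SciPy's gaussian_filter standing in for the Gaussian blurring processing; the sigma value and the noise scale are illustrative assumptions.

```python
# A sketch of the claim-11 sequence for 8-bit RGB arrays.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_blur(image, sigma=1.5):
    # blur the two spatial axes only, not the color channels
    return gaussian_filter(image.astype(np.float64), sigma=(sigma, sigma, 0))

def add_noise(image, scale=5.0):
    # noise addition processing: additive Gaussian noise, clipped to 8-bit range
    return np.clip(image + np.random.normal(0.0, scale, image.shape), 0, 255)

def first_blurring(sample_image):
    return gaussian_blur(sample_image)  # first blurring: Gaussian blurring only

def second_blurring(second_blurred_image):
    intermediate = add_noise(second_blurred_image)  # noise addition first
    return gaussian_blur(intermediate)              # then Gaussian blurring
```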
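For example, one plausible reading of the two color mixing options is that the color filtering processing of claim 12 corresponds to the screen blend and the highlighting processing of claim 13 corresponds to the lighten blend; this reading of the translated terms is an assumption.

```python
# Sketches of screen and lighten blends for 8-bit RGB arrays of equal size.
import numpy as np

def screen_blend(first_blurred_image, texture_slice):
    a = first_blurred_image.astype(np.float64) / 255.0
    b = texture_slice.astype(np.float64) / 255.0
    out = 1.0 - (1.0 - a) * (1.0 - b)  # screen: brightens, never darkens
    return (out * 255.0).astype(np.uint8)

def lighten_blend(first_blurred_image, texture_slice):
    # lighten: per-pixel maximum of the two layers
    return np.maximum(first_blurred_image, texture_slice)
```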
CN202110593707.8A 2021-05-28 2021-05-28 Image processing method and device, electronic equipment and storage medium Pending CN113344832A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110593707.8A CN113344832A (en) 2021-05-28 2021-05-28 Image processing method and device, electronic equipment and storage medium
PCT/CN2022/093586 WO2022247702A1 (en) 2021-05-28 2022-05-18 Image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110593707.8A CN113344832A (en) 2021-05-28 2021-05-28 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113344832A (en) 2021-09-03

Family

ID=77471959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110593707.8A Pending CN113344832A (en) 2021-05-28 2021-05-28 Image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113344832A (en)
WO (1) WO2022247702A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022247702A1 (en) * 2021-05-28 2022-12-01 杭州睿胜软件有限公司 Image processing method and apparatus, electronic device, and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160098853A1 (en) * 2013-08-28 2016-04-07 Tencent Technology (Shenzhen) Company Limited Method and apparatus for processing image
CN108737750A (en) * 2018-06-07 2018-11-02 北京旷视科技有限公司 Image processing method, device and electronic equipment
CN109087256A (en) * 2018-07-19 2018-12-25 北京飞搜科技有限公司 A kind of image deblurring method and system based on deep learning
CN110175968A (en) * 2018-02-21 2019-08-27 国际商业机器公司 Generate the artificial image used in neural network
US20200051228A1 (en) * 2017-08-31 2020-02-13 Suzhou Keda Technology Co., Ltd. Face Deblurring Method and Device
CN111488865A (en) * 2020-06-28 2020-08-04 腾讯科技(深圳)有限公司 Image optimization method and device, computer storage medium and electronic equipment
CN111614888A (en) * 2019-02-26 2020-09-01 纬创资通股份有限公司 Image blurring processing method and system
CN111754396A (en) * 2020-07-27 2020-10-09 腾讯科技(深圳)有限公司 Face image processing method and device, computer equipment and storage medium
CN111914785A (en) * 2020-08-10 2020-11-10 北京小米松果电子有限公司 Method and device for improving definition of face image and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090060373A1 (en) * 2007-08-24 2009-03-05 General Electric Company Methods and computer readable medium for displaying a restored image
CN110097110B (en) * 2019-04-26 2021-07-20 华南理工大学 Semantic image restoration method based on target optimization
CN111445415B (en) * 2020-03-30 2024-03-08 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium
CN112419179B (en) * 2020-11-18 2024-07-05 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for repairing image
CN113344832A (en) * 2021-05-28 2021-09-03 杭州睿胜软件有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2022247702A1 (en) 2022-12-01

Similar Documents

Publication Publication Date Title
US11373275B2 (en) Method for generating high-resolution picture, computer device, and storage medium
US11107205B2 (en) Techniques for convolutional neural network-based multi-exposure fusion of multiple image frames and for deblurring multiple image frames
CN110163235B (en) Training of image enhancement model, image enhancement method, device and storage medium
CN106778928B (en) Image processing method and device
US10708525B2 (en) Systems and methods for processing low light images
US20200234414A1 (en) Systems and methods for transforming raw sensor data captured in low-light conditions to well-exposed images using neural network architectures
US9639956B2 (en) Image adjustment using texture mask
CN113168684B (en) Method, system and computer readable medium for improving quality of low brightness images
CN113034358B (en) Super-resolution image processing method and related device
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
WO2018176925A1 (en) Hdr image generation method and apparatus
CN110335216B (en) Image processing method, image processing apparatus, terminal device, and readable storage medium
CN111383232A (en) Matting method, matting device, terminal equipment and computer-readable storage medium
CN107547803B (en) Video segmentation result edge optimization processing method and device and computing equipment
US10460487B2 (en) Automatic image synthesis method
CN111179166B (en) Image processing method, device, equipment and computer readable storage medium
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
CN109118509B (en) Blackboard writing image processing method, device, equipment and storage medium
CN113096043B (en) Image processing method and device, electronic device and storage medium
WO2022247702A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN113628259A (en) Image registration processing method and device
CN115908120B (en) Image processing method and electronic device
CN111767924A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN113658050A (en) Image denoising method, denoising device, mobile terminal and storage medium
KR20210040702A (en) Mosaic generation apparatus and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination