CN111062924A - Image processing method, device, terminal and storage medium - Google Patents

Image processing method, device, terminal and storage medium

Info

Publication number
CN111062924A
CN111062924A (application CN201911302146.0A; granted as CN111062924B)
Authority
CN
China
Prior art keywords
image
processed
local
detail
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911302146.0A
Other languages
Chinese (zh)
Other versions
CN111062924B (en)
Inventor
胡风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911302146.0A priority Critical patent/CN111062924B/en
Publication of CN111062924A publication Critical patent/CN111062924A/en
Application granted granted Critical
Publication of CN111062924B publication Critical patent/CN111062924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose an image processing method, an image processing apparatus, a terminal, and a storage medium. An image to be processed is acquired, comprising a region to be processed and a reference region. Edge information for the image is predicted based on the reference region, and the region to be processed is then edited according to that edge information to obtain an overall predicted image. Local selection is performed on the overall predicted image to obtain a local image comprising a local region to be processed and a local reference region; image details of the local region to be processed are predicted based on the local reference region to obtain a detail predicted image of the local image. Finally, a result image is generated from the overall predicted image and the detail predicted image. The scheme improves the quality of images generated by the image processing method.

Description

Image processing method, device, terminal and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, a terminal, and a storage medium.
Background
In recent years, sensor technology has developed continuously, improving the detection precision of image sensors and the resolution of the images they generate. In the field of cultural relic restoration, for example, high-definition cameras can capture high-resolution images of cultural relics and historic sites, such as high-resolution photographs of the Dunhuang murals or large-format photographs of calligraphy and painting works.
However, manual image processing is inefficient, and it is difficult for a computer to process such high-resolution, large-format images effectively, so the images generated by current image processing methods are of low quality.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, a terminal and a storage medium, which can improve the image quality of an image generated by the image processing method.
The embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring an image to be processed, wherein the image to be processed comprises a region to be processed and a reference region;
performing edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed;
editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image;
local selection is carried out on the overall prediction image to obtain a local image of the overall prediction image, wherein the local image comprises a local to-be-processed area and a local reference area;
performing image detail prediction on a local to-be-processed region of the local image based on the local reference region to obtain a detail prediction image of the local image;
and generating a result image according to the overall prediction image and the detail prediction image.
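The claimed steps can be sketched end to end as a pipeline. This is a minimal Python illustration, not the patent's implementation: the function names, the list-of-lists image representation, the boolean mask, and the constant fill value are all assumptions standing in for the trained networks described later.

```python
# Hypothetical sketch of the claimed pipeline. Placeholder functions
# stand in for the edge prediction and color filling networks.

def predict_edges(image, mask):
    # Placeholder: a real system would run an edge prediction network.
    return [[1 if mask[y][x] else 0 for x in range(len(row))]
            for y, row in enumerate(image)]

def fill_region(image, mask, edges):
    # Placeholder: fill each masked pixel (a color filling network
    # would predict real values here); 128 is an arbitrary constant.
    return [[image[y][x] if not mask[y][x] else 128
             for x in range(len(row))] for y, row in enumerate(image)]

def process_image(image, mask, crops):
    """image: 2D pixel grid; mask: 2D grid, True = to-be-processed;
    crops: list of (y0, y1, x0, x1) local regions to refine."""
    edges = predict_edges(image, mask)           # edge information prediction
    overall = fill_region(image, mask, edges)    # overall predicted image
    details = []
    for (y0, y1, x0, x1) in crops:               # local selection + detail
        sub_img = [row[x0:x1] for row in overall[y0:y1]]
        sub_mask = [row[x0:x1] for row in mask[y0:y1]]
        sub_edges = predict_edges(sub_img, sub_mask)
        details.append(((y0, x0), fill_region(sub_img, sub_mask, sub_edges)))
    for (y0, x0), patch in details:              # generate result image
        for dy, row in enumerate(patch):
            for dx, v in enumerate(row):
                overall[y0 + dy][x0 + dx] = v
    return overall
```

Each placeholder corresponds to one claimed step, so the later sections on the edge prediction network and color filling network slot into `predict_edges` and `fill_region` respectively.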
In some embodiments, after acquiring the image to be processed, the method further includes:
acquiring an image processing model set;
carrying out style judgment processing on the image to be processed, and determining the image style of the image to be processed;
determining a target image processing model in the image processing model set according to the image style;
the method comprises the steps of carrying out edge information prediction on an image to be processed based on a reference region based on a target image processing model to obtain edge information corresponding to the image to be processed, carrying out image editing on the region to be processed of the image to be processed according to the edge information to obtain an overall predicted image of the image to be processed, carrying out local selection on the overall predicted image to obtain a local image of the overall predicted image, wherein the local image comprises a local region to be processed and a local reference region, carrying out image detail prediction on the local region to be processed of the local image based on the local reference region to obtain predicted image details of the local image, and finally generating a result image according to the overall predicted image and the detail predicted image.
In some embodiments, obtaining a set of image processing models comprises:
obtaining an initial model and a training sample group, wherein the training sample group comprises a plurality of training samples with the same style;
training the initial model by adopting the training sample group to obtain a trained image processing model, wherein the trained image processing model comprises an edge prediction network and a color filling network;
and marking the image processing model as the style of the training sample group, and placing the image processing model marked with the style in an image processing model set.
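The style-labelled model set described in these steps can be sketched as a keyed collection. The class names, the style labels, and the fallback behaviour are illustrative assumptions, not the patent's structures.

```python
# Minimal sketch of a style-labelled image processing model set:
# each trained model is marked with the style of its training sample
# group and placed in the set; selection is by image style.

class ImageProcessingModel:
    def __init__(self, style, edge_net=None, color_net=None):
        self.style = style          # style label of the training sample group
        self.edge_net = edge_net    # trained edge prediction network
        self.color_net = color_net  # trained color filling network

class ImageProcessingModelSet:
    def __init__(self):
        self._models = {}

    def add(self, model):
        # "placing the image processing model marked with the style in a set"
        self._models[model.style] = model

    def select(self, image_style):
        # "determining a target image processing model according to the
        # image style"; optionally fall back to a "default" style.
        return self._models.get(image_style, self._models.get("default"))

model_set = ImageProcessingModelSet()
model_set.add(ImageProcessingModel("mural"))
model_set.add(ImageProcessingModel("calligraphy"))
target = model_set.select("mural")
```

The style judgment step of the previous embodiment would supply the `image_style` argument to `select`.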
In some embodiments, the trained image processing model further comprises a scoring network.
In some embodiments, the trained image processing model further comprises a resolution reconstruction network.
In some embodiments, the performing, based on the reference region, edge information prediction on the image to be processed to obtain edge information corresponding to the image to be processed includes:
performing edge information prediction on the image to be processed based on the reference area to obtain edge information to be processed corresponding to the image to be processed;
and carrying out manual editing processing on the edge information to be processed to obtain the edge information corresponding to the image to be processed.
An embodiment of the present invention further provides an image processing apparatus, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be processed, and the image to be processed comprises an area to be processed and a reference area;
the edge unit is used for carrying out edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed;
the editing unit is used for editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image;
the local unit is used for locally selecting the overall prediction image to obtain a local image of the overall prediction image, wherein the local image comprises a local to-be-processed area and a local reference area;
the detail unit is used for carrying out image detail prediction on a local to-be-processed area of the local image based on the local reference area to obtain a detail prediction image of the local image;
a result unit for generating a result image from the overall prediction image and the detail prediction image.
The embodiment of the invention also provides a terminal comprising a memory and a processor, wherein the memory stores a plurality of instructions; the processor loads the instructions from the memory to execute the steps of any image processing method provided by the embodiments of the invention.
The embodiment of the present invention further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are suitable for being loaded by a processor to perform any of the steps in the image processing method provided by the embodiment of the present invention.
An embodiment of the invention can acquire an image to be processed comprising a region to be processed and a reference region; predict edge information for the image based on the reference region; edit the region to be processed according to the edge information to obtain an overall predicted image; locally select from the overall predicted image a local image comprising a local region to be processed and a local reference region; predict image details of the local region to be processed based on the local reference region to obtain a detail predicted image of the local image; and generate a result image from the overall predicted image and the detail predicted image.
In the invention, edge information of the image to be processed is first obtained globally, and an overall predicted image is predicted from that edge information. Detail predicted images of local images within the image are then predicted from local detail, and a result image is finally generated from the global overall predicted image and the multiple local detail predicted images. Processing the image globally and then supplementing details locally lets the local detail predicted images accurately supply detail information that was not predicted in the overall predicted image, so the generated result image is refined from the whole down to the local level. This is particularly effective for high-resolution, detail-rich, large-format images, and thus improves the quality of images generated by the processing.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1a is a schematic view of a scene of an image processing method according to an embodiment of the present invention;
FIG. 1b is a schematic flow chart of a first image processing method according to an embodiment of the present invention;
FIG. 1c is a schematic diagram illustrating an effect of a conventional image processing method according to an embodiment of the present invention;
FIG. 1d is a schematic structural diagram of an image to be processed according to the image processing method provided by the embodiment of the present invention;
fig. 1e is a schematic diagram of calibrating a region to be processed in the image processing method according to the embodiment of the present invention;
FIG. 1f is a schematic diagram of edge information prediction of a reference region in an image processing method according to an embodiment of the present invention;
fig. 1g is a schematic structural diagram of an SRCNN of the image processing method according to the embodiment of the present invention;
FIG. 2a is a schematic flow chart of a second image processing method according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of edge prediction in an image processing method according to an embodiment of the present invention;
FIG. 2c is a schematic diagram of color filling of an image processing method according to an embodiment of the present invention;
FIG. 2d is a diagram illustrating a multi-stage iteration of an image processing method according to an embodiment of the present invention;
fig. 2e is a schematic diagram of resolution reconstruction processing of the image processing method according to the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
The embodiment of the invention provides an image processing method, an image processing device, a terminal and a storage medium.
The image processing apparatus may be integrated in an electronic device such as a terminal or a server. The terminal may be a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, or a personal computer (PC); the server may be a single server or a cluster composed of multiple servers.
In some embodiments, the image processing apparatus may be integrated into a plurality of electronic devices, for example, the image processing apparatus may be integrated into a plurality of servers, and the image processing method of the present invention is implemented by the plurality of servers.
In some embodiments, the terminal may also serve as a server, and the image processing apparatus may be specifically integrated in the terminal.
Referring to fig. 1a, the electronic device may acquire an image to be processed comprising a region to be processed and a reference region; predict edge information for the image based on the reference region; edit the region to be processed according to the edge information to obtain an overall predicted image; locally select from the overall predicted image to obtain local image 1 and local image 2, each comprising a local region to be processed and a local reference region; predict image details for each local region to be processed based on its local reference region, obtaining detail predicted image 1 of local image 1 and detail predicted image 2 of local image 2; and generate a result image from the overall predicted image, detail predicted image 1, and detail predicted image 2.
The following are detailed below.
The numbers in the following examples are not intended to limit the order of preference of the examples.
In the present embodiment, an image processing method is provided, and as shown in fig. 1b, a specific flow of the image processing method may be as follows:
101. and acquiring an image to be processed, wherein the image to be processed comprises a region to be processed and a reference region.
The image to be processed is an image on which image processing is to be performed. It comprises a region to be processed, i.e. the local area requiring processing, and a reference region, i.e. the area that needs no processing and can serve as a reference object to assist in processing the region to be processed.
For example, referring to fig. 1c, the left image is an image to be processed with a size of 500 mm × 500 mm, and the right image was generated after certain image processing software performed completion on it. The blank area in the left image is the region to be processed; because of gaps, contamination, distortion, and similar damage, this region cannot convey the original information of the image.
As the figure shows, the prior art cannot accurately complete and restore the image to be processed, especially for large-format, high-resolution, detail-rich images, and the images it generates are of low quality. It can only repeat elements that already exist in the reference region inside the region to be processed, creating an illusion of restoration, so the repaired image does not accurately represent the true content of the region to be processed.
In some embodiments, the image to be processed may further include an invalid region, i.e. an area that needs no processing and cannot serve as a reference object. For example, in fig. 1d the black area is the invalid region, the white area is the reference region, and the hatched area is the region to be processed.
The image to be processed can be obtained in various ways: from a database over a network; by a technician photographing it with a high-precision camera and importing the photograph; from local memory; and so on.
For example, in some embodiments, the image to be processed may be captured by a high-definition camera, multiple captured images may be merged into one large-format, high-resolution, detail-rich image, and the merged image may be imported directly into a database or local memory.
In some embodiments, the region to be processed may be calibrated in the image in advance. Step 101 may thus specifically calibrate the region to be processed in the image, the reference region being the rest of the image.
For example, referring to fig. 1e, the left image in fig. 1e is an image to be processed containing several damaged and stained areas, and the right image shows the calibrated region to be processed. A technician can mark the region to be processed in the image, and the calibration result can be recorded as a matrix of 0 and 255 values (a binary mask).
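The 0/255-matrix calibration described above can be sketched as follows. This is only an illustration of the data structure; the rectangle-based marking helper and its coordinates are assumptions, not the patent's actual calibration tool.

```python
# Record a calibrated to-be-processed region as a matrix of 0 and 255:
# 255 marks pixels that need processing, 0 marks reference pixels.
# The (y0, y1, x0, x1) rectangle format is an illustrative assumption.

def calibrate_mask(height, width, damaged_boxes):
    """damaged_boxes: rectangles a technician marked as to-be-processed."""
    mask = [[0] * width for _ in range(height)]
    for (y0, y1, x0, x1) in damaged_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = 255
    return mask

# One 2x2 damaged patch inside a 4x4 image.
mask = calibrate_mask(4, 4, [(1, 3, 1, 3)])
```

Downstream steps can then treat any pixel with value 255 as belonging to the region to be processed and everything else as the reference region.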
102. And performing edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed.
Edge information refers to image edge information, where an edge is the boundary between one attribute region and another in an image, i.e. a place where a regional attribute of the image changes abruptly. Edge information can be used to distinguish a target from the background.
For example, referring to fig. 1f, the left image in fig. 1f is an image to be processed, and the right image is edge information of the image to be processed.
There are various ways to predict edge information for the image to be processed based on the reference region, for example directly using a trained neural network model to perform the prediction and obtain the edge information corresponding to the image.
The trained neural network model can be obtained in various ways: from a database over a network, by training and importing by a technician, from local memory, and so on.
As another example, in some embodiments, step 102 may include step (2.1) and step (2.2), as follows:
and (2.1) carrying out edge detection processing on the reference area to obtain reference area edge information corresponding to the reference area.
There are various methods for performing edge detection on the reference region, for example extracting the reference region's edge information with a differential operator, a Laplacian of Gaussian operator, a Canny operator, a Sobel operator, and so on.
For example, a Canny operator can perform smoothing operation on a reference image by using a gaussian filter to obtain a smoothed image, then calculate the gradient amplitude and the direction of the smoothed image by using a first-order partial derivative finite difference, then perform non-maximum suppression on the gradient amplitude, and detect and connect edges by using a dual-threshold algorithm, thereby extracting the edge information of the reference region corresponding to the reference region.
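The gradient step shared by these operators can be illustrated with a small pure-Python sketch. This covers only the first-order partial derivative estimate (Sobel kernels); a full Canny implementation would add Gaussian smoothing, non-maximum suppression, and double-threshold hysteresis on top of it, as the paragraph above describes.

```python
# Simplified gradient-magnitude step of Sobel/Canny-style edge detection.
# Sobel kernels approximate the first-order partial derivatives gx, gy;
# the edge strength at each interior pixel is sqrt(gx^2 + gy^2).

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """img: 2D list of grayscale values; borders are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                     for dy in range(3) for dx in range(3))
            gy = sum(SOBEL_Y[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                     for dy in range(3) for dx in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the gradient is strong along the boundary columns.
img = [[0, 0, 255, 255]] * 4
mag = gradient_magnitude(img)
```

In practice a library implementation (e.g. an OpenCV Canny call) would be used rather than this loop, but the sketch shows why edge maps respond where region attributes change abruptly.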
And (2.2) predicting the edge information in the to-be-processed area based on the to-be-processed image and the reference area edge information to obtain the edge information corresponding to the to-be-processed image.
The trained neural network model can be directly adopted to predict the edge information of the image to be processed based on the reference region, so as to obtain the edge information corresponding to the image to be processed.
This trained neural network model can likewise be obtained in various ways: from a database over a network, by training and importing by a technician, from local memory, and so on.
In some embodiments, to further improve the quality of the generated image and the efficiency of processing, step (2.2) may be implemented with deep learning: an edge prediction network in the neural network model predicts the edge information in the region to be processed based on the image to be processed and the reference region's edge information, yielding the edge information corresponding to the image. The edge prediction network may be trained with edge training samples.
The edge prediction network may be any deep neural network (DNN), such as a convolutional neural network (CNN) or a fully connected network (FCN).
For example, a U-Net network (a CNN-based image segmentation network) may be used as an edge prediction network to predict edge information in a region to be processed based on an image to be processed and reference region edge information, so as to obtain edge information corresponding to the image to be processed.
103. And editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image.
Image editing refers to a process of changing an image, for example, image editing may refer to modification, generation, rendering, and the like of an image.
For example, image editing may refer to the correction, generation, deletion, etc. of image edges in an image; as another example, the color distribution of the image in the image may be modified, generated, filled, deleted, etc.
For example, image editing may refer to a process step of filling image colors according to image edges.
Specifically, there are various ways to edit the region to be processed according to the edge information: for example, it can be edited manually according to the edge information, or by a deep learning method.
For example, in some embodiments, to perform image processing automatically and accurately and thereby improve both efficiency and the accuracy of the generated image, step 103 may use a color filling network to fill the region to be processed with color according to the color distribution of the reference region and the edge information corresponding to the image, yielding a preliminary overall predicted image. The color filling network is trained with color training samples.
The color filling network may be any deep neural network, such as a CNN.
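To make the data flow of edge-guided color filling concrete, here is a toy classical stand-in for the color filling network: each to-be-processed pixel copies the nearest reference pixel in its row. A trained CNN would instead learn the fill from the reference region's color distribution and the predicted edge map; this sketch only illustrates inputs and outputs, not the patent's network.

```python
# Toy color filling: masked pixels (255 in the mask) take the value of
# the nearest unmasked reference pixel in the same row. Purely an
# illustration of the fill step's interface, not a learned method.

def fill_colors(img, mask):
    """img: 2D grayscale grid; mask: 2D grid where 255 = to-be-processed."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 255:
                continue
            for d in range(1, w):  # scan outward for a reference pixel
                if x - d >= 0 and mask[y][x - d] != 255:
                    out[y][x] = img[y][x - d]
                    break
                if x + d < w and mask[y][x + d] != 255:
                    out[y][x] = img[y][x + d]
                    break
    return out

img = [[10, 20, 0, 40]]
mask = [[0, 0, 255, 0]]
filled = fill_colors(img, mask)
```

The learned network replaces the scan loop with an inference pass, but consumes the same image-plus-mask input and produces the same shape of overall predicted image.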
104. And local selection is carried out on the overall prediction image to obtain a local image of the overall prediction image, wherein the local image comprises a local to-be-processed area and a local reference area.
The local selection refers to selecting a local part in an image as a local image of the image.
And locally selecting the overall prediction image to obtain at least one local image of the overall prediction image.
For example, the overall predicted image may be segmented so that its upper half becomes local image 1 and its lower half becomes local image 2.
The specific selection scheme can be set by a technician: take the upper half of the image, take the lower half, take a circle of 10-centimeter radius around the image center, and so on.
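The half-splitting example above can be sketched directly; the list-of-lists image representation and the function name are illustrative assumptions.

```python
# "Local selection": cut the overall predicted image into its upper and
# lower halves, producing local image 1 and local image 2.

def select_halves(img):
    h = len(img)
    top = [row[:] for row in img[:h // 2]]      # local image 1
    bottom = [row[:] for row in img[h // 2:]]   # local image 2
    return top, bottom

overall = [[1, 1], [2, 2], [3, 3], [4, 4]]
top, bottom = select_halves(overall)
```

Any other selection scheme (quadrants, a centered crop) is just a different slicing of the same grid.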
The local image may include a local region to be processed and a local reference region, analogous to the region to be processed and reference region of the full image; the local region to be processed is part of the overall region to be processed. In the full image, display resolution and display size may make only the overall region to be processed visible, so a fine local region cannot be seen directly; once the local image is enlarged, the local region to be processed can be displayed successfully.
In some embodiments, to avoid missing or mislabeling fine, hard-to-see local regions within the region to be processed in step 101, and thereby improve the fineness and accuracy of the generated image, step 104 may specifically include the following steps:
magnifying the overall predicted image to obtain a magnified overall predicted image;
performing local selection in the magnified overall predicted image to obtain a local image of the overall predicted image;
and calibrating a local region to be processed in the local image, the local reference region being the rest of the local image.
The local region to be processed can be calibrated in the local image manually, or automatically and efficiently with a deep neural network, and so on.
105. And performing image detail prediction on a local to-be-processed area of the local image based on the local reference area to obtain a detail prediction image of the local image.
Image detail prediction on the local region to be processed, based on the local reference region, can be performed in various ways: manually by an operator, or automatically and efficiently by a deep neural network, yielding the detail predicted image of the local image.
For example, in some embodiments, step 105 may be performed by a multi-scale iterative method to supplement detail information in the overall prediction image, so that the image generated by the image processing method is finer and more accurate, and the quality of the generated image is further improved, so that step 105 may include step (5.1) and step (5.2), as follows:
and (5.1) performing image detail prediction on a local to-be-processed region of the local image based on the local reference region to obtain a maximum-level detail prediction image of the local image, and storing the maximum-level detail prediction image in a detail prediction image set.
Step (5.1) is similar to step 103, and the maximum-level detail predicted image is similar to the overall predicted image, except that the maximum-level detail predicted image is a predicted image of the entire local image; step (5.1) can therefore refer to step 103. In some embodiments, step (5.1) may include step (5.1.1), step (5.1.2), and step (5.1.3), as follows:
and (5.1.1) performing edge information prediction on the local image based on the local reference area to obtain local edge information corresponding to the local image.
And (5.1.2) carrying out image editing on the local to-be-processed area according to the local edge information to obtain a maximum-level detail prediction image of the local image.
(5.1.3) saving the maximum level of detail predicted images in the detail predicted image set.
The detail predicted image set may be an empty set, or may include at least one detail predicted image. After an image is edited and its predicted image is obtained, the predicted image can be saved in the detail predicted image set for the subsequent multi-scale iteration steps.
In some embodiments, in order to improve the richness of the generated image, the edge information corresponding to the image to be processed may be added as noise to the local edge information, so as to generate a plurality of result images for manual selection, so the step (5.1.2) may specifically include the following steps:
performing information fusion processing on the local edge information and the edge information corresponding to the image to be processed to obtain edge fusion information;
and editing the local region to be processed according to the edge fusion information to obtain a maximum-level detail prediction image of the local image.
The information fusion processing refers to performing pixel-level fusion on the edge images corresponding to the two pieces of edge information, for example, directly and simply performing pixel value averaging processing; the two edge images can be semantically fused through a deep neural network, and the like.
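As a minimal sketch of the pixel-level fusion just described, the simple averaging strategy can be written as follows; the function name and the plain-list image representation are illustrative, and a deep-network semantic fusion would replace the averaging:

```python
def fuse_edge_maps(local_edges, global_edges):
    """Pixel-level fusion of two same-sized edge maps by simple averaging.

    Each edge map is a 2-D list of pixel intensities. This is the
    "directly and simply performing pixel value averaging" option; a
    semantic fusion network would be substituted here in practice.
    """
    return [
        [(a + b) / 2 for a, b in zip(row_l, row_g)]
        for row_l, row_g in zip(local_edges, global_edges)
    ]
```

For example, `fuse_edge_maps([[0, 255]], [[255, 255]])` averages each pair of corresponding pixels.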
And (5.2) performing multi-stage iterative generation processing on the image details of the maximum-level detail predicted image based on the detail predicted image set to obtain a plurality of detail predicted images of different levels, and storing the detail predicted images of different levels in the detail predicted image set.
The multi-stage iteration repeatedly takes the previous-level detail predicted image in the detail predicted image set as the current detail predicted image and computes its next-level detail predicted image, until all the fine details are predicted, thereby improving the fineness and accuracy of the image generated by the image processing method.
Specifically, in some embodiments, step (5.2) may include step (5.2.1), step (5.2.2), and step (5.2.3), as follows:
and (5.2.1) acquiring current cycle information.
(5.2.2) when the current loop information belongs to the preset loop range, determining the current detail predicted image in the detail predicted image set.
And (5.2.3) performing image detail prediction on the current detail predicted image to obtain a next-level detail predicted image of the current detail predicted image, and storing the next-level detail predicted image in the detail predicted image set.
The current loop information may include, among others, the current loop number, previous-level detail predicted image information of the current detail predicted image, next-level detail predicted image information of the current detail predicted image, and so on.
The preset loop range may be set by a person skilled in the art. For example, with the preset loop range set to "loop 10 times", the iteration loop continues while the current loop count in the current loop information is less than 10, and stops once the current loop count reaches or exceeds 10.
In particular, each iteration of the multi-stage iteration is similar to steps 102, 103, and 104. In some embodiments, the current detail predicted image may include a current local to-be-processed region and a current local reference region, and step (5.2.3) may specifically include the following steps:
performing edge information prediction on the current detail prediction image based on the current local reference area to obtain current local edge information corresponding to the current detail prediction image;
performing image editing on the current local to-be-processed area according to the current local edge information to obtain a preliminary detail prediction image of the current detail prediction image;
locally selecting the preliminary detail prediction image to obtain a next-level detail prediction image of the current detail prediction image;
and saving the next-level detail predicted image in the detail predicted image set.
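The loop over steps (5.2.1) to (5.2.3) can be sketched as follows; `predict_next` is a hypothetical stand-in for one full edge-prediction, editing, and local-selection iteration, and the list-based detail predicted image set is illustrative:

```python
def multistage_iterate(max_level_image, predict_next, max_loops=10):
    """Multi-stage iterative detail prediction per steps (5.2.1)-(5.2.3).

    Starts from the maximum-level detail predicted image and repeatedly
    computes the next-level detail predicted image while the current loop
    count stays within the preset loop range. Returns the detail predicted
    image set (a list preserving level order).
    """
    detail_set = [max_level_image]
    loop_count = 0                              # (5.2.1) current loop information
    while loop_count < max_loops:               # (5.2.2) preset loop range check
        current = detail_set[-1]                # current detail predicted image
        detail_set.append(predict_next(current))  # (5.2.3) next-level image
        loop_count += 1
    return detail_set
```

With a toy predictor that halves a value, `multistage_iterate(1024, lambda x: x // 2, max_loops=3)` yields the level sequence `[1024, 512, 256, 128]`.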
106. And generating a result image according to the overall prediction image and the detail prediction image.
Wherein the detail predicted image can serve as detail information to supplement the overall information of the overall predicted image.
Wherein the resulting image can be generated from the overall predicted image and the at least one detail predicted image.
For example, the detail prediction image may be adjusted to an appropriate size, and then the corresponding pixels in the overall prediction image and the detail prediction image may be simply averaged together to generate the resulting image.
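The simple resize-and-average strategy can be sketched as follows, assuming a plain-list image representation, an integer scale factor, and nearest-neighbour resizing (all illustrative choices):

```python
def nearest_upscale(img, factor):
    """Nearest-neighbour upscale of a 2-D list by an integer factor."""
    return [[img[i // factor][j // factor]
             for j in range(len(img[0]) * factor)]
            for i in range(len(img) * factor)]

def merge_overall_and_detail(overall, detail, factor):
    """Resize the detail predicted image to the overall image's size, then
    average corresponding pixels -- the simple strategy described above."""
    up = nearest_upscale(detail, factor)
    return [[(o + d) / 2 for o, d in zip(row_o, row_d)]
            for row_o, row_d in zip(overall, up)]
```

For instance, merging a 2×2 overall image of value 100 with a 1×1 detail image of value 200 (factor 2) gives a 2×2 image of value 150 everywhere.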
As another example, step 106 may be performed using any deep neural network.
In some embodiments, in order to make details of the generated image richer and eliminate image aliasing so that the image is more high definition, step 106 may specifically include the following steps:
generating a preliminary result image according to the overall prediction image and the detail prediction image;
and carrying out resolution reconstruction processing on the preliminary result image to obtain a result image.
The resolution reconstruction processing increases or otherwise modifies the resolution of the image, and any super-resolution (SR) technique may be used, for example, resolution reconstruction using SRCNN (Super-Resolution Convolutional Neural Network), resolution reconstruction using EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution), or the like.
The SRCNN can first magnify the low-resolution image to the target size using bicubic interpolation, then fit a nonlinear mapping through a three-layer convolutional network, and finally output a high-resolution result image.
For example, referring to fig. 1g, the convolution kernels used by the three convolution layers are of sizes 9 × 9, 1 × 1, and 5 × 5, and the numbers of output features of the first two convolution layers are 64 and 32, respectively.
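Under the layer sizes just described, the trainable-parameter count of such a three-layer network can be checked with a little arithmetic; the single-channel (luminance) input and output are an assumption, since the figure is not reproduced here:

```python
def conv_params(in_ch, out_ch, k):
    """Weights plus biases of a k x k convolution layer."""
    return out_ch * in_ch * k * k + out_ch

# SRCNN as described: 9x9 -> 1x1 -> 5x5 kernels, with 64 and 32
# intermediate feature maps, assumed to map 1 channel to 1 channel.
layers = [conv_params(1, 64, 9),   # 64*81 + 64  = 5248
          conv_params(64, 32, 1),  # 32*64 + 32  = 2080
          conv_params(32, 1, 5)]   # 25*32 + 1   = 801
total = sum(layers)                # 8129 trainable parameters
```

The small 1 × 1 middle layer is what keeps the network cheap: it performs the nonlinear mapping between feature spaces without any spatial aggregation.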
In some embodiments, the detail prediction image set can be obtained in step (5.2) of step 105, so step 106 can specifically be to generate a result image from the global prediction image and the plurality of detail prediction images in the detail prediction image set.
In order to provide a score reference for human selection, in some embodiments, after step 106, the following steps may be further included:
and carrying out image scoring processing on the result image to obtain the image score of the result image.
In some embodiments, the result images may also be sorted according to the image scores, and the top N images with the highest scores may be displayed for manual selection, reducing the user's workload.
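The score-based sorting and top-N display can be sketched as follows; the pair-based result representation is illustrative:

```python
def top_n_results(results, n=10):
    """Sort (image, score) pairs by score, descending, and keep the top n
    for display and manual selection, as described above."""
    return sorted(results, key=lambda r: r[1], reverse=True)[:n]
```

For example, `top_n_results([("a", 0.2), ("b", 0.9), ("c", 0.5)], n=2)` keeps the two highest-scoring images, `b` then `c`.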
In some embodiments, the image processing model may be trained according to training images of different styles, times and types, and an appropriate image processing model may be used to perform image processing, so that the generated result image fits the style of the original image to be processed, and the image quality of the generated image is further improved, so after step 101, step a, step B, step C and step D may be further included, as follows:
A. acquiring an image processing model set;
B. carrying out style judgment processing on the image to be processed, and determining the image style of the image to be processed;
C. determining a target image processing model in the image processing model set according to the image style;
D. step 102, step 103, step 104, step 105, and step 106 are performed based on the target image processing model.
In this embodiment, any CNN may be used as an image classification model to perform the style determination process on the image to be processed.
In some embodiments, step a may specifically include the steps of:
acquiring an initial model and a training sample group, wherein the training sample group comprises a plurality of training samples with the same style;
training the initial model by adopting a training sample group to obtain a trained image processing model, wherein the trained image processing model can comprise an edge prediction network and a color filling network;
and marking the image processing model as the style of the training sample group, and placing the image processing model marked with the style in the image processing model set.
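A minimal sketch of the style-tagged model set described in steps A through D, with `train` as a hypothetical stand-in for the actual training procedure and a dictionary as the model set:

```python
def build_model_set(training_groups, train):
    """Train one model per same-style training sample group and store it
    under that group's style label (step A)."""
    return {style: train(samples) for style, samples in training_groups.items()}

def select_target_model(model_set, predicted_style):
    """Look up the target image processing model by the image style
    determined for the image to be processed (steps B and C)."""
    return model_set[predicted_style]
```

The selected model would then drive steps 102 through 106 (step D).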
The edge prediction network can be used for edge prediction processing of any image, and the color filling network can be used for any color filling processing.
In some embodiments, the trained image processing model may also include a scoring network that may be used for image scoring, image ranking, and so forth, for manual selection.
In some embodiments, the trained image processing model may further include a resolution reconstruction network that may be used to increase the resolution of the image, making the resulting image more high definition.
As can be seen from the above, the embodiment of the present invention may acquire an image to be processed, where the image to be processed includes a region to be processed and a reference region; performing edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed; editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image; local selection is carried out on the overall prediction image to obtain a local image of the overall prediction image, wherein the local image comprises a local to-be-processed area and a local reference area; performing image detail prediction on a local to-be-processed area of the local image based on the local reference area to obtain a detail prediction image of the local image; and generating a result image according to the overall prediction image and the detail prediction image.
In the invention, the edge information of the image to be processed can be firstly acquired globally, so that the overall prediction image is predicted according to the edge information globally, then the detail prediction image of the local image in the image to be processed is predicted according to the local detail, and finally, the result image is generated according to the overall prediction image globally and the plurality of local detail prediction images. The invention can process the image from the global scope and then carry out further detail supplement from the local scope, so that the detail predicted image on the local can accurately supplement some detail information which is not predicted in the global predicted image, thereby realizing the detail supplement of the generated result image from the whole to the local, and the invention has better effect for the image with high resolution, a large amount of details and large size.
Therefore, the image quality of the image generated by the image processing is improved.
The method described in the above embodiments is further described in detail below.
The image processing scheme provided by the embodiment of the invention can be applied to various image processing scenes, such as the field of cultural relics and historical sites, the field of cosmic physics, the field of geographical maps, the field of material chemistry and the like, and particularly has a good image processing effect in the field of processing picture data with large scale, high resolution and multiple details.
For example, in this embodiment, the method of the embodiment of the present invention will be described in detail by taking the repair of the historical image of the historical relic as an example.
Because cultural relics and ancient sites are subjected to factors such as moisture, rodent and insect damage, and oxidation, they often carry damage, stains, and flaws of various sizes. When repairing images of cultural relics and ancient sites, a high-definition digital camera is needed to collect large-size, high-resolution images (up to billions of pixels), and the high-definition photos may even be stitched into a complete oversized photo.
At present, apart from manual painting repair, no method can automatically and accurately repair all stains and flaws in large-size, high-resolution cultural-relic pictures while keeping the repaired parts faithful to the original style of the relics and meeting their image restoration requirements.
As shown in fig. 2a, an image processing method can effectively solve the above problem, and the specific flow thereof is as follows:
201. and acquiring an image processing model set and an image to be processed.
Firstly, an image processing model set can be obtained through an image training method, wherein the image processing models in the image processing model set have multiple styles.
For example, training images of various styles, such as the Dunhuang fresco style, the late-period porcelain style, the classical painting-and-calligraphy style, and the like, can be collected and sorted to obtain grouped training images, and an initial model can be obtained.
And training the initial model by adopting a certain style of training image group to obtain an image processing model marked with the style, and storing the image processing model in an image processing model set.
The image processing model can comprise an edge prediction network, a color filling network, a grading network and a resolution reconstruction network.
The image to be processed can be shot with a high-definition camera, stored in a database, and then obtained by communicating with the database over a network.
202. And acquiring a target image processing model corresponding to the image to be processed in the image processing model set, wherein the image processing model comprises an edge prediction network, a color filling network, a grading network and a resolution reconstruction network.
For example, in some embodiments, a preset image classification model may be first used to classify the image style of an image to be processed, so as to determine the style type of the image to be processed; then, the image processing model of the genre is determined as the target image processing model in the set of image processing models.
203. And performing edge information prediction on the image to be processed by adopting an edge prediction network based on the reference area to obtain edge information corresponding to the image to be processed.
For example, referring to fig. 2b, edge information of the reference region in the image to be processed may be detected first, for example, edge information of the reference region is extracted by using Canny operator.
The method for extracting the edge information of the reference region by adopting the Canny operator comprises the following steps:
carrying out gray level conversion on the reference area to obtain a gray level image of the reference area;
adopting a Gaussian filter to carry out smoothing processing on the gray level image to obtain a smooth image corresponding to the gray level image;
calculating gradient values and directions of the smooth images;
carrying out non-maximum suppression on the gradient value to obtain a suppressed image;
and detecting and connecting the edges of the suppressed images by adopting a dual-threshold algorithm to obtain edge information.
The Gaussian filter may include two one-dimensional Gaussian kernels or one two-dimensional Gaussian kernel, where the one-dimensional Gaussian function is defined as follows:
$$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{x^2}{2\sigma^2}\right)$$
where σ is the standard deviation and x is the one-dimensional pixel coordinate.
The two-dimensional gaussian function is defined as follows:
$$G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$
where (x, y) is a pixel in the two-dimensional image.
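The sampled, normalised Gaussian kernels can be sketched as follows; by separability, the two-dimensional kernel is the outer product of two one-dimensional kernels, which is why the filter may use either form:

```python
import math

def gaussian_1d(sigma, radius):
    """Sampled 1-D Gaussian kernel from G(x) above, normalised to sum to 1
    (the 1/(sqrt(2*pi)*sigma) factor cancels in the normalisation)."""
    vals = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def gaussian_2d(sigma, radius):
    """2-D kernel as the outer product of two 1-D kernels (separability)."""
    g = gaussian_1d(sigma, radius)
    return [[gx * gy for gx in g] for gy in g]
```

Smoothing with the 2-D kernel is therefore equivalent to filtering the rows with the 1-D kernel and then the columns, which is cheaper for large radii.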
Wherein the gradient values are defined as follows:
$$G = \sqrt{G_x^2 + G_y^2}$$
where $G_x$ is the gradient in the horizontal direction and $G_y$ is the gradient in the vertical direction.
Wherein the directions are defined as follows:
$$\theta = \operatorname{atan2}(G_y, G_x)$$
The gradients $G_x$ and $G_y$ can be calculated using the Sobel operator, defined as follows:
$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A$$
where A is the reference-area image and $*$ denotes the two-dimensional convolution operation.
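The gradient magnitude and direction at a single interior pixel can be computed directly from the Sobel kernels above; the plain-list grayscale image representation is illustrative:

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, i, j):
    """Gradient magnitude G and direction theta = atan2(Gy, Gx) at the
    interior pixel (i, j) of a 2-D grayscale list, per the definitions above."""
    gx = sum(SOBEL_X[u][v] * img[i - 1 + u][j - 1 + v]
             for u in range(3) for v in range(3))
    gy = sum(SOBEL_Y[u][v] * img[i - 1 + u][j - 1 + v]
             for u in range(3) for v in range(3))
    return math.hypot(gx, gy), math.atan2(gy, gx)
```

On a vertical step edge (dark left, bright right), `sobel_at` reports a purely horizontal gradient, i.e. direction 0.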
204. And editing the to-be-processed area of the to-be-processed image by adopting a color filling network according to the edge information to obtain an overall predicted image of the to-be-processed image.
For example, referring to fig. 2c, a color filling network may be used to edit the to-be-processed region of the to-be-processed image according to the edge information, so as to obtain an overall predicted image of the to-be-processed image.
For a specific image editing process, reference may be made to step 103, which is not described herein again.
205. The method comprises the steps of performing local selection on an overall predicted image to obtain a local image of the overall predicted image, wherein the local image comprises a local to-be-processed area and a local reference area, performing image detail prediction on the local to-be-processed area of the local image based on the local reference area to obtain a detail predicted image of the local image, and finally generating a preliminary result image according to the overall predicted image and the detail predicted image.
Referring to fig. 2d, step 205 may be performed in a multi-stage iterative manner.
For example, as shown in fig. 2d, after the overall prediction image is obtained through steps 201 to 204, the upper-half 1/2 portion can be selected, and a first-level iteration loop over the upper-half 1/2 portion of the overall prediction image and the corresponding 1/2 portion of the to-be-processed image yields the prediction image of that 1/2 portion; then the top-left 1/4 portion of the 1/2 prediction image is selected, and a second-level iteration loop over this portion and the corresponding top-left 1/16 portion of the original to-be-processed image yields the prediction image of the 1/16 portion of the original image, and so on.
In this way, local prediction images of the 1/2 portion, the 1/4 portion, the 1/16 portion, and so on of the image to be processed can be obtained.
Specifically, each iteration cycle is to perform edge information detection on the image to obtain edge information corresponding to the image, and perform color filling processing according to the edge information and the local region to be processed to obtain a predicted image of the image.
For a specific iteration loop step, reference may be made to step 105, which is not described herein again.
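The shrinking coverage of these successive local selections can be sketched as follows; the fixed per-level fraction is an illustration only, since the example above mixes halves and quarters:

```python
def multiscale_fractions(levels, per_level_fraction=0.5):
    """Fraction of the original image covered by the crop at each iteration
    level, assuming each level keeps the same fraction of the previous crop.
    The fraction value is a hypothetical parameter, not a fixed scheme."""
    fracs, f = [], 1.0
    for _ in range(levels):
        f *= per_level_fraction
        fracs.append(f)
    return fracs
```

With a per-level fraction of 1/2, three levels cover 1/2, 1/4, and 1/8 of the original image, so each iteration refines an ever smaller region at effectively higher magnification.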
206. And performing resolution reconstruction processing on the preliminary result image by adopting a resolution reconstruction network to obtain a result image.
For example, referring to fig. 2e, step 206 may be performed using super-resolution technology, such as inputting the preliminary result images (the upper-left and lower-left images of fig. 2e) into the SRCNN to obtain the result images (the upper-right and lower-right images of fig. 2e), so that the generated result images are richer in detail, image jaggies in them are weakened, and the result images are of higher definition.
The specific structure of the SRCNN can refer to FIG. 1 g.
207. And carrying out image grading processing on the result image by adopting a grading network to obtain the image score of the result image.
Specifically, any trained image classification or image recognition network can be used as the scoring network to score the result image; the image score is used to evaluate and screen the quality of the generated result images.
The embodiment of the invention can also sort the result images by score and display the top 10 for convenient observation and selection by the user.
In some embodiments, the scoring network may also be used as a supervisor to train the image processing model.
As can be seen from the above, the embodiment of the present invention can obtain an image processing model set and an image to be processed; acquiring a target image processing model corresponding to an image to be processed from an image processing model set, wherein the image processing model comprises an edge prediction network, a color filling network, a grading network and a resolution reconstruction network; performing edge information prediction on the image to be processed based on the reference area by adopting an edge prediction network to obtain edge information corresponding to the image to be processed; adopting a color filling network to edit the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image; the method comprises the steps of performing local selection on an overall predicted image to obtain a local image of the overall predicted image, wherein the local image comprises a local to-be-processed area and a local reference area, performing image detail prediction on the local to-be-processed area of the local image based on the local reference area to obtain a detail predicted image of the local image, and finally generating a preliminary result image according to the overall predicted image and the detail predicted image; adopting a resolution reconstruction network to carry out resolution reconstruction processing on the preliminary result image to obtain a result image; and carrying out image grading processing on the result image by adopting a grading network to obtain the image score of the result image.
The embodiment of the invention can automatically and efficiently finish the repair work of the historical relic image, and through multi-stage iteration, the embodiment of the invention can effectively restore the stained and missing parts of any shape and size in the historical relic image, especially for the images with high definition, high resolution and large size. Therefore, the image quality of the image generated by the image processing is improved.
In order to better implement the method, an embodiment of the present invention further provides an image processing apparatus, where the image processing apparatus may be specifically integrated in a terminal, and the terminal may be a mobile phone, a tablet computer, an intelligent bluetooth device, a notebook computer, a personal computer, or the like.
For example, in the present embodiment, the method according to the embodiment of the present invention will be described in detail by taking an example in which an image processing apparatus is specifically integrated in a terminal.
For example, as shown in fig. 3, the image processing apparatus may include an acquisition unit 301, an edge unit 302, an editing unit 303, a local unit 304, a detail unit 305, and a result unit 306, as follows:
(I) Acquisition unit 301:
the acquisition unit 301 may be configured to acquire an image to be processed, where the image to be processed includes a region to be processed and a reference region.
In some embodiments, the obtaining unit 301 may further be configured to calibrate a region to be processed in the image to be processed, where the reference region is a region of the image to be processed other than the region to be processed.
(II) Edge unit 302:
the edge unit 302 may be configured to perform edge information prediction on the image to be processed based on the reference region, so as to obtain edge information corresponding to the image to be processed.
In some embodiments, the edge unit 302 may include a detection subunit and a prediction subunit, as follows:
(1) a detection subunit:
the detection subunit may be configured to perform edge detection processing on the reference area to obtain reference area edge information corresponding to the reference area.
(2) A predictor unit:
the prediction subunit may be configured to predict edge information in the to-be-processed region based on the to-be-processed image and the reference region edge information, so as to obtain edge information corresponding to the to-be-processed image.
In some embodiments, the predictor unit may be specifically configured to:
and predicting the edge information in the to-be-processed area by adopting an edge prediction network based on the to-be-processed image and the reference area edge information to obtain the edge information corresponding to the to-be-processed image, wherein the edge prediction network is formed by training edge training samples.
(III) Editing unit 303:
the editing unit 303 may be configured to perform image editing on a to-be-processed area of the to-be-processed image according to the edge information, so as to obtain an overall predicted image of the to-be-processed image.
In some embodiments, the editing unit 303 may be specifically configured to:
and performing color filling on the to-be-processed area by adopting a color filling network according to the color distribution in the reference area and the edge information corresponding to the to-be-processed image to obtain a preliminary overall prediction image of the to-be-processed image, wherein the color filling network is formed by training a color training sample.
(IV) local unit 304:
the local unit 304 may be configured to perform local selection on the global prediction image to obtain a local image of the global prediction image, where the local image includes a local to-be-processed region and a local reference region.
In some embodiments, the local unit 304 may be specifically configured to:
carrying out image amplification on the overall prediction image to obtain an amplified overall prediction image;
local selection is carried out in the amplified overall prediction image to obtain a local image of the overall prediction image;
and calibrating a local to-be-processed region in the local image of the overall prediction image, wherein the local reference region is a region except the local to-be-processed region in the local image.
(V) detail unit 305:
the detail unit 305 may be configured to perform image detail prediction on a local to-be-processed region of the local image based on the local reference region, so as to obtain a detail prediction image of the local image.
In some embodiments, the detail unit 305 may include a maximum detail subunit and a multi-stage iteration subunit, as follows:
(1) Maximum detail subunit:
the method can be used for performing image detail prediction on a local to-be-processed region of a local image based on a local reference region, obtaining a maximum-level detail prediction image of the local image, and storing the maximum-level detail prediction image in a detail prediction image set.
In some embodiments, the maximum detail subunit may include a local edges submodule, an editing submodule, and a save submodule, as follows:
a. local edge sub-module:
the local edge sub-module may be configured to perform edge information prediction on the local image based on the local reference region, so as to obtain local edge information corresponding to the local image.
b. An editing submodule:
the editing submodule can be used for carrying out image editing on the local area to be processed according to the local edge information to obtain a maximum-level detail prediction image of the local image.
c. Saving the submodule:
the saving sub-module may be configured to save the maximum level detail prediction image in the detail prediction image set.
In some embodiments, the editing sub-module may be specifically configured to:
performing information fusion processing on the local edge information and the edge information corresponding to the image to be processed to obtain edge fusion information;
and editing the local region to be processed according to the edge fusion information to obtain a maximum-level detail prediction image of the local image.
(2) A multi-stage iteration subunit:
the multi-stage iteration sub-unit can be used for performing multi-stage iteration generation processing on the image details of the maximum-stage detail predicted image based on the detail predicted image set to obtain a plurality of detail predicted images of different levels, and storing the plurality of detail predicted images of different levels in the detail predicted image set.
In some embodiments, the multi-stage iteration sub-unit may include an acquisition sub-module, a determination sub-module, and a next-level sub-module, as follows:
a. obtaining a submodule:
the acquisition submodule may be configured to acquire current loop information.
b. Determining a submodule:
the determining sub-module may be configured to determine the current detail predicted image in the detail predicted image set when the current loop information belongs to the preset loop range.
c. Next-level sub-module:
The next-level sub-module can be used for performing image detail prediction on the current detail predicted image to obtain a next-level detail predicted image of the current detail predicted image, and for storing the next-level detail predicted image in the detail predicted image set.
In some embodiments, the current detail prediction image may include a current local to-be-processed region and a current local reference region, and the next-level sub-module may specifically be configured to:
performing edge information prediction on the current detail prediction image based on the current local reference area to obtain current local edge information corresponding to the current detail prediction image;
performing image editing on the current local to-be-processed area according to the current local edge information to obtain a preliminary detail prediction image of the current detail prediction image;
locally selecting the preliminary detail prediction image to obtain a next-level detail prediction image of the current detail prediction image;
and saving the next-level detail predicted image in the detail predicted image set.
(Sixth) Result unit 306:
the result unit 306 can be used to generate a result image from the overall predicted image and the detail predicted image.
In some embodiments, the result unit 306 may be specifically configured to:
generating a preliminary result image according to the overall prediction image and the detail prediction image;
and carrying out resolution reconstruction processing on the preliminary result image to obtain a result image.
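A minimal sketch of these two steps, assuming the detail patch is composited back at a known location and using nearest-neighbour upsampling as a stand-in for the resolution-reconstruction step, which the embodiments do not pin to a specific network:

```python
import numpy as np

def generate_result_image(overall, detail, top, left, scale=2):
    # Preliminary result image: paste the detail-prediction patch back
    # into the overall predicted image.
    preliminary = overall.copy()
    h, w = detail.shape
    preliminary[top:top + h, left:left + w] = detail
    # Stand-in resolution reconstruction: nearest-neighbour upsampling.
    return np.repeat(np.repeat(preliminary, scale, axis=0), scale, axis=1)

overall = np.zeros((16, 16))
detail = np.ones((4, 4))
result = generate_result_image(overall, detail, top=6, left=6)
print(result.shape)  # (32, 32)
```

In a real system the upsampling would be replaced by a learned super-resolution model; the compositing step is the part the embodiments actually describe.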
In some embodiments, the multi-level iteration sub-unit of the detail unit 305 may produce a detail prediction image set, so the result unit 306 may be specifically configured to:
and generating a result image according to the whole prediction image and the detail prediction image set.
In some embodiments, the result unit 306 may be further configured to:
and carrying out image scoring processing on the result image to obtain the image score of the result image.
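The embodiments do not fix a particular scoring method. As one hypothetical example, a no-reference score based on the variance of a discrete Laplacian response, a common proxy for sharpness and detail richness, could serve as the image score:

```python
import numpy as np

def image_score(image):
    # Hypothetical no-reference score: variance of a wrap-around discrete
    # Laplacian response; higher values indicate more local detail.
    lap = (-4.0 * image
           + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
           + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
    return float(lap.var())

flat = np.full((16, 16), 0.5)                                    # featureless image
textured = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)  # checkerboard
print(image_score(flat), image_score(textured))  # textured scores higher
```

Any other no-reference quality metric could be substituted; the point is only that the result unit maps the result image to a scalar score.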
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the image processing apparatus of this embodiment, the acquisition unit may acquire an image to be processed, where the image to be processed includes a region to be processed and a reference region; the edge unit may perform edge information prediction on the image to be processed based on the reference region to obtain edge information corresponding to the image to be processed; the editing unit may perform image editing on the region to be processed of the image to be processed according to the edge information to obtain an overall predicted image of the image to be processed; the local unit may perform local selection on the overall predicted image to obtain a local image of the overall predicted image, where the local image includes a local to-be-processed region and a local reference region; the detail unit may perform image detail prediction on the local to-be-processed region of the local image based on the local reference region to obtain a detail prediction image of the local image; and the result unit may generate a result image from the overall predicted image and the detail prediction image. Therefore, the quality of images generated by the image processing can be improved.
An embodiment of the present invention further provides an electronic device, which may be a terminal, a server, or the like.
The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and the like; the server may be a single server, a server cluster composed of a plurality of servers, or the like.
In some embodiments, the image processing apparatus may be integrated into a plurality of electronic devices, for example, the image processing apparatus may be integrated into a plurality of servers, and the image processing method of the present invention is implemented by the plurality of servers.
In some embodiments, the image processing apparatus may also be implemented by a single terminal or jointly by multiple terminals.
In this embodiment, a detailed description is given by taking a terminal as an example of the electronic device. For example, fig. 4 shows a schematic structural diagram of a terminal according to an embodiment of the present invention. Specifically:
the terminal may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, an input module 404, and a communication module 405. Those skilled in the art will appreciate that the terminal configuration shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the terminal. In some embodiments, processor 401 may include one or more processing cores; in some embodiments, processor 401 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The terminal also includes a power supply 403 for supplying power to the various components. In some embodiments, the power supply 403 may be logically coupled to the processor 401 via a power management system, so that the power management system can manage charging, discharging, and power consumption. The power supply 403 may also include one or more DC or AC power sources, recharging systems, power-failure detection circuitry, power converters or inverters, power status indicators, and other such components.
The terminal may also include an input module 404, the input module 404 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The terminal may also include a communication module 405, and in some embodiments the communication module 405 may include a wireless module, through which the terminal may wirelessly transmit over short distances, thereby providing wireless broadband internet access to the user. For example, the communication module 405 may be used to assist a user in sending and receiving e-mails, browsing web pages, accessing streaming media, and the like.
Although not shown, the terminal may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 401 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
acquiring an image to be processed, wherein the image to be processed comprises a region to be processed and a reference region;
performing edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed;
editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image;
performing local selection on the overall predicted image to obtain a local image of the overall predicted image, wherein the local image comprises a local to-be-processed region and a local reference region;
performing image detail prediction on a local to-be-processed area of the local image based on the local reference area to obtain a detail prediction image of the local image;
and generating a result image according to the overall prediction image and the detail prediction image.
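For illustration only, the six steps above can be sketched end to end as follows. Each stage is a placeholder standing in for the corresponding network described in the embodiments (edge prediction, colour filling, detail prediction), and the mask layout, mean-fill, and crop choices are assumptions made for the sketch:

```python
import numpy as np

def process_image(image, mask):
    # 1. Image to be processed: `mask` marks the to-be-processed region;
    #    the remaining pixels form the reference region.
    reference = np.where(mask, 0.0, image)

    # 2. Stand-in edge information prediction: horizontal gradient of the
    #    reference region (a real system would use an edge-prediction network).
    edges = np.abs(np.diff(reference, axis=1, append=reference[:, -1:]))

    # 3. Stand-in image editing: fill the to-be-processed region with the
    #    mean of the reference region to get the overall predicted image.
    overall = np.where(mask, image[~mask].mean(), image)

    # 4. Local selection: crop a patch around the to-be-processed region.
    ys, xs = np.nonzero(mask)
    top, left, bottom, right = ys.min(), xs.min(), ys.max() + 1, xs.max() + 1
    local = overall[top:bottom, left:right]

    # 5. Stand-in image detail prediction on the local patch.
    detail = (local + np.roll(local, 1, axis=0)) / 2.0

    # 6. Result image: composite the detail prediction back into the
    #    overall predicted image.
    result = overall.copy()
    result[top:bottom, left:right] = detail
    return result, edges

image = np.random.rand(16, 16)
mask = np.zeros((16, 16), dtype=bool)
mask[4:8, 4:8] = True  # to-be-processed region
result, edges = process_image(image, mask)
print(result.shape)
```

Pixels outside the cropped patch pass through unchanged, which mirrors the design intent: global structure comes from the overall prediction, and only the local region is refined by the detail stage.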
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, this embodiment may acquire an image to be processed, where the image to be processed includes a region to be processed and a reference region; perform edge information prediction on the image to be processed based on the reference region to obtain edge information corresponding to the image to be processed; perform image editing on the to-be-processed region of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image; perform local selection on the overall predicted image to obtain a local image of the overall predicted image, where the local image includes a local to-be-processed region and a local reference region; perform image detail prediction on the local to-be-processed region of the local image based on the local reference region to obtain a detail prediction image of the local image; and generate a result image from the overall predicted image and the detail prediction image. Therefore, the quality of images generated by the image processing can be improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the embodiment of the present invention provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any image processing method provided by the embodiment of the present invention. For example, the instructions may perform the steps of:
acquiring an image to be processed, wherein the image to be processed comprises a region to be processed and a reference region; performing edge information prediction on the image to be processed based on the reference region to obtain edge information corresponding to the image to be processed; performing image editing on the to-be-processed region of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image; performing local selection on the overall predicted image to obtain a local image of the overall predicted image, wherein the local image comprises a local to-be-processed region and a local reference region; performing image detail prediction on the local to-be-processed region of the local image based on the local reference region to obtain a detail prediction image of the local image; and generating a result image according to the overall predicted image and the detail prediction image.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image processing method provided in the embodiment of the present invention, the beneficial effects that can be achieved by any image processing method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image processing method, the image processing apparatus, the image processing terminal and the computer-readable storage medium according to the embodiments of the present invention are described in detail, and a specific example is applied to illustrate the principles and embodiments of the present invention, and the description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (15)

1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed comprises a region to be processed and a reference region;
performing edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed;
editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image;
performing local selection on the overall predicted image to obtain a local image of the overall predicted image, wherein the local image comprises a local to-be-processed region and a local reference region;
performing image detail prediction on a local to-be-processed region of the local image based on the local reference region to obtain a detail prediction image of the local image;
and generating a result image according to the overall prediction image and the detail prediction image.
2. The image processing method of claim 1, wherein performing edge information prediction on the image to be processed based on the reference region to obtain edge information corresponding to the image to be processed comprises:
performing edge detection processing on the reference area to obtain reference area edge information corresponding to the reference area;
and predicting the edge information in the to-be-processed area based on the to-be-processed image and the reference area edge information to obtain the edge information corresponding to the to-be-processed image.
3. The image processing method of claim 2, wherein predicting edge information in a region to be processed based on the image to be processed and edge information of a reference region to obtain the edge information corresponding to the image to be processed comprises:
and predicting edge information in the to-be-processed area based on the to-be-processed image and the reference area edge information by adopting an edge prediction network to obtain the edge information corresponding to the to-be-processed image, wherein the edge prediction network is formed by training edge training samples.
4. The image processing method according to claim 1, wherein performing image detail prediction on a local to-be-processed region of the local image based on the local reference region to obtain a detail prediction image of the local image, comprises:
performing image detail prediction on a local to-be-processed region of the local image based on the local reference region to obtain a maximum-level detail prediction image of the local image, and storing the maximum-level detail prediction image in a detail prediction image set;
performing multi-level iteration generation processing on the image details based on the detail predicted image set to obtain a plurality of detail predicted images of different levels, and storing the detail predicted images of different levels in the detail predicted image set;
the generating of the resulting image from the overall predicted image and the detail predicted image includes:
and generating a result image according to the whole prediction image and the detail prediction image set.
5. The image processing method according to claim 4, wherein the performing image detail prediction on the local to-be-processed region of the local image based on the local reference region to obtain a maximum level detail prediction image of the local image, and storing the maximum level detail prediction image in a detail prediction image set comprises:
performing edge information prediction on a local image based on the local reference region to obtain local edge information corresponding to the local image;
performing image editing on the local area to be processed according to the local edge information to obtain a maximum-level detail prediction image of the local image;
and saving the maximum level detail predicted image in a detail predicted image set.
6. The image processing method according to claim 5, wherein performing image editing on the local region to be processed according to the local edge information to obtain a predicted image of the maximum level of detail of the local image, comprises:
performing information fusion processing on the local edge information and the edge information corresponding to the image to be processed to obtain edge fusion information;
editing the local region to be processed according to the edge fusion information to obtain a maximum-level detail predicted image of the local image;
After generating the result image according to the overall prediction image and the detail prediction image, the method further comprises the following steps:
and carrying out image scoring processing on the result image to obtain the image score of the result image.
7. The image processing method according to claim 4, wherein performing the multi-level iterative generation processing on the image details based on the detail predicted image set to obtain a plurality of detail predicted images of different levels, and storing the plurality of detail predicted images of different levels in the detail predicted image set, comprises:
acquiring current cycle information;
when the current cycle information belongs to a preset cycle range, determining a current detail predicted image in the detail predicted image set;
and performing image detail prediction on the current detail prediction image to obtain a next-level detail prediction image of the current detail prediction image, and storing the next-level detail prediction image in a detail prediction image set.
8. The image processing method according to claim 7, wherein the current detail predicted image includes a current local to-be-processed region and a current local reference region, the image detail prediction is performed on the current detail predicted image to obtain a next-level detail predicted image of the current detail predicted image, and the next-level detail predicted image is saved in a detail predicted image set, comprising:
performing edge information prediction on the current detail prediction image based on the current local reference area to obtain current local edge information corresponding to the current detail prediction image;
performing image editing on the current local to-be-processed area according to the current local edge information to obtain a preliminary detail prediction image of the current detail prediction image;
locally selecting the preliminary detail prediction image to obtain a next-level detail prediction image of the current detail prediction image;
and saving the next-level detail predicted image in a detail predicted image set.
9. The image processing method according to claim 1, wherein generating a result image from the overall predicted image and the detail predicted image comprises:
generating a preliminary result image according to the overall prediction image and the detail prediction image;
and carrying out resolution reconstruction processing on the preliminary result image to obtain a result image.
10. The image processing method according to claim 1, wherein performing local selection on the overall predicted image to obtain a local image of the overall predicted image, the local image comprising a local to-be-processed region and a local reference region, comprises:
performing image amplification on the overall predicted image to obtain an amplified overall predicted image;
performing local selection in the amplified overall predicted image to obtain a local image of the overall predicted image;
and calibrating a local region to be processed in the local image of the overall prediction image, wherein the local reference region is a region except the local region to be processed in the local image.
11. The image processing method according to claim 1, wherein performing image editing on the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image comprises:
and carrying out color filling on the region to be processed by adopting a color filling network according to the color distribution in the reference region and the edge information corresponding to the image to be processed to obtain a preliminary overall predicted image of the image to be processed, wherein the color filling network is formed by training a color training sample.
12. The image processing method of claim 1, after acquiring the image to be processed, further comprising:
and calibrating a region to be processed in the image to be processed, wherein the reference region is a region except the region to be processed in the image to be processed.
13. An image processing apparatus characterized by comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be processed, and the image to be processed comprises an area to be processed and a reference area;
the edge unit is used for carrying out edge information prediction on the image to be processed based on the reference area to obtain edge information corresponding to the image to be processed;
the editing unit is used for editing the image of the to-be-processed area of the to-be-processed image according to the edge information to obtain an overall predicted image of the to-be-processed image;
the local unit is used for locally selecting the overall prediction image to obtain a local image of the overall prediction image, wherein the local image comprises a local to-be-processed area and a local reference area;
the detail unit is used for carrying out image detail prediction on a local to-be-processed area of the local image based on the local reference area to obtain a detail prediction image of the local image;
a result unit for generating a result image from the overall prediction image and the detail prediction image.
14. A terminal comprising a processor and a memory, said memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps of the image processing method according to any one of claims 1 to 12.
15. A computer readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the image processing method according to any one of claims 1 to 12.
CN201911302146.0A 2019-12-17 2019-12-17 Image processing method, device, terminal and storage medium Active CN111062924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911302146.0A CN111062924B (en) 2019-12-17 2019-12-17 Image processing method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN111062924A true CN111062924A (en) 2020-04-24
CN111062924B CN111062924B (en) 2024-07-05

Family

ID=70302001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911302146.0A Active CN111062924B (en) 2019-12-17 2019-12-17 Image processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111062924B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595024A (en) * 2011-12-16 2012-07-18 飞狐信息技术(天津)有限公司 Method and device for restoring digital video images
JP2016167253A (en) * 2015-03-03 2016-09-15 キヤノン株式会社 Image processing apparatus, image processing method and program
CN108765295A (en) * 2018-06-12 2018-11-06 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus and storage medium
CN109544465A (en) * 2018-10-23 2019-03-29 天津大学 Image damage block restorative procedure based on change of scale
CN110210514A (en) * 2019-04-24 2019-09-06 北京林业大学 Production fights network training method, image completion method, equipment and storage medium


Non-Patent Citations (2)

Title
L. Yuan, C. Ruan, H. Hu: "Image Inpainting Based on Patch-GANs", IEEE Access, vol. 7, 9 April 2019 *
Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa: "Globally and Locally Consistent Image Completion", ACM Transactions on Graphics, vol. 36, no. 4, 20 July 2017, XP058372881, DOI: 10.1145/3072959.3073659 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN112184585A (en) * 2020-09-29 2021-01-05 中科方寸知微(南京)科技有限公司 Image completion method and system based on semantic edge fusion
CN112184585B (en) * 2020-09-29 2024-03-29 中科方寸知微(南京)科技有限公司 Image completion method and system based on semantic edge fusion
CN112669338A (en) * 2021-01-08 2021-04-16 北京市商汤科技开发有限公司 Image segmentation method and device, electronic equipment and storage medium
CN112669338B (en) * 2021-01-08 2023-04-07 北京市商汤科技开发有限公司 Image segmentation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111062924B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN110163198B (en) Table identification reconstruction method and device and storage medium
US20210183022A1 (en) Image inpainting method and apparatus, computer device, and storage medium
US10803554B2 (en) Image processing method and device
US11244432B2 (en) Image filtering based on image gradients
Zhu et al. A benchmark for edge-preserving image smoothing
CN112308095A (en) Picture preprocessing and model training method and device, server and storage medium
US9342870B2 (en) Tree-based linear regression for denoising
CN111126140A (en) Text recognition method and device, electronic equipment and storage medium
CN107506792B (en) Semi-supervised salient object detection method
CN109712082B (en) Method and device for collaboratively repairing picture
CN109951635A (en) It takes pictures processing method, device, mobile terminal and storage medium
CN112101386B (en) Text detection method, device, computer equipment and storage medium
CN112419202B (en) Automatic wild animal image recognition system based on big data and deep learning
EP4404148A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN102509304A (en) Intelligent optimization-based camera calibration method
Cai et al. TDPN: Texture and detail-preserving network for single image super-resolution
CN111126389A (en) Text detection method and device, electronic equipment and storage medium
CN111062924B (en) Image processing method, device, terminal and storage medium
US20180032806A1 (en) Producing a flowchart object from an image
CN101718674B (en) Method for measuring shape parameters of particles of particulate materials
CN109658453A (en) The center of circle determines method, apparatus, equipment and storage medium
CN117670860A (en) Photovoltaic glass defect detection method and device
Xiong et al. Single image super-resolution via image quality assessment-guided deep learning network
Qu et al. An algorithm of image mosaic based on binary tree and eliminating distortion error
CN113449559B (en) Table identification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022181

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant