CN112088393B - Image processing method, device and equipment - Google Patents


Info

Publication number: CN112088393B
Authority: CN (China)
Prior art keywords: image, resolution, frame, sample, network model
Legal status: Active (assumed; Google has not performed a legal analysis)
Application number: CN201880093293.9A
Other languages: Chinese (zh)
Other versions: CN112088393A (en)
Inventor: 谭文伟
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN112088393A (application publication)
Application granted
Publication of CN112088393B (grant publication)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof

Abstract

An embodiment of the invention discloses an image processing method, apparatus and device. The method includes: acquiring a target image that requires super-resolution processing; and inputting the target image into a super-resolution network model for processing to obtain a high-resolution image. The network parameters of the super-resolution network model are adjusted according to multiple frames of sample images and a semantic feature map corresponding to each frame of sample image, where the semantic feature maps are obtained through semantic recognition by an image semantic network model. The quality of the resulting high-resolution image is thereby improved.

Description

Image processing method, device and equipment
Technical Field
The present invention relates to image processing technologies, and in particular, to an image processing method, an image processing apparatus, and an image processing device.
Background
With the development of multimedia technology, user demands on multimedia information keep rising; for example, high-resolution multimedia information (pictures, videos, and the like) has become the mainstream form of multimedia file.
When terminals need to exchange high-resolution multimedia information, it must be transmitted at high speed over a wide bandwidth, which greatly increases the interaction cost for both terminals. A user therefore usually converts the high-resolution multimedia information into low-resolution multimedia information and sends the low-resolution version to other terminals, reducing the interaction cost. After receiving it, the receiving terminal needs to restore the low-resolution multimedia information to high resolution in order to recover the detail.
Disclosure of Invention
The invention provides an image processing method, apparatus and device, which improve the accuracy of converting a low-resolution image into a high-resolution image and thereby improve the quality of the high-resolution image.
In a first aspect, an embodiment of the present invention provides an image processing method. The method includes: acquiring a target image that requires super-resolution processing; and inputting the target image into a super-resolution network model for processing to obtain a high-resolution image. The network parameters of the super-resolution network model are adjusted according to multiple frames of sample images and the semantic feature map corresponding to each frame of sample image, where the semantic feature maps are obtained through semantic recognition by an image semantic network model.
In this technical scheme, the network parameters of the super-resolution network model are adjusted according to a large number of sample images and the semantic feature image of each frame of sample image, and the semantic feature images contain the detail feature information and edge structure information of the sample images. The super-resolution network model is therefore a semantically enhanced network model: a low-resolution image can be converted through it into a semantically enhanced high-resolution image that provides more detail feature information and sharper edge structure information, improving the quality of the high-resolution image.
Optionally, an error of the super-resolution network model is determined according to the multiple frames of sample images and the semantic feature map corresponding to each frame of sample image; when the error is greater than a preset error value, the network parameters of the super-resolution network model are adjusted.
In the embodiment of the invention, to improve the accuracy with which the super-resolution network processes images, the network parameters of the super-resolution network model can be adjusted according to the multiple frames of sample images and their corresponding semantic feature maps.
Optionally, a high-resolution sub-image and a low-resolution sub-image corresponding to each frame of sample image in the multiple frames of sample images are obtained. Each frame of target sub-image is input into the image semantic network model for semantic recognition to obtain the semantic feature image corresponding to each frame of sample image, where a target sub-image is the high-resolution sub-image or the low-resolution sub-image corresponding to any one of the sample images. Each frame of low-resolution sub-image is input into the super-resolution network model for processing to obtain a high-resolution feature image of each frame of sample image. The high-resolution sub-image of each frame of sample image is superimposed with the semantic feature image of the corresponding sample image to obtain a superimposed image. The degree of difference between the high-resolution feature image of each frame of sample image and the superimposed image of the corresponding sample image is determined; the sum of the degrees of difference is calculated and taken as the error of the super-resolution network model.
In this technical scheme, the superimposed image of the high-resolution sub-image of a sample image and the semantic feature image of the sample image serves as the reference image, the low-resolution sub-image of the sample image serves as the training sample, and the error of the super-resolution network model is calculated from the reference images and the training samples, making it convenient to obtain a super-resolution network model with a low error.
Optionally, a weight is set for the image output by the image semantic network model; the semantic feature image of each frame of sample image is processed according to the weight to obtain a processed semantic feature image; and the high-resolution sub-image of each frame of sample image is superimposed with the processed semantic feature image of the corresponding sample image to obtain a superimposed image.
In this technical scheme, the image processing apparatus can set a weight for the image output by the image semantic network model so as to obtain super-resolution network models of different performance and thereby meet the differing image requirements of users. The larger the weight value, the more information the semantic feature image contributes to the superimposed image and the sharper the superimposed image, so the high-resolution image output by the super-resolution network model is closer to the semantic feature image. Conversely, the smaller the weight value, the less information the semantic feature image contributes, the lower the sharpness of the superimposed image, and the closer the output high-resolution image is to the target sub-image.
Optionally, the image semantic network model includes a multilayer neural network. The target sub-images are input into the image semantic network model, and multiple candidate feature images are output through the semantic recognition performed by the multilayer neural network, with one candidate feature image output by each layer of the network; grey-scale processing is performed on each frame of candidate feature image to obtain a grey-scale image; and a parameter value is determined for each frame of grey-scale image, the grey-scale image with the largest parameter value being taken as the semantic feature image of the sample image corresponding to the target sub-image, where the parameter value is determined according to the sharpness of the grey-scale image and/or the amount of information it provides.
In this technical scheme, the candidate image with higher sharpness and/or a larger amount of information is selected from the multiple candidate feature images as the semantic feature image, improving the quality of the semantic feature image and, in turn, the performance of the super-resolution network model in producing high-resolution images.
Optionally, the type of the target image is obtained; a super-resolution network model matching the type of the target image is determined; and the target image is input into the matching super-resolution network model for processing to obtain a high-resolution image.
In this technical scheme, to improve both the efficiency and the accuracy of image processing, a super-resolution network model matching the type of the target image can be selected to process the target image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus having the functions of implementing the behaviours in the implementation of the first aspect. The functions may be realized by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions above. Based on the same inventive concept, the principle and the beneficial effects of the image processing apparatus in solving the problem can be found in the method implementation of the first aspect; repeated details are not described again.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory for storing one or more programs; and a processor configured to call a program stored in the memory to implement the scheme in the method design of the first aspect. For the implementation and the beneficial effects of the electronic device in solving the problem, reference may be made to the method of the first aspect; repeated details are not described again.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings used in the embodiments are described below.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a super-resolution network model and an image semantic network model according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to the drawings.
The image processing apparatus of the embodiments of the invention may be arranged in any electronic device and is used to perform high-resolution conversion operations on images. Such electronic devices include, but are not limited to, smart mobile devices (mobile phones, palmtop computers, media players, and the like), wearable devices, head-mounted devices, personal computers, server computers, and hand-held or laptop devices.
The following further describes the image processing method and related apparatus provided in the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention, where the method may be executed by an image processing apparatus, and a detailed explanation of the image processing apparatus is as described above. As shown in fig. 1, the image processing method may include the following steps.
S101, acquiring a target image needing super-resolution processing.
In the embodiment of the present invention, the image processing apparatus may acquire the target image that requires super-resolution processing from a local database, or download it from a network. The target image is an image whose resolution is lower than a preset resolution value; it may be a captured image or any frame of a captured video.
S102, input the target image into a super-resolution network model for processing to obtain a high-resolution image.
The network parameters of the super-resolution network model are adjusted according to multiple frames of sample images and the semantic feature image corresponding to each frame of sample image, and the semantic feature images are obtained through semantic recognition by an image semantic network model.
In the embodiment of the invention, because the network parameters of the super-resolution network model are adjusted according to the multiple frames of sample images and the semantic feature image of each frame of sample image, and the semantic feature images contain the detail feature information and edge structure information of the sample images, the image processing apparatus can input the target image into the super-resolution network model for processing to obtain a high-resolution image of improved quality. A high-resolution image is an image whose resolution is greater than the preset resolution value; it can provide the user with more detail feature information and edge structure information.
Both the super-resolution network model and the image semantic network model may be built from convolutional neural networks. A convolutional neural network usually has multiple convolution layers, and each convolution layer includes multiple convolution kernels. A convolution kernel is three-dimensional, holding data in the three dimensions C, H and W, which represent the depth, height and width of the data respectively; the kernel is essentially a combination of a series of weights. The image conversion error of the super-resolution network model can be reduced by adjusting the weights of its convolution kernels, and the semantic recognition error of the image semantic network model can likewise be reduced by adjusting the weights of its convolution kernels.
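The C × H × W structure of a convolution kernel described above can be illustrated with a minimal numpy sketch; the layer sizes and random data here are illustrative only, not taken from the patent.

```python
import numpy as np

def conv2d_single(image, kernel):
    """Valid cross-correlation of a (C, H, W) image with one (C, kh, kw) kernel.

    Illustrates that one convolution kernel is a 3-D block of weights with
    depth C, height kh and width kw; real models use an optimized framework.
    """
    C, H, W = image.shape
    _, kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output value is a weighted sum over a C x kh x kw window
            out[i, j] = np.sum(image[:, i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(3, 8, 8)    # C=3, H=8, W=8
kernel = np.random.rand(3, 3, 3)   # one 3x3 kernel spanning all 3 channels
feat = conv2d_single(image, kernel)
print(feat.shape)  # (6, 6)
```

Adjusting the network parameters during training means changing the values inside `kernel`; the shapes stay fixed.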
The network parameters of the super-resolution network model are the weights of the convolution kernels in the super-resolution network model.
In one embodiment, to improve the efficiency of acquiring the high-resolution image, the image processing apparatus may preprocess the target image and input the preprocessed target image into the super-resolution network model for processing to obtain the high-resolution image. For example, the preprocessing may crop the target image to extract a region of interest, such as a face region; or it may scale the target image to a size suitable for processing by the super-resolution network model.
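The preprocessing step above — cropping a region of interest and rescaling it to a size the model accepts — can be sketched in numpy. Nearest-neighbour resizing is used here as an assumption; the patent does not specify the interpolation method.

```python
import numpy as np

def crop(image, top, left, height, width):
    """Crop a region of interest (e.g. a face region) from an (H, W, C) image."""
    return image[top:top + height, left:left + width]

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour rescale to a size the network accepts (illustrative)."""
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return image[rows][:, cols]

img = np.arange(100 * 100 * 3).reshape(100, 100, 3)
roi = crop(img, 10, 20, 48, 48)            # hypothetical face region
net_input = resize_nearest(roi, 32, 32)    # hypothetical model input size
print(roi.shape, net_input.shape)  # (48, 48, 3) (32, 32, 3)
```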
In one embodiment, the image processing apparatus may obtain the type of the target image, determine a super-resolution network model matching that type, and input the target image into the matching super-resolution network model for processing to obtain a high-resolution image.
To improve both the efficiency and the accuracy of acquiring the high-resolution image, the image processing apparatus may obtain the type of the target image. Target images may be classified by content, the types then including a person image type, a scene image type or an animal image type, or by state, the types then including a still image type or a moving image type. The apparatus determines the super-resolution network model matching the type of the target image according to the relationship between image types and super-resolution network models, and inputs the target image into that model for processing to obtain a high-resolution image. For example, if the target image is of the person image type, the super-resolution network model matching the person image type is obtained and the target image is input into it for processing. The network parameters of that matching model were adjusted using multiple frames of person sample images and the semantic feature image corresponding to each frame.
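The relationship between image types and models described above can be sketched as a simple lookup table. The type names and stand-in model functions below are hypothetical illustrations, not part of the patent.

```python
def upscale_portrait(image):
    """Stand-in for a super-resolution model trained on person images."""
    return ("portrait-model", image)

def upscale_scenery(image):
    """Stand-in for a super-resolution model trained on scene images."""
    return ("scenery-model", image)

# mapping from image type to the matching (hypothetical) model
MODEL_BY_TYPE = {
    "portrait": upscale_portrait,
    "scenery": upscale_scenery,
}

def super_resolve(image, image_type):
    """Dispatch the target image to the model trained for its type."""
    model = MODEL_BY_TYPE.get(image_type)
    if model is None:
        raise ValueError(f"no model trained for type {image_type!r}")
    return model(image)

tag, _ = super_resolve([[0]], "portrait")
print(tag)  # portrait-model
```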
In one embodiment, the image processing apparatus may train different types of super-resolution network models with different types of sample images and their corresponding semantic feature images; for example, a super-resolution network model suited to images containing animals is trained with multiple frames of animal sample images and the semantic feature image corresponding to each frame.
It can be seen that, by implementing the method described in fig. 1, because the network parameters of the super-resolution network model are adjusted according to a large number of sample images and the semantic feature image of each frame of sample image, and the semantic feature images contain the detail feature information and edge structure information of the sample images, the high-resolution image produced by the super-resolution network model can provide more detail feature information and sharper edge structure information, improving the quality of the high-resolution image.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention; the method may be executed by an image processing apparatus, which is explained above. This embodiment differs from the embodiment of fig. 1 in that the error of the super-resolution network model is calculated from the multiple frames of sample images and the semantic feature image of each frame of sample image, and the network parameters of the super-resolution network model are adjusted when the error is greater than a preset error value, until a super-resolution network model with an error less than or equal to the preset error value is obtained. As shown in fig. 2, the image processing method may include the following steps.
S201, determine the error of the super-resolution network model according to the multiple frames of sample images and the semantic feature map corresponding to each frame of sample image.
In the embodiment of the present invention, the image processing apparatus may determine the error of the super-resolution network model according to the multiple frames of sample images and the semantic feature map corresponding to each frame of sample image. In one embodiment, step S201 includes steps S11 to S15.
S11, obtain a high-resolution sub-image and a low-resolution sub-image corresponding to each frame of sample image in the multiple frames of sample images.
S12, input each frame of target sub-image into the image semantic network model for semantic recognition to obtain the semantic feature image corresponding to each frame of sample image, where a target sub-image is the high-resolution sub-image or the low-resolution sub-image corresponding to any sample image in the multiple frames of sample images.
S13, input each frame of low-resolution sub-image into the super-resolution network model for processing to obtain a high-resolution feature image of each frame of sample image.
S14, superimpose the high-resolution sub-image of each frame of sample image with the semantic feature image of the corresponding sample image to obtain a superimposed image.
S15, determine the degree of difference between the high-resolution feature image of each frame of sample image and the superimposed image of the corresponding sample image; calculate the sum of the degrees of difference and take it as the error of the super-resolution network model.
In steps S11 to S15, the image processing apparatus may downsample each frame of sample image in the multiple frames of sample images to obtain the low-resolution sub-image corresponding to each frame, and enhance each frame of sample image to obtain the corresponding high-resolution sub-image. Each frame of low-resolution sub-image is input into the super-resolution network model for processing to obtain the high-resolution feature image of each frame of sample image, and each frame of target sub-image is input into the image semantic network model for semantic recognition to obtain the semantic feature image corresponding to each frame of sample image, where the semantic feature image contains the detail feature information and edge structure information of the sample image.
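One plausible way to derive the training pairs described above — a low-resolution sub-image obtained by sampling the frame, with a full-resolution version kept as the high-resolution sub-image — is block averaging. The patent does not fix the sampling method, so this sketch is an assumption.

```python
import numpy as np

def downsample(image, factor):
    """Block-average downsampling of an (H, W) image by an integer factor.

    One possible sampling step for producing the low-resolution sub-image;
    the patent leaves the exact method open.
    """
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor          # drop ragged edges
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))                   # average each block

sample = np.random.rand(64, 64)     # one frame of sample image
low_res = downsample(sample, 4)     # training input for the network
high_res = sample                   # reference high-resolution sub-image
print(low_res.shape, high_res.shape)  # (16, 16) (64, 64)
```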
Further, the high-resolution sub-image of each frame of sample image is superimposed with the semantic feature image of the corresponding sample image to obtain a superimposed image, which is a semantically enhanced high-resolution image. Comparing the superimposed image with the high-resolution feature image of each frame of sample image yields the degree of difference between the two. The larger the difference, the lower the similarity between the high-resolution feature image produced by the super-resolution network model and the superimposed image (the semantically enhanced high-resolution image), that is, the poorer the quality of the high-resolution feature image; conversely, the smaller the difference, the greater the similarity and the better the quality. The sum of the degrees of difference can therefore be calculated and taken as the error of the super-resolution network model, where this error refers to the error the model makes in converting an image into a high-resolution image: the larger the error, the poorer the quality of the high-resolution image the model produces, and the smaller the error, the better that quality.
For example, as shown in fig. 2, assume that the super-resolution network model consists of two successive convolution layers, each containing N convolution kernels of size k × k, where N may take a value in [20, 100] and k may be 3 or 5. The image processing apparatus can obtain the high-resolution sub-image and the low-resolution sub-image of each frame of sample image in the N frames of sample images, input the low-resolution sub-image of each frame into the super-resolution network model for processing to obtain the high-resolution feature image corresponding to each frame, and extract the feature information of each frame of high-resolution feature image, denoted f_W(x_j), where x_j represents the j-th frame of sample image. The target sub-image is input into the image semantic network model for semantic recognition to obtain the semantic feature image, which is superimposed with the high-resolution sub-image to obtain the superimposed image; the feature information of the superimposed image is extracted and denoted f_s(y_j) + z_j, where y_j is the target sub-image corresponding to the j-th frame of sample image, f_s(y_j) denotes the feature information of the semantic feature image of that target sub-image, and z_j denotes the feature information of the high-resolution sub-image of the j-th frame of sample image. The feature information of each frame of high-resolution feature image is compared with the feature information of the corresponding superimposed image to obtain the degree of difference between the high-resolution feature image of each frame of sample image and the superimposed image of the corresponding sample image; the sum of the degrees of difference is calculated and taken as the error, denoted W, of the super-resolution network model. The error of the super-resolution network can be expressed by equation (1):

W = Σ_{j=1}^{N} MSE(f_W(x_j), f_s(y_j) + z_j)    (1)

where MSE(f_W(x_j), f_s(y_j) + z_j) in equation (1) denotes the degree of difference between the feature information of the superimposed image of the j-th frame of sample image and the feature information of the high-resolution feature image of the j-th frame of sample image.
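Equation (1) can be implemented directly as a sum of per-frame mean-squared errors; the toy feature arrays below are illustrative only.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two feature arrays."""
    return np.mean((a - b) ** 2)

def hyper_network_error(fw, fs, z):
    """Equation (1): W = sum_j MSE(f_W(x_j), f_s(y_j) + z_j).

    fw[j] -- feature information of the j-th high-resolution feature image
             (the super-resolution network's output for the j-th low-res sub-image)
    fs[j] -- feature information of the j-th semantic feature image
    z[j]  -- feature information of the j-th high-resolution sub-image
    """
    return sum(mse(fw[j], fs[j] + z[j]) for j in range(len(fw)))

# Two toy frames: the first matches its overlay exactly, the second is off by 1.
fw = [np.ones((2, 2)), np.zeros((2, 2))]
fs = [np.full((2, 2), 0.25), np.full((2, 2), 0.5)]
z = [np.full((2, 2), 0.75), np.full((2, 2), 0.5)]
print(hyper_network_error(fw, fs, z))  # 1.0 (0.0 from frame 1, 1.0 from frame 2)
```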
In an embodiment, the image processing apparatus may set a weight for the image output by the image semantic network model, process the semantic feature image of each frame of sample image according to the weight to obtain a processed semantic feature image, and superimpose the high-resolution sub-image of each frame of sample image with the processed semantic feature image of the corresponding sample image to obtain a superimposed image.
The image processing apparatus can set the weight according to the scenario or the user's requirements. The larger the weight value, the more information the semantic feature image contributes to the superimposed image and the sharper the superimposed image, so the high-resolution image output by the super-resolution network model is closer to the semantic feature image; conversely, the smaller the weight value, the less information the semantic feature image contributes, the lower the sharpness of the superimposed image, and the closer the output high-resolution image is to the target sub-image.
For example, assume that the weight set for the image output by the image semantic network model is λ. The semantic feature image corresponding to each frame of sample image is processed with the weight λ to obtain the processed semantic feature image, and the high-resolution sub-image of each frame of sample image is superimposed with the processed semantic feature image of the corresponding sample image to obtain the superimposed image. The feature information of the superimposed image can be denoted λ·f_s(y_j) + z_j, where λ·f_s(y_j) is the feature information of the processed semantic feature image. The error of the super-resolution network can then be expressed by equation (2):

W = Σ_{j=1}^{N} MSE(f_W(x_j), λ·f_s(y_j) + z_j)    (2)
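Equation (2) differs from equation (1) only in the λ factor applied to the semantic features. The sketch below shows how the weight shifts the reference the network is trained toward; the toy arrays are illustrative.

```python
import numpy as np

def weighted_error(fw, fs, z, lam):
    """Equation (2): W = sum_j MSE(f_W(x_j), lam * f_s(y_j) + z_j).

    The semantic feature information fs[j] is scaled by the weight lam
    before being overlaid on the high-resolution sub-image features z[j].
    """
    return sum(np.mean((fw[j] - (lam * fs[j] + z[j])) ** 2)
               for j in range(len(fw)))

fw = [np.ones((2, 2))]
fs = [np.full((2, 2), 0.5)]
z = [np.zeros((2, 2))]
# With lam = 2.0 the overlay is 2*0.5 + 0 = 1.0, matching fw exactly.
print(weighted_error(fw, fs, z, lam=2.0))  # 0.0
# With lam = 0.0 the semantic features drop out and the overlay is z alone.
print(weighted_error(fw, fs, z, lam=0.0))  # 1.0
```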
In one embodiment, step S12 includes: inputting a target sub-image into the image semantic network model and performing semantic recognition through the multilayer neural network included in the model to output multiple frames of candidate feature images, with each layer of the network outputting one frame of candidate feature image; performing grey-scale processing on each frame of candidate feature image to obtain a grey-scale image; and determining a parameter value for each frame of grey-scale image and taking the grey-scale image with the largest parameter value as the semantic feature image of the sample image corresponding to the target sub-image, where the parameter value is determined according to the sharpness of the grey-scale image and/or the amount of information it provides.
To output a higher-quality semantic feature image, the image processing apparatus can input the target sub-image into the image semantic network model and perform semantic recognition through its multilayer neural network to output multiple frames of candidate feature images. Grey-scale processing is performed on each frame of candidate feature image to obtain a grey-scale image, a parameter value is determined for each frame of grey-scale image, and the grey-scale image with the largest parameter value is taken as the semantic feature image of the sample image corresponding to the target sub-image; that is, the grey-scale image with a sharp edge structure that provides rich detail feature information serves as the semantic feature image. The network parameters of the super-resolution network model can then be trained with these higher-quality semantic feature images to obtain a super-resolution network model that outputs high-quality high-resolution images.
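A possible scoring rule for picking the semantic feature image is sketched below, combining a sharpness measure with an information measure. The specific Laplacian-variance and histogram-entropy scores are assumptions; the patent only requires the parameter value to depend on sharpness and/or information content.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness proxy: variance of a discrete Laplacian response."""
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def entropy(gray, bins=32):
    """Information proxy: Shannon entropy of the grey-level histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pick_semantic_feature(candidates):
    """Score each candidate grey-scale image and keep the highest scorer."""
    scores = [laplacian_variance(g) + entropy(g) for g in candidates]
    return candidates[int(np.argmax(scores))]

flat = np.full((16, 16), 0.5)            # blurry, low-information candidate
rng = np.random.default_rng(0)
detailed = rng.random((16, 16))          # sharp, high-information candidate
best = pick_semantic_feature([flat, detailed])
print(best is detailed)  # True
```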
S202, judging whether the error is smaller than or equal to a preset error value. The image processing device may determine whether the error is smaller than or equal to the preset error value. When the error is smaller than or equal to the preset error value, the hyper-division network model can output a high-resolution image of higher quality, and step S204 may be performed; otherwise, when the error is greater than the preset error value, the hyper-division network model cannot yet output a high-resolution image of higher quality, and step S203 may be performed.
S203, adjusting the network parameters of the hyper-division network model.
When the error is greater than the preset error value, the network parameters of the hyper-division network model are adjusted and S201 is executed again, repeating until the error of the hyper-division network model is smaller than or equal to the preset error value, so that the hyper-division network model can output a high-quality high-resolution image.
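The S201–S203 loop can be sketched as a small driver function. Here `compute_error` and `adjust` stand in for the error computation and the parameter-update step, whose concrete forms are described elsewhere; the function name and the iteration cap are assumptions added for the sketch.

```python
def train_hyper_division(model_params, compute_error, adjust, preset_error=1e-3,
                         max_iters=1000):
    """Iterate S201-S203: compute the model error, and while it exceeds the
    preset error value, adjust the network parameters and recompute.

    compute_error(params) -> float   # S201, e.g. the sum of difference degrees
    adjust(params)        -> params  # S203, one parameter-update step
    """
    err = compute_error(model_params)          # S201
    for _ in range(max_iters):
        if err <= preset_error:                # S202
            return model_params, err           # model ready for S204/S205
        model_params = adjust(model_params)    # S203
        err = compute_error(model_params)      # back to S201
    return model_params, err
```

With a toy error function and a halving update, the loop terminates as soon as the error drops to or below the preset value, mirroring the flow of steps S201 through S203.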
And S204, acquiring a target image needing super-resolution processing.
When the error of the hyper-resolution network model is smaller than or equal to the preset error value, the hyper-resolution network model can output a high-resolution image of higher quality, and the image processing device can acquire a target image that needs super-resolution processing.
And S205, inputting the target image into the hyper-resolution network model for processing to obtain a high-resolution image.
And inputting the target image into a hyper-resolution network model for processing to obtain a high-resolution image, so that more detail characteristic information and edge characteristic information with higher definition can be obtained from the high-resolution image.
It can be seen that, by implementing the method described in fig. 2, since the network parameters of the hyper-resolution network model are obtained by adjusting according to a large number of sample images and the semantic feature images of each frame of sample image, and the semantic feature images include the detail feature information and the edge structure information of the sample images, the high-resolution image obtained by the hyper-resolution network model can provide more detail feature information and high-definition edge structure information, thereby improving the quality of the high-resolution image.
Please refer to fig. 4, which is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus described in this embodiment comprises:
an acquiring module 401, configured to acquire a target image that needs to be super-resolution processed.
A processing module 402, configured to input the target image into a hyper-resolution network model for processing, so as to obtain a high-resolution image;
the network parameters of the hyper-division network model are obtained by adjusting according to a plurality of frames of sample images and semantic feature maps corresponding to the sample images of each frame, and the semantic feature maps are obtained by performing semantic recognition through an image semantic network model.
A determining module 403, configured to determine an error of the hyper-division network model according to the multiple frames of sample images and the semantic feature map corresponding to each frame of the sample images.
An adjusting module 404, configured to adjust a network parameter of the hyper-division network model when the error is greater than a preset error value.
The determining module 403 is specifically configured to obtain a high-resolution sub-image and a low-resolution sub-image corresponding to each frame of sample image in multiple frames of sample images; inputting each frame of target sub-image into the image semantic network model for semantic recognition to obtain a semantic feature image corresponding to each frame of sample image, wherein the target sub-image is a high-resolution sub-image or a low-resolution sub-image corresponding to any sample image in the multi-frame sample images; inputting each frame of low-resolution sub-image into the hyper-division network model for processing to obtain a high-resolution characteristic image of each frame of sample image; superposing the high-resolution sub-image of each frame of sample image and the semantic feature image of the corresponding sample image to obtain a superposed image; determining the difference degree between the high-resolution characteristic image of each frame of sample image and the superposed image of the corresponding sample image; and calculating the sum of the difference degrees, and taking the sum of the difference degrees as the error of the hyper-division network model.
And a setting module 405, configured to set a weight for the image output by the image semantic network model.
The processing module 402 is further configured to process the semantic feature image of each frame of the sample image according to the weight, so as to obtain a processed semantic feature map.
The determining module 403 is specifically configured to superimpose the high-resolution sub-image of each frame of the sample image and the processed semantic feature image of the corresponding sample image to obtain a superimposed image.
The determining module 403 is specifically configured to input the target sub-image into the image semantic network model, perform semantic recognition through a multilayer neural network included in the image semantic network model, and output multiple candidate feature images, where each layer of the neural network outputs one candidate feature image; carrying out gray processing on each frame of candidate characteristic image to obtain a gray image; and determining a parameter value of each frame of the gray scale image, and taking the gray scale image with the maximum parameter value as a semantic feature image of a sample image corresponding to the target sub-image, wherein the parameter value is determined according to the definition of the gray scale image and/or the information quantity provided by the gray scale image.
The obtaining module 401 is further configured to obtain a type of the target image; and determining a hyper-segmentation network model matched with the type of the target image.
The processing module 402 is configured to input the target image into a hyper-resolution network model matched with the type of the target image, and process the target image to obtain a high-resolution image.
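One minimal way to realize the type-matched model lookup described above is a registry keyed by image type. The type labels, the dictionary-based lookup, and the fallback behavior are illustrative assumptions; the patent only states that a hyper-division network model matched with the type of the target image is determined.

```python
def select_model(image_type, model_registry, default=None):
    """Pick the hyper-division network model trained for this image type.

    model_registry maps a type label (e.g. 'portrait', 'landscape', 'text')
    to a model object; labels and the lookup scheme are assumptions.
    """
    model = model_registry.get(image_type, default)
    if model is None:
        raise KeyError(f"no hyper-division model registered for type {image_type!r}")
    return model
```

The target image would then be passed to the returned model for super-resolution processing, so that each image type is handled by parameters trained on samples of that type.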
It can be seen that, by implementing the apparatus described in fig. 4, since the network parameters of the hyper-resolution network model are obtained by adjusting according to a large number of sample images and the semantic feature images of each frame of sample image, and the semantic feature images include the detail feature information and the edge structure information of the sample images, the high-resolution image obtained by the hyper-resolution network model can provide more detail feature information and provide high-definition edge structure information, thereby improving the quality of the high-resolution image.
Please refer to fig. 5, which is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device includes: the system comprises a processor 501, a memory 502, a communication interface 503 and a power supply 504, wherein the processor 501, the memory 502, the communication interface 503 and the power supply 504 are connected with each other through a bus.
The processor 501 may be one or more CPUs. Where the processor 501 is one CPU, the CPU may be a single-core CPU or a multi-core CPU. The processor 501 may include a modem for performing modulation or demodulation processing on signals received by the transceiver 805.
Memory 502 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), and CD-ROM, and the memory 502 is used to store instructions, an operating system, various applications, and data.
The communication interface 503 is connected to a forwarding plane device or other control plane device. For example, the communication interface 503 includes a plurality of interfaces respectively connected to a plurality of terminals or connected to a forwarding plane device. The communication interface 503 may be a wired interface, a wireless interface, or a combination thereof. The wired interface may be, for example, an ethernet interface. The ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless interface may be, for example, a Wireless Local Area Network (WLAN) interface, a cellular network interface, or a combination thereof.
And a power supply 504 for supplying power to the control plane device.
The memory 502 is also used to store program instructions. The processor 501 may call the program instructions stored in the memory 502 to implement the image processing method according to the embodiments of the present application.
Based on the same inventive concept, the principle by which the control plane device provided in the embodiment of the present invention solves the problem is similar to that of the method embodiments of the present invention; therefore, for the implementation and beneficial effects of the control plane device, reference may be made to the implementation and beneficial effects of the method, and for brevity, detailed descriptions are omitted here.
The present invention further provides a computer-readable storage medium on which a computer program is stored. For the implementation and beneficial effects of the program in solving the problem, reference may be made to the implementation and beneficial effects of the image processing method shown in fig. 1 and fig. 2, and details are not repeated here.
The embodiment of the present invention further provides a computer program product. The computer program product includes a non-volatile computer-readable storage medium storing a computer program, and when executed, the computer program causes the computer to execute the steps of the image processing method in the embodiments corresponding to fig. 1 and fig. 2. For the implementation and beneficial effects of the computer program product in solving the problem, reference may be made to the implementation and beneficial effects of the image processing method in fig. 1 and fig. 2, and repeated parts are not described again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring a target image needing super-resolution processing;
inputting the target image into a hyper-resolution network model for processing to obtain a high-resolution image;
the network parameters of the hyper-division network model are obtained by adjusting according to a plurality of frames of sample images and semantic feature maps corresponding to the sample images of each frame, and the semantic feature maps are obtained by performing semantic recognition through an image semantic network model;
the method further comprises the following steps:
inputting a low-resolution sub-image corresponding to each frame of sample image in the multi-frame sample images into the hyper-division network model for processing to obtain a high-resolution characteristic image of each frame of sample image;
superposing a high-resolution sub-image corresponding to each frame of sample image in the multi-frame sample images with a semantic feature image of the corresponding sample image to obtain a superposed image;
determining the difference degree between the high-resolution characteristic image of each frame of sample image and the superposed image of the corresponding sample image;
calculating the sum of the difference degrees, and taking the sum of the difference degrees as the error of the hyper-division network model;
and when the error is larger than a preset error value, adjusting the network parameters of the hyper-division network model.
2. The method according to claim 1, wherein before the low-resolution sub-images corresponding to each frame of sample images in the multiple frames of sample images are input into the hyper-division network model and processed to obtain the high-resolution feature images of each frame of sample images, the method further comprises:
acquiring a high-resolution sub-image and a low-resolution sub-image corresponding to each frame of sample image in the multi-frame sample images;
and inputting each frame of target sub-image into the image semantic network model for semantic recognition to obtain a semantic feature image corresponding to each frame of sample image, wherein the target sub-image is a high-resolution sub-image or a low-resolution sub-image corresponding to any sample image in the multi-frame sample images.
3. The method of claim 2, further comprising:
setting weights for the images output by the image semantic network model;
processing the semantic feature image of each frame of sample image according to the weight to obtain a processed semantic feature image;
the superimposing the high-resolution sub-image of each frame of the sample image and the semantic feature image of the corresponding sample image to obtain a superimposed image includes:
and superposing the high-resolution sub-image of each frame of sample image and the processed semantic feature image of the corresponding sample image to obtain a superposed image.
4. The method according to claim 2, wherein the image semantic network model includes a multilayer neural network, and the step of inputting each frame of target sub-image into the image semantic network model for semantic recognition to obtain a semantic feature image corresponding to each frame of sample image includes:
inputting the target sub-images into the image semantic network model, performing semantic recognition through a multilayer neural network included in the image semantic network model to output a plurality of frames of candidate characteristic images, and outputting one frame of candidate characteristic image through each layer of the neural network;
carrying out gray level processing on each frame of candidate characteristic image to obtain a gray level image;
and determining a parameter value of each frame of the gray scale image, and taking the gray scale image with the maximum parameter value as a semantic feature image of a sample image corresponding to the target sub-image, wherein the parameter value is determined according to the definition of the gray scale image and/or the information quantity provided by the gray scale image.
5. The method according to any one of claims 1-4, further comprising:
acquiring the type of the target image;
determining a hyper-divided network model matched with the type of the target image;
the step of inputting the target image into a hyper-resolution network model for processing to obtain a high-resolution image comprises the following steps:
and inputting the target image into a hyper-resolution network model matched with the type of the target image for processing to obtain a high-resolution image.
6. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target image needing super-resolution processing;
the processing module is used for inputting the target image into a hyper-resolution network model for processing to obtain a high-resolution image;
the network parameters of the hyper-division network model are obtained by adjusting according to a plurality of frames of sample images and semantic feature maps corresponding to the sample images of each frame, and the semantic feature maps are obtained by performing semantic recognition through an image semantic network model;
the determining module is used for inputting the low-resolution sub-image corresponding to each frame of sample image in the multi-frame sample images into the hyper-division network model for processing to obtain a high-resolution characteristic image of each frame of sample image; superposing a high-resolution sub-image corresponding to each frame of sample image in the multi-frame sample images with a semantic feature image of the corresponding sample image to obtain a superposed image; determining the difference degree between the high-resolution characteristic image of each frame of sample image and the superposed image of the corresponding sample image; calculating the sum of the difference degrees, and taking the sum of the difference degrees as the error of the hyper-division network model;
and the adjusting module is used for adjusting the network parameters of the hyper-division network model when the error is larger than a preset error value.
7. The apparatus of claim 6,
the determining module is further configured to obtain a high-resolution sub-image and a low-resolution sub-image corresponding to each frame of sample image in the multiple frames of sample images; and inputting each frame of target sub-image into the image semantic network model for semantic recognition to obtain a semantic feature image corresponding to each frame of sample image, wherein the target sub-image is a high-resolution sub-image or a low-resolution sub-image corresponding to any sample image in the multi-frame sample images.
8. The apparatus of claim 7, further comprising:
the setting module is used for setting weight for the image output by the image semantic network model;
the processing module is further used for processing the semantic feature image of each frame of the sample image according to the weight to obtain a processed semantic feature map;
the determining module is specifically configured to superimpose the high-resolution sub-image of each frame of the sample image with the processed semantic feature image of the corresponding sample image to obtain a superimposed image.
9. The apparatus of claim 7,
the determining module is specifically configured to input the target sub-image into the image semantic network model, perform semantic recognition through a multilayer neural network included in the image semantic network model, and output a plurality of candidate feature images, where each layer of the neural network outputs one candidate feature image; carrying out gray level processing on each frame of candidate characteristic image to obtain a gray level image; and determining a parameter value of each frame of the gray scale image, and taking the gray scale image with the maximum parameter value as a semantic feature image of a sample image corresponding to the target sub-image, wherein the parameter value is determined according to the definition of the gray scale image and/or the information quantity provided by the gray scale image.
10. The apparatus according to any one of claims 6-9, further comprising:
the acquisition module is further used for acquiring the type of the target image; determining a hyper-divided network model matched with the type of the target image;
and the processing module is used for inputting the target image into a hyper-resolution network model matched with the type of the target image for processing to obtain a high-resolution image.
11. An electronic device comprising at least one processor, memory, and instructions stored on the memory and executed by the at least one processor, wherein the at least one processor executes the instructions to implement the steps of the image processing method of any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the steps of the image processing method according to any one of claims 1 to 5.
CN201880093293.9A 2018-09-29 2018-09-29 Image processing method, device and equipment Active CN112088393B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/108891 WO2020062191A1 (en) 2018-09-29 2018-09-29 Image processing method, apparatus and device

Publications (2)

Publication Number Publication Date
CN112088393A (en) 2020-12-15
CN112088393B (en) 2022-09-23

Family

ID=69952653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880093293.9A Active CN112088393B (en) 2018-09-29 2018-09-29 Image processing method, device and equipment

Country Status (2)

Country Link
CN (1) CN112088393B (en)
WO (1) WO2020062191A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016542A (en) * 2020-05-08 2020-12-01 珠海欧比特宇航科技股份有限公司 Urban waterlogging intelligent detection method and system
CN111709878B (en) * 2020-06-17 2023-06-23 北京百度网讯科技有限公司 Face super-resolution implementation method and device, electronic equipment and storage medium
CN111932463B (en) * 2020-08-26 2023-05-30 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN113592709B (en) * 2021-02-19 2023-07-25 腾讯科技(深圳)有限公司 Image super processing method, device, equipment and storage medium
CN116883236B (en) * 2023-05-22 2024-04-02 阿里巴巴(中国)有限公司 Image superdivision method and image data processing method
CN116612466B (en) * 2023-07-20 2023-09-29 腾讯科技(深圳)有限公司 Content identification method, device, equipment and medium based on artificial intelligence

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105793891A (en) * 2013-11-30 2016-07-20 夏普株式会社 Method and device for determining a high resolution output image
CN106780363A (en) * 2016-11-21 2017-05-31 北京金山安全软件有限公司 Picture processing method and device and electronic equipment
WO2018035805A1 (en) * 2016-08-25 2018-03-01 Intel Corporation Coupled multi-task fully convolutional networks using multi-scale contextual information and hierarchical hyper-features for semantic image segmentation
CN108537731A (en) * 2017-12-29 2018-09-14 西安电子科技大学 Image super-resolution rebuilding method based on compression multi-scale feature fusion network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080162561A1 (en) * 2007-01-03 2008-07-03 International Business Machines Corporation Method and apparatus for semantic super-resolution of audio-visual data
CN108229455B (en) * 2017-02-23 2020-10-16 北京市商汤科技开发有限公司 Object detection method, neural network training method and device and electronic equipment
CN107169450A (en) * 2017-05-15 2017-09-15 中国科学院遥感与数字地球研究所 The scene classification method and system of a kind of high-resolution remote sensing image
CN108428212A (en) * 2018-01-30 2018-08-21 中山大学 A kind of image magnification method based on double laplacian pyramid convolutional neural networks

Also Published As

Publication number Publication date
WO2020062191A1 (en) 2020-04-02
CN112088393A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN112088393B (en) Image processing method, device and equipment
CN110660037B (en) Method, apparatus, system and computer program product for face exchange between images
CN109493350B (en) Portrait segmentation method and device
CN110555795A (en) High resolution style migration
CN111507333B (en) Image correction method and device, electronic equipment and storage medium
CN110751649B (en) Video quality evaluation method and device, electronic equipment and storage medium
WO2023035531A1 (en) Super-resolution reconstruction method for text image and related device thereof
CN113704531A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112990219B (en) Method and device for image semantic segmentation
CN110619334B (en) Portrait segmentation method based on deep learning, architecture and related device
CN114863539A (en) Portrait key point detection method and system based on feature fusion
CN109523558A (en) A kind of portrait dividing method and system
CN114529982A (en) Lightweight human body posture estimation method and system based on stream attention
US20230072445A1 (en) Self-supervised video representation learning by exploring spatiotemporal continuity
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN113901928A (en) Target detection method based on dynamic super-resolution, and power transmission line component detection method and system
CN112037239B (en) Text guidance image segmentation method based on multi-level explicit relation selection
CN113822114A (en) Image processing method, related equipment and computer readable storage medium
CN110120009B (en) Background blurring implementation method based on salient object detection and depth estimation algorithm
CN111783734A (en) Original edition video identification method and device
US20230153965A1 (en) Image processing method and related device
CN113505247B (en) Content-based high-duration video pornography content detection method
CN111833413B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113643348B (en) Face attribute analysis method and device
CN113723289B (en) Image processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant