WO2019192316A1 - Image-related processing method and apparatus, device, and storage medium - Google Patents

Image-related processing method and apparatus, device, and storage medium

Info

Publication number: WO2019192316A1 (application PCT/CN2019/078685, CN2019078685W)
Authority: WIPO (PCT)
Prior art keywords: image, layer, convolution, channel, data
Other languages: English (en), French (fr)
Inventor: 陈法圣 (CHEN, Fasheng)
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司
Priority to EP19782328.9A (patent EP3748572A4)
Publication of WO2019192316A1
Priority to US16/928,383 (patent US11836891B2)

Classifications

    • H04N21/440263 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • G06N3/04 - Neural networks; Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G06T3/4046 - Scaling of whole images or parts thereof using neural networks
    • G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • H04N21/2393 - Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/41407 - Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/4356 - Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen, by altering the spatial resolution
    • H04N21/4666 - Learning process for intelligent management characterized by learning algorithms using neural networks, e.g. processing the feedback provided by the user

Definitions

  • The present invention relates to the field of image processing technologies, and in particular, to an image-related processing method and apparatus, a device, and a storage medium.
  • Most existing super-resolution processing methods first construct, by simulation, multiple sets of paired low-resolution and high-resolution images, and then learn the mapping relationship from low-resolution images to high-resolution images through machine learning.
  • The simulation process includes: acquiring a set of high-resolution images and interpolating each image to reduce its resolution, thereby obtaining multiple images in which low resolution and high resolution correspond one to one.
  • The resolution-reducing interpolation can be performed by, for example, bilinear interpolation, yielding multiple sets of one-to-one corresponding low-resolution and high-resolution images.
  • After the mapping is learned, for a given low-resolution image the corresponding high-resolution image can be found and output to the user.
  • Embodiments of the present invention provide an image-related processing method, apparatus, device, and storage medium, which can obtain a superior image processing model and complete super-resolution processing of an image with high quality.
  • An embodiment of the present invention provides a method for generating an image processing model, which is executed by a computing device and includes:
  • generating, based on a convolutional neural network, an initial model for performing image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer;
  • acquiring a training image, the training image comprising a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
  • inputting image data of the first image from the input layer to the intermediate layer for convolution calculation, and obtaining from the output layer result data that includes channel output data of N*N channels, where N is a positive integer greater than or equal to 2; and
  • updating the convolution kernel parameters in the intermediate layer of the initial model according to the result data and the second image, and generating an image processing model from the initial model after the parameter update.
  • An embodiment of the present invention further provides an image processing method, which is executed by a computing device and includes: acquiring target image data of an image to be processed; processing the target image data with an image processing model to obtain result data that includes channel output data of N*N channels; and generating, according to the result data, a target image whose resolution is N times that of the image to be processed.
  • The image processing model includes an input layer, an output layer, and an intermediate layer, where the convolution kernel parameters in the intermediate layer are determined through parameter update based on the training image; the training image includes a first image and a second image,
  • and the first image is an image obtained by performing N-fold resolution reduction on the second image.
  • The intermediate layer in the image processing model is obtained by updating the convolution kernel parameters of the intermediate layer according to the second image and the result data,
  • where the result data is obtained by the pre-update intermediate layer performing convolution calculation on the image data of the first image, and includes the channel output data of the N*N channels.
  • An embodiment of the present invention further provides an apparatus for generating an image processing model, including:
  • a generating module, configured to generate, based on a convolutional neural network, an initial model for performing image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer;
  • an acquiring module, configured to acquire a training image, where the training image includes a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
  • a calculation module, configured to input image data of the first image from the input layer to the intermediate layer for convolution calculation, and obtain from the output layer result data of the convolution calculation, the result data including channel output data of N*N channels, where N is a positive integer greater than or equal to 2; and
  • a processing module, configured to update the convolution kernel parameters in the intermediate layer of the initial model according to the result data and the second image, and generate an image processing model from the initial model after the parameter update.
  • An embodiment of the present invention further provides an image processing apparatus, including:
  • an acquiring module, configured to acquire target image data of an image to be processed;
  • a processing module, configured to process the target image data by using an image processing model to obtain result data including channel output data of N*N channels, where N is a positive integer greater than or equal to 2; and
  • a generating module, configured to generate, according to the result data, a target image whose resolution is N times that of the image to be processed;
  • where the image processing model includes an input layer, an output layer, and an intermediate layer; the convolution kernel parameters in the intermediate layer are determined through parameter update based on the training image, the training image including a first image and a second image,
  • and the first image is an image obtained by performing N-fold resolution reduction on the second image.
  • The intermediate layer in the image processing model is obtained by updating the convolution kernel parameters of the intermediate layer according to the second image and the result data,
  • where the result data is obtained by the pre-update intermediate layer performing convolution calculation on the image data of the first image, and includes the channel output data of the N*N channels.
  • An embodiment of the present invention further provides an image processing apparatus, including a processor and a storage device;
  • the storage device is configured to store computer program instructions; and
  • the processor is configured to invoke the computer program instructions stored in the storage device to implement the above method for generating an image processing model.
  • An embodiment of the present invention further provides another image processing apparatus, including a processor and a storage device;
  • the storage device is configured to store computer program instructions; and
  • the processor is configured to invoke the computer program instructions stored in the storage device to implement the above image processing method.
  • An embodiment of the present invention further provides a computer storage medium storing computer program instructions that, when executed, implement the above method for generating an image processing model or the above image processing method.
  • The embodiments of the present invention can perform training optimization on a model whose intermediate layer is configured for convolution calculation, based on the two images before and after the resolution reduction processing, and finally obtain an image processing model capable of N-times super-resolution processing.
  • Based on the image processing model generated by this special structure and training method, super-resolution processing of images (such as various pictures, video frames, etc.) can be realized quickly, and the speed and efficiency of super-resolution processing are significantly improved.
  • The super-resolution processing of images based on the image processing model is also more accurate and stable.
  • FIG. 1 is a schematic diagram of an image processing process according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a scenario in which an image processing model is applied according to an embodiment of the present invention.
  • FIG. 3a is a schematic diagram of another scenario in which an image processing model is applied according to an embodiment of the present invention.
  • FIG. 3b is a schematic diagram of a video playing interface according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an initial model according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a process of convolution calculation according to an embodiment of the present invention.
  • FIG. 6 is a schematic flow chart of a method for performing training optimization on an initial model by acquiring a first image and a second image according to an embodiment of the present invention.
  • FIG. 7 is a schematic flow chart of a method for super-resolution processing of an image according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of an optimization method of an image processing model according to an embodiment of the present invention.
  • FIG. 9 is a schematic flow chart of a method for generating an image processing model based on an optimized initial model according to an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of an apparatus for optimizing an image processing model according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
  • In the embodiments of the present invention, the processing of images mainly refers to super-resolution processing of images.
  • Super-resolution processing processes a low-resolution image to obtain a corresponding high-resolution image.
  • For example, by performing 2-times super-resolution processing on a 540P-resolution image, a 1080P-resolution image can be obtained.
  • During super-resolution processing, the representation value of each pixel in the low-resolution image is first obtained, and then, based on these representation values, the trained image processing model computes the representation values of the pixels of the high-resolution image.
  • FIG. 1 is a schematic diagram of an image processing process according to an embodiment of the present invention.
  • The image processing process includes: after a low-resolution image is input, calling the image processing model to perform N-times super-resolution processing and output result data of N*N channels; new representation values are obtained from this data, and the target image is generated based on the new representation values.
  • The representation value may be any one or more of: the lightness value V of a pixel (as in HSV), the lightness value L of a pixel (as in HSL), the luminance channel data of a pixel, that is, the Y channel value, the R (red), G (green), and B (blue) values of a pixel, the gray values of the channels of a multispectral camera, or the gray values of the channels of special cameras (infrared cameras, ultraviolet cameras, depth cameras), and so on.
  • Performing N-times super-resolution processing on a low-resolution image mainly means transforming, based on the image processing model, each pixel in the low-resolution image into N*N pixels.
  • For example, if the resolution of the low-resolution image needs to be enlarged 2 times, that is, 2-times super-resolution processing is performed, then each pixel in the low-resolution image is processed into 2*2 = 4 pixels; the representation value of one pixel is computed by the image processing model to obtain four values, which are the representation values of the corresponding pixels of the high-resolution image.
  • As shown in FIG. 1, the image processing model outputs result data that includes 2*2, that is, 4 channels of channel output data.
  • A high-resolution image 200 can then be obtained based on the result data.
  • For the vertex pixel 101 of the low-resolution image 100, the image processing model calculates four values based on the representation value of the vertex pixel 101; these four values correspond to the values 1011, 1012, 1013, and 1014 on the channel output data of the 2*2 channels, respectively.
  • The four values are the representation values of the upper-left pixel 1021, upper-right pixel 1022, lower-left pixel 1023, and lower-right pixel 1024 in the upper-left corner region of the high-resolution image 200 (the region of the left dotted frame in FIG. 1).
  • The image position of each new pixel in the target image is determined with reference to the image position of the original pixel in the low-resolution image 100.
  • The vertex pixel 101 corresponds to the four pixels of the upper-left corner region, and the pixel 102 next to it (the pixel in the first row, second column) corresponds, on the high-resolution image 200, to the 2*2 region to the right of the region corresponding to the vertex pixel 101; pixel 102 thus corresponds to the 4 pixels in that region.
  • By analogy, the other pixels in the low-resolution image 100 correspond to the corresponding position regions on the high-resolution image 200; if the super-resolution is 3 times, the vertex pixel 101 corresponds to the 3*3 = 9 pixels of the upper-left corner region, and so on, as illustrated by the rearrangement sketch below.
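  • As a concrete illustration of this rearrangement (not part of the patent text; array names and layout conventions are illustrative), the following NumPy sketch rearranges N*N-channel result data into the N-times-enlarged target image:

```python
import numpy as np

def rearrange_channels(result_data: np.ndarray, n: int) -> np.ndarray:
    """Rearrange (n*n, H, W) channel output data into an (n*H, n*W) image.

    Channel k holds the value for sub-position (k // n, k % n) of each
    original pixel, matching the layout described for FIG. 1 (channel 0 is
    the upper-left pixel of each n*n block, channel 1 the upper-right, ...).
    """
    c, h, w = result_data.shape
    assert c == n * n, "expected n*n channels"
    target = np.empty((h * n, w * n), dtype=result_data.dtype)
    for k in range(c):
        dy, dx = divmod(k, n)            # sub-position inside each n*n block
        target[dy::n, dx::n] = result_data[k]
    return target

# Example: 2-times super-resolution of a 2x2 low-resolution image
result = np.arange(4 * 2 * 2).reshape(4, 2, 2)   # 4 channels, each 2x2
print(rearrange_channels(result, 2).shape)        # (4, 4)
```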
  • The image processing model mentioned above can be generated based on a convolutional neural network.
  • Image processing models with different numbers of output channels can be configured; the values of the channel output data on each output channel are the representation values of the pixels of the super-resolved target image at the positions corresponding to the original pixels of the low-resolution image.
  • With several image processing models, each having a different number of output channels, after a low-resolution image requiring super-resolution processing is received, a target image processing model can be selected from the multiple image processing models according to the required super-resolution multiple, so that the low-resolution image is super-resolved by the target image processing model to satisfy the user's super-resolution requirement.
  • For example, an image processing model with 2*2 output channels can be set up to satisfy 2-times super-resolution processing, and an image processing model with 3*3 output channels can be set up to satisfy 3-times super-resolution processing.
  • In this way, users can select different resolutions when requesting to play a video or image; for example, for a 540P-resolution video, if the user selects 2-times super-resolution processing, 1080P-resolution video can be watched, and if the user selects 4-times super-resolution processing, 2K-resolution video can be watched.
  • Image processing models require training optimization.
  • The training optimization of the initial model is completed based on training data, yielding an image processing model that can perform image super-resolution processing.
  • If needed, the trained model can be treated as a new initial model and further optimized, so as to complete super-resolution processing of images more accurately.
  • FIG. 2 is a schematic diagram of a scenario in which the above-mentioned image processing model is applied according to an embodiment of the present invention.
  • An image processing model that has been optimized through training is configured in an image playback application installed on a smart terminal such as a smartphone, a tablet, a personal computer (PC), or a smart wearable device.
  • The video playback application may be any application capable of playing video and/or displaying images; the user opens the application on the smart terminal and may request to play a video or display an image.
  • In one embodiment, when the user requests to play a video, the smart terminal requests the video from the server that provides the video playing service; after receiving the low-resolution video returned by the server, the smart terminal extracts its video frames, determines each video frame as a low-resolution image, and inputs each low-resolution image as an image to be processed into the image processing model. The image processing model performs super-resolution processing on the image to be processed, correspondingly obtaining a high-resolution image, that is, a target image, each target image corresponding to one video frame.
  • The image playing application plays the target images sequentially according to factors such as the playing time sequence, thereby realizing playback of high-resolution video.
  • Alternatively, the smart terminal may cache the obtained target images until a preset amount of data is reached, and then play the video based on factors such as the playing time sequence.
  • For a single image, the smart terminal can directly perform super-resolution processing on it as the image to be processed, and display the obtained target image to the user.
  • In this way, the smart terminal only downloads low-resolution video data, which reduces bandwidth occupation and saves the smart terminal's traffic.
  • On one hand, the server only needs to store low-resolution video data, which saves the server's storage space; on the other hand, the smart terminal allows the user to watch higher-definition video after super-resolution processing of the low-resolution images.
  • The user can request switching of the display between the low-resolution image and the high-resolution image by clicking a toggle button.
  • The smart terminal can also receive user operations through an application interface: only when the user clicks the high-resolution play button on the application interface does the smart terminal perform super-resolution processing on the low-resolution image to obtain the high-resolution target image and play it to the user.
  • In another embodiment, the trained image processing model is configured in an application server that provides a video playing service or an image display service, and a corresponding video playback application is installed on the smart terminal.
  • The application server may store low-resolution videos or images.
  • By default, the application server sends the low-resolution video or image data to the smart terminal, where it is presented to the user through the installed video playback application.
  • If the user wants to watch a high-resolution video or image, the user can click the high-resolution play button set on the application interface,
  • and the smart terminal sends an N-times high-resolution playback request to the server.
  • The server responds to the high-resolution play request, determines the low-resolution video data to be subjected to super-resolution processing, obtains the images to be processed (for example, a single low-resolution picture or the video frames in a low-resolution video), performs super-resolution processing on them through the image processing model, and finally obtains the target images, which are transmitted to the smart terminal.
  • The smart terminal displays the high-resolution image through the image playback application, or plays the high-resolution target images sequentially according to factors such as the playing time sequence, thereby realizing high-resolution video playback.
  • FIG. 3b is a schematic diagram of an image playing interface.
  • The image playing application may be a video player.
  • After the user clicks the "super-resolution" processing button in the lower right corner of the interface, the video playback application performs super-resolution processing on the video or image in the manner corresponding to FIG. 2, or the application server does so in the manner corresponding to FIG. 3a.
  • The "Super Resolution" processing button in the lower right corner of the interface can also be configured as multiple buttons of other types, such as a "2x Super Resolution" icon button, a "4x Super Resolution" icon button, and the like.
  • In one embodiment, the server may also switch between the low-resolution image and the high-resolution image based on the network resource information between the server and the terminal: when the network resource information satisfies a condition (for example, when the bandwidth is sufficient), the target image obtained by converting the low-resolution image into high resolution is sent to the user, and when the network resource information does not satisfy the condition (for example, when the bandwidth is small), the low-resolution image is sent to the user.
  • The embodiments of the present invention involve three aspects: the first is the establishment of the initial model and its training optimization to obtain the image processing model; the second is further optimization processing of the image processing model after training optimization; and the third is the process of performing super-resolution processing on images based on the image processing model.
  • The process of generating the image processing model includes first generating, based on a convolutional neural network, an initial model for performing image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer.
  • The input layer has an input channel that is primarily used to input the image data of the training image.
  • The image data refers to the representation value of each pixel of the training image.
  • The representation value may be the lightness value of each pixel, or the luminance value of each pixel, or the Y channel value of each pixel, or any one or more of the R, G, and B values of each pixel, or the gray value of each channel of a multispectral camera, or the gray value of each channel of a special camera (infrared camera, ultraviolet camera, depth camera), and so on; that is, any two-dimensional data can be used to train a corresponding image processing model that processes that data.
  • The input layer of the initial model may also have multiple input channels to facilitate super-resolution processing of multiple kinds of image data.
  • For example, the initial model may have three input channels to allow simultaneous input of the R, G, and B values of an image.
  • The output layer of the initial model has N*N output channels; the channel output data of these channels constitute the result data, and each value in the result data corresponds to the representation value of one pixel on the super-resolved target image.
  • Corresponding to the input, the values in the output data may be: the lightness value of each pixel, or the luminance value of each pixel, or the Y channel value of each pixel, or the R, G, or B value of each pixel, and so on.
  • In one embodiment, when M kinds of values are processed at once, the output layer has M*N*N output channels, where M is a positive integer.
  • For example, for 2-times super-resolution of RGB data (M = 3, N = 2), there are 12 output channels: the first 4 output channels (such as the output channels numbered 1, 2, 3, 4) correspond to the R value, the middle 4 output channels (such as those numbered 5, 6, 7, 8) correspond to the G value, and the last 4 output channels (such as those numbered 9, 10, 11, 12) correspond to the B value.
  • The intermediate layer may include multiple convolution layers.
  • The more convolution layers there are, the more detailed the calculation, the better the image quality of the target image, and the lower the image noise.
  • FIG. 4 is a schematic structural diagram of an initial model of an embodiment of the present invention, including an input layer, an output layer, and an intermediate layer composed of, for example, five convolution layers.
  • The relevant parameters of each convolution layer are shown in Table 1; a model-definition sketch follows below.
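  • As an illustration only, the following PyTorch sketch builds a five-convolution-layer initial model consistent with the channel counts quoted from Table 1 in the text below (1, 16, 32, 32, 32, 4 for 2-times super-resolution); the exact kernel sizes and other Table 1 values are assumptions:

```python
import torch.nn as nn

class SuperResolutionModel(nn.Sequential):
    """Initial model sketch: convolution layers only, no activation layers.

    Channel counts follow the text's reading of Table 1; 3x3 kernels with
    stride 1 and Pad 1 keep width and height unchanged, and the final layer
    outputs N*N = 4 channels for 2-times super-resolution.
    """
    def __init__(self):
        super().__init__(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),
            nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
            nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
            nn.Conv2d(32, 4, kernel_size=3, stride=1, padding=1),
        )
```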
  • The initial model is trained and optimized on training images to finally obtain an image processing model that can be used online.
  • The training images include a large amount of image data, among which are the first image and the second image, where the first image is an image obtained by performing N-fold resolution reduction on the second image; that is, only the high-resolution second image needs to be acquired, and the low-resolution first image is obtained from it by downsampling.
  • Since the high-resolution second image corresponding to the low-resolution first image already exists as a reference standard, it is convenient to measure the initial model's super-resolution capability on the first image, making the training optimization more accurate.
  • The number of input channels of the first convolution layer is generally 1, that is, it inputs the image data of one image to be processed; for each other convolution layer of the intermediate layer, the number of input channels equals the number of output channels of the preceding convolution layer to which it is connected, and its input data is the output data of that preceding convolution layer.
  • For example, the number of input channels of the first convolution layer in Table 1 is 1 and its number of output channels is 16; the number of input channels of the second convolution layer is 16, the same as the number of output channels of the first convolution layer.
  • Except for the last convolution layer of the intermediate layer, the number of output channels of the other convolution layers can be configured as needed; the number of output channels of the last convolution layer is determined by the super-resolution multiple. If the resolution needs to be increased N times, the number of output channels of the last convolution layer should be N*N; in Table 1 it is 4 channels, calculated as 2*2, in order to achieve 2-times super-resolution processing.
  • The convolution kernel size can be customized, for example an odd number greater than 2, and the step size can be 1, which ensures that the convolution calculation of each convolution layer does not shrink the image; the Pad value corresponding to the convolution kernel size and step size then needs to be calculated.
  • The Pad value determines how many pixels are copied outward during the convolution calculation to ensure that no image width or height is lost in each convolution operation; for a 3x3 kernel with step size 1, for example, the Pad value is 1.
  • In the generated initial model, only convolution layers are used in the intermediate layer, and no other calculation layer such as an activation layer is introduced.
  • Based on the configured number of output channels, the first convolution layer includes 16 convolution kernels, and each convolution kernel is a 3x3 matrix composed of different values; these values are the convolution kernel parameters.
  • The convolution kernel parameters in the final image processing model are obtained by training and optimizing the convolution kernel parameters in the initial model; the parameters of different convolution kernels may be completely different or partially the same.
  • The image data of the first image, input from the input layer of the image processing model to the intermediate layer, is a two-dimensional matrix composed of representation values.
  • The image data of the first image is first convolved with each convolution kernel in the first convolution layer, and 16 result data are output.
  • The image data of the first image refers to the representation value of each pixel in the first image; it can be considered that, after the representation values of the pixels of the first image pass through the first convolution layer, 16 values are obtained for each pixel.
  • The second convolution layer consists of 32 sets of convolution kernels; each set has 16 convolution kernels,
  • and each convolution kernel is a 3x3 matrix composed of different values.
  • The input of the second convolution layer is the 16 result data output by the first convolution layer,
  • and its output is 32 result data.
  • For each set of convolution kernels, the 16 result data output by the first convolution layer are convolved one-to-one with the 16 convolution kernels in the set;
  • the convolution calculation yields 16 convolution results, and these 16 convolution results are then summed to obtain the output data corresponding to that set of convolution kernels (one of the 32 output data).
  • Performing this operation for each of the 32 sets of convolution kernels yields the result data of the 32 channels of the second convolution layer.
  • The third convolution layer consists of 32 sets of convolution kernels, each of which has 32 convolution kernels, and its processing method is the same as that of the second layer:
  • the 32 convolution kernels in each set are convolved one-to-one with the 32 result data output from the second convolution layer, and the resulting 32 convolution results are summed
  • to obtain the output data corresponding to that set of convolution kernels.
  • The fourth convolution layer likewise obtains the result data of 32 channels, and the super-resolution output is realized by the 4-channel convolution in the last convolution layer.
  • The fifth convolution layer outputs the result data of four channels, which can be considered to correspond respectively to the values of the upper-left, upper-right, lower-left, and lower-right pixels.
  • The output layer can finally output the result data of the 4 channels directly, or, in one embodiment, the result data can be further processed so that a high-resolution target image, arranged according to the channel output data of the 4 channels, is output directly.
  • In each convolution layer of the intermediate layer, calculating the result data of an output channel comprises: convolving the input data one-to-one with the convolution kernels of a set, and then summing to obtain the result data of the output channel corresponding to that set of convolution kernels (sketched below).
  • Each set of convolution kernels corresponds to one output channel, so there are as many output channels as there are sets of convolution kernels.
  • In other embodiments, other calculation manners may be used: the convolution calculations between the input data and the convolution kernels may be combined in multiple ways, for example the convolutions of each input data with the kernels may be averaged, and so on.
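  • The per-channel rule just described can be sketched as follows in NumPy/SciPy (shapes and names are illustrative; deep-learning frameworks implement the same summation as a single multi-channel convolution, using cross-correlation rather than flipped convolution):

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(inputs: np.ndarray, kernel_sets: np.ndarray) -> np.ndarray:
    """inputs: (C_in, H, W); kernel_sets: (C_out, C_in, k, k).

    Each output channel is the sum of the one-to-one convolutions of the
    input channels with that output channel's set of kernels.
    """
    c_out, c_in = kernel_sets.shape[:2]
    assert c_in == inputs.shape[0]
    outputs = []
    for s in range(c_out):
        # 'same' keeps width/height, mirroring the Pad value in the text
        channel = sum(convolve2d(inputs[i], kernel_sets[s, i], mode="same")
                      for i in range(c_in))
        outputs.append(channel)
    return np.stack(outputs)

# Example: the second layer of the model, 16 -> 32 channels, 3x3 kernels
x = np.random.rand(16, 8, 8)
w = np.random.rand(32, 16, 3, 3)
print(conv_layer(x, w).shape)  # (32, 8, 8)
```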
  • By inputting a large number of first images, the convolution kernel parameters in the initial model can be trained and optimized based on the result data finally output after calculation; after one or more training optimizations, it can be ensured that when a first image is input again, the difference between the representation values of the pixels at the same image position in the target image indicated by the output result data and in the second image is small, for example, the difference of the representation values is smaller than a preset threshold, or the difference is smaller than the preset threshold for a preset proportion of the pixels.
  • Each value in the result data is the representation value of the pixel, on the super-resolved target image, at the position corresponding to the first image.
  • The representation value may be the lightness value, the luminance value, the Y channel value, the RGB value, or the like, of the pixel at the corresponding position on the super-resolved target image.
  • In one embodiment, when the output representation value (that is, the value in the output result data) represents the Y channel value, the Y channel value of a pixel is combined with the other two chrominance (UV) channel values of the same pixel, obtained by interpolation processing or the like, to produce the YUV data of the pixel; finally, the super-resolved target image is arranged according to the image positions of all pixels.
  • In one embodiment, when the output representation value represents a lightness value, the target image is synthesized based on the lightness value V of a pixel together with the hue value H and the saturation value S of the same pixel obtained by interpolation processing, yielding the HSV data of the pixel; finally, the super-resolved target image is arranged according to the image positions of all pixels.
  • Similar processing can be performed on an output luminance value L to obtain HSL data, and the super-resolved target image is finally arranged according to the image positions of all pixels.
  • In one embodiment, when the output representation value represents an RGB value, the R value, G value, and B value of the pixel at each pixel position can be directly combined to obtain the RGB value of each pixel, and finally the super-resolved RGB target image is arranged according to the image positions of all pixels.
  • When the input data of the image to be processed is RGB data, that is, when the representation values of the pixels are RGB values, one method is to configure three image processing models that calculate the R value, the G value, and the B value respectively, each outputting its corresponding result data; the R value, G value, and B value of the target image are obtained from the respective output data and then combined to obtain the target image.
  • During training, the image processing models may likewise be trained for the R values, G values, and B values separately.
  • In another embodiment, the image processing model has three input channels, corresponding to the R value, G value, and B value of each pixel of the image to be processed, and outputs three corresponding groups of N*N channels, each group corresponding to the R value, G value, or B value; the RGB values are then combined to obtain the target image.
  • In one embodiment, when the target image data of the image to be processed is processed, it may be determined whether the acquired target image data consists of RGB values; if so, three image processing models are selected, so that the R, G, and B values are separately processed by the image processing models to obtain the result data of the N*N channels corresponding to each. The same processing can be performed for YUV data, HSV data, HSL data, etc., completing N-times super-resolution processing of the image to be processed by selecting three image processing models.
  • The entire process described above, from inputting the first image to obtaining the result data, is regarded as the initial model's super-resolution processing of the first image.
  • After the processing, the target image is compared with the second image, which mainly includes comparing the representation values of the pixels at each corresponding image position of the target image and the second image.
  • For example, the representation value of the pixel at the top-left corner of the target image is compared with that of the pixel at the top-left corner of the second image to obtain a difference of representation values.
  • In one embodiment, the similarity information between the target image and the second image may be the average of the differences of the representation values over all pixels of the two images; if the average is lower than a first similarity threshold, the optimization condition is not satisfied, otherwise the optimization condition is considered satisfied.
  • In another embodiment, the similarity information may be the difference of the representation values of the pixels at each image position; if, for more than M pixels, the difference is less than a preset second similarity threshold, the optimization condition is considered not satisfied, otherwise it is considered satisfied (both checks are sketched below).
  • If the optimization condition is satisfied, the convolution kernel parameters included in each convolution layer of the intermediate layer need to be adjusted and optimized in reverse; after the adjustment, the first image is super-resolved again by the adjusted initial model, and if the similarity information between the resulting target image and the second image does not satisfy the optimization condition, the next first image is acquired and the preliminarily optimized initial model is called for super-resolution processing; if the optimization condition is still satisfied, the convolution kernel parameters of the convolution kernels in the convolution layers continue to be adjusted and optimized, and super-resolution processing is performed again. By super-resolution processing a large number of first images and optimizing the initial model in this way, an image processing model whose convolution kernel parameters have been trained and optimized is finally obtained.
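  • The two condition checks described above can be sketched as follows (threshold values, M, and array types are placeholders; a return value of True means further optimization is needed):

```python
import numpy as np

def condition_by_average(target, second, first_thresh):
    """Embodiment 1: average difference of representation values over all
    pixels; an average below the threshold means no optimization is needed."""
    diff = np.abs(target.astype(float) - second.astype(float))
    return diff.mean() >= first_thresh       # True -> keep optimizing

def condition_by_pixel_count(target, second, second_thresh, m):
    """Embodiment 2: if more than M per-pixel differences are below the
    threshold, the optimization condition is not satisfied."""
    diff = np.abs(target.astype(float) - second.astype(float))
    return (diff < second_thresh).sum() <= m  # True -> keep optimizing
```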
  • During training, the value of a target channel of the first image may be used as the training input data; for example, the Y channel values of the first image are input from the input layer as training input data, and after the convolution calculation described above, the result data output from the output layer includes the Y channel values of the N*N channels.
  • The result data is compared with the Y channel values of the second image; if the optimization condition is satisfied, the convolution kernel parameters of the convolution kernels in the convolution layers of the initial model are optimized, and otherwise the next image is acquired as the first image and its Y channel values are extracted (a training-step sketch follows).
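  • A hedged PyTorch sketch of one training step follows; the patent does not specify a loss function or optimizer, so mean-squared error over the Y channel values stands in for the comparison of representation values, and optimizer is any torch optimizer such as torch.optim.SGD:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, first_image, second_image, n=2):
    """One optimization pass: super-resolve the first image, compare with
    the second image, and adjust the convolution kernel parameters in reverse.

    first_image: (1, 1, H, W) Y channel values of the low-resolution image.
    second_image: (1, 1, n*H, n*W) Y channel values of the reference image.
    """
    result = model(first_image)                   # (1, n*n, H, W) channel data
    target = F.pixel_unshuffle(second_image, n)   # reference in the same layout
    loss = F.mse_loss(result, target)             # difference of values
    optimizer.zero_grad()
    loss.backward()                               # reverse adjustment
    optimizer.step()
    return loss.item()
```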
  • FIG. 6 is a schematic flowchart of a method for performing training optimization on an initial model by acquiring a first image and a second image according to an embodiment of the present invention.
  • The method is performed by a computing device, which is a server or smart terminal capable of providing video and/or image services, and includes the following steps. In S601, the computing device acquires high-resolution images, which serve as second images; in embodiments of the invention, these high-resolution images may be obtained from websites that provide high-definition images.
  • In S602, the computing device performs N-fold resolution reduction on each high-resolution image to obtain a low-resolution image, which serves as the first image.
  • In S603, the computing device establishes a training image library based on the obtained first images and second images.
  • In S604, the computing device acquires training images from the training image library and optimizes the initial model for performing image resolution processing based on the convolutional neural network, obtaining the image processing model (a data-preparation sketch follows).
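  • A minimal sketch of steps S601 to S603, assuming Pillow and bicubic downsampling (the patent only requires N-fold resolution reduction; the filter choice, file layout, and function name are illustrative):

```python
from pathlib import Path
from PIL import Image

def build_training_pairs(hr_dir: str, n: int = 2):
    """Create (first_image, second_image) pairs: the first image is the
    second (high-resolution) image after N-fold resolution reduction."""
    pairs = []
    for path in Path(hr_dir).glob("*.png"):
        second = Image.open(path).convert("YCbCr")   # Y channel training data
        w, h = second.size
        first = second.resize((w // n, h // n), Image.BICUBIC)
        pairs.append((first, second))
    return pairs
```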
  • the image processing model obtained after the training optimization is completed may be deployed to the smart terminal or the application server for super-resolution processing of the image or video.
  • The trained and optimized image processing model may be further processed to obtain a better image processing model.
  • The trained model (which may be referred to as the initial image processing model) may be further processed, mainly by performing model compression processing on it, so as to obtain a better-performing image processing model that can be deployed online.
  • Since the intermediate layer uses only convolution layers by design of the initial model, the preliminary image processing model can be compression-optimized by a special processing method: for example, multiple convolution layers can be compressed and merged into one convolution layer, which allows faster super-resolution processing after actual deployment.
  • For example, the 5 convolution layers shown in Table 1 can be compressed into 1 convolution layer, and the original five convolution calculations merged into one convolution calculation, which can greatly reduce the time for the super-resolution calculation.
  • The method of merging two convolution layers is as follows: denote the convolution layer parameters of the first convolution layer as w1, those of the second convolution layer as w2, and those of the merged convolution layer as w3.
  • The convolution layer parameters comprise the kernel height h, the kernel width w, the number of input channels, and the number of output channels; for example, w1 has kernel height h1, kernel width w1, c1 input channels, and c2 output channels.
  • The height and width of the merged kernels satisfy h3 = h1 + h2 - 1 (Equation 1) and w3 = w1 + w2 - 1 (Equation 2);
  • the number of input channels of the merged layer is the number of input channels of the first convolution layer, and the number of output channels is the number of output channels of the second convolution layer. That is, for the merged convolution layer (called the merged initial convolution layer) obtained from the first two layers of Table 1, the merged convolution kernel is 5*5*1*32.
  • The convolution kernels of the two convolution layers are merged by a calculation analogous to matrix multiplication.
  • Specifically, the convolution kernels in the first convolution layer and in the second convolution layer of the intermediate layer of the parameter-updated initial model are each represented as a convolution kernel matrix, and the merged convolution kernels are obtained by performing the matrix-multiplication-like merging calculation on these matrices.
  • The convolution kernel in the x-th row and y-th column of the merged kernel matrix is: the sum of the new convolution kernels obtained by convolving, one by one, each element of the x-th row of the kernel matrix corresponding to the first convolution layer with the corresponding element of the y-th column of the kernel matrix corresponding to the second convolution layer.
  • Specifically, the first convolution layer of the intermediate layer can be taken as the first operand: the number of input channels corresponding to its input data is called the starting channel number c1,
  • and the number of output channels corresponding to its output data is called the intermediate channel number c2;
  • the second convolution layer is taken as the second operand,
  • and the number of output channels corresponding to its output data is called the ending channel number c3.
  • The first and second convolution layers can each be represented by a convolution kernel matrix: the first convolution layer w1 can be regarded as a c1-by-c2 matrix A, the second convolution layer w2
  • as a c2-by-c3 matrix B,
  • and the merged initial convolution layer w3 of the two as a c1-by-c3 matrix C.
  • Each element of the three matrices is a convolution kernel.
  • The convolution merging calculation is similar to matrix multiplication: the two matrices are "multiplied" to obtain the merged matrix,
  • where the element in the i-th row and j-th column of the merged matrix is the sum of the new convolution kernels obtained by convolving, one by one, each element of the i-th row of the left matrix with the corresponding element of the j-th column of the right matrix. That is, if the convolution kernel in the x-th row and y-th column of the merged kernel matrix is C(x, y), then: C(x, y) = Σ_{k=1..c2} A(x, k) ∗ B(k, y) (Equation 3), where ∗ denotes two-dimensional convolution. A sketch of this merging calculation follows.
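  • A NumPy sketch of Equation 3, merging two stride-1 convolution layers into one; kernel sizes combine as k1 + k2 - 1 (Equations 1 and 2), so two 3x3 layers merge into one 5x5 layer, matching the 5*5*1*32 example above (array shapes and the function name are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

def merge_conv_layers(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Merge kernels a (c1, c2, k1, k1) and b (c2, c3, k2, k2) into
    c (c1, c3, k1+k2-1, k1+k2-1) per Equation 3:
    C(x, y) = sum over k of A(x, k) * B(k, y), where * is 2-D convolution
    ('full' mode, so the merged kernel grows to k1 + k2 - 1)."""
    c1, c2, k1, _ = a.shape
    _, c3, k2, _ = b.shape
    k = k1 + k2 - 1
    c = np.zeros((c1, c3, k, k))
    for x in range(c1):
        for y in range(c3):
            for m in range(c2):
                c[x, y] += convolve2d(a[x, m], b[m, y], mode="full")
    return c

# Example from the text: (3x3, 1->16) merged with (3x3, 16->32) -> 5x5, 1->32
w1 = np.random.rand(1, 16, 3, 3)
w2 = np.random.rand(16, 32, 3, 3)
print(merge_conv_layers(w1, w2).shape)  # (1, 32, 5, 5)
```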
  • In one embodiment, the merged initial convolution layer may be directly determined as the merged convolution layer.
  • When the intermediate layer includes more than two convolution layers, based on Equation 3 above, the merged initial convolution layer is taken as the new first convolution layer, and the next convolution layer (for example, the third convolution layer) and/or a merged initial convolution layer obtained by merging two other convolution layers of the intermediate layer is taken as the new second convolution layer; the above steps of calculating the merged convolution kernels and obtaining a new merged initial convolution layer are repeated until the merged convolution layer is obtained.
  • The size of the resulting merged convolution layer is the same as the width w and height h calculated, based on Equations 1 and 2 above, from the convolution layer parameters of the two or more convolution layers being merged.
  • After the convolution layer merging described above is completed, an image processing model including an input layer, an output layer, and a merged convolution layer is obtained.
  • In one embodiment, after the merging operation is performed according to the above method, the convolution kernels in the generated merged convolution layer may be separable convolution kernels.
  • For example, the four convolution kernels obtained after the merging are separable convolution kernels, so decomposition algorithms such as the Singular Value Decomposition (SVD) algorithm can be used:
  • each convolution kernel included in the single merged convolution layer is decomposed into one row and one column, so that the original single two-dimensional convolution is decomposed into two one-dimensional convolutions. After the single convolution layer is decomposed, the final image processing model is obtained, which can effectively improve the calculation speed of subsequent convolution calculations. A sketch of this decomposition follows.
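  • A hedged NumPy sketch of the SVD-based separation, assuming the merged kernel is exactly separable (rank 1); for nearly separable kernels the same code gives the best rank-1 approximation:

```python
import numpy as np

def separate_kernel(kernel: np.ndarray):
    """Split a separable 2-D kernel into a column and a row vector whose
    outer product reproduces the kernel, using SVD."""
    u, s, vt = np.linalg.svd(kernel)
    col = u[:, 0] * np.sqrt(s[0])        # one-dimensional column kernel
    row = vt[0, :] * np.sqrt(s[0])       # one-dimensional row kernel
    return col[:, None], row[None, :]

# Verify: convolving with col then row equals convolving with the kernel
k = np.outer([1.0, 2.0, 1.0], [1.0, 0.0, -1.0])  # separable 3x3 example
col, row = separate_kernel(k)
assert np.allclose(col @ row, k)
```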
  • In actual deployment, the image processing model can be deployed to an application server that provides services such as video playback or image presentation, or deployed in a smart terminal with a relevant video or image application installed.
  • In either case, super-resolution processing can be performed by the image processing model, so that high- or low-resolution images can be provided to the user as required.
  • The video frames or images are generally images that include YUV image channels, where Y represents luminance and U and V represent chrominance.
  • The image processing model can process the values on the Y image channel (that is, the Y channel values) of a video frame or single image requiring super-resolution processing, while the values on the U and V image channels (the U channel values and V channel values)
  • can be processed in other ways, for example by interpolation calculation, to obtain the U and V values of the pixels after super-resolution processing.
  • FIG. 7 is a schematic flowchart of a method for performing super-resolution processing on an image according to an embodiment of the present invention.
  • The method is performed by a computing device that is a server or smart terminal capable of providing video and/or image services.
  • In S701, the computing device acquires images to be subjected to super-resolution processing; these may be image frames of videos requiring increased resolution, images taken by a low-resolution camera device, and other low-resolution images.
  • An image below a predetermined resolution threshold, for example below 540P resolution, may be considered a low-resolution image
  • and can be used as an image to be processed for subsequent super-resolution processing.
  • Low resolution and high resolution are relative: when certain images require super-resolution processing, those images are considered low-resolution images.
  • In S702, the computing device performs data extraction and separation on the acquired image to be subjected to super-resolution processing, obtaining the Y channel values and the UV channel values respectively.
  • A non-YUV image may first be converted to an image in the YUV format to facilitate execution of S702.
  • In S703, the computing device calls the image processing model to process the Y channel values, and outputs the Y channel values after N-fold super-resolution processing.
  • In S704, the computing device processes the UV channel values with an interpolation algorithm to obtain the UV channel values after N-fold interpolation-based super-resolution processing.
  • The interpolation algorithm may be nearest-neighbor interpolation (the gray value of the transformed pixel equals the gray value of the nearest input pixel), bilinear interpolation, bicubic interpolation, Lanczos interpolation, or the like.
  • In S705, the computing device combines the Y channel values after N-fold super-resolution processing with the UV channel values after N-fold interpolation-based super-resolution processing, and rearranges them to obtain the super-resolution-processed target image (a sketch of this pipeline follows).
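  • A minimal sketch of the S701-S705 pipeline is given below, using OpenCV for the color-space conversion and chroma interpolation. The `model` callable stands in for the trained image processing model and is an assumption here: it is taken to map an HxW Y plane to an (n*H)x(n*W) Y plane of the same dtype.

```python
import cv2

def super_resolve_yuv(bgr_image, model, n=2):
    """Sketch of Fig. 7: the Y plane goes through the model, the U and V
    planes are enlarged by plain interpolation, then all are recombined."""
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)                      # S702: extract and separate
    h, w = y.shape
    y_sr = model(y)                               # S703: N-fold SR of luma
    u_sr = cv2.resize(u, (w * n, h * n),          # S704: N-fold interpolation
                      interpolation=cv2.INTER_CUBIC)
    v_sr = cv2.resize(v, (w * n, h * n),
                      interpolation=cv2.INTER_CUBIC)
    yuv_sr = cv2.merge([y_sr, u_sr, v_sr])        # S705: combine and rearrange
    return cv2.cvtColor(yuv_sr, cv2.COLOR_YUV2BGR)
```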
  • The embodiment of the present invention can perform accurate and comprehensive training optimization on a model whose intermediate layer is configured for convolution calculation, based on the two images before and after the resolution-reduction processing, finally obtaining an image processing model capable of N-fold super-resolution processing. Based on this image processing model, on the one hand, super-resolution calculation can be performed directly on the data of an image or video frame without searching for a mapping relationship between low-resolution and high-resolution images, so super-resolution processing of images or video frames can be carried out very quickly and its speed and efficiency are significantly improved; on the other hand, the super-resolution processing of images is also more accurate and stable.
  • FIG. 8 is a schematic flowchart of a method for optimizing an image processing model according to an embodiment of the present invention.
  • The method of this embodiment of the present invention may be performed by a computing device, which is a server or smart terminal capable of providing video and/or image services; the method mainly performs training optimization of an image processing model used for image super-resolution processing.
  • the method includes the following steps.
  • S801: Generate an initial model for performing image resolution processing based on a convolutional neural network, the initial model including an input layer, an output layer, and an intermediate layer.
  • the intermediate layer is a convolutional layer for convolution calculations.
  • The intermediate layer of the initial model may include only convolutional layers and no other layer structures such as activation layers; this facilitates faster and more accurate subsequent merging of the intermediate layer, in which multiple convolutional layers are merged into a single convolutional layer.
  • The intermediate layer may include multiple layers, and the structure of the initial model may be as shown in FIG. 4.
  • S802: Acquire training images, where the training images include a first image and a second image, and the first image is an image obtained by performing N-fold resolution reduction on the second image.
  • The training images can be various pictures or video frames from videos.
  • The second image is an image at the original resolution, typically a high-resolution image, for example a 1080P image, or even higher 2K or 4K images.
  • The first image can be obtained by performing resolution-reduction processing on the second image: after each high-resolution image is acquired, it can be interpolated to reduce its resolution.
  • the first image and the second image are in one-to-one correspondence.
  • The acquired training images may include tens of thousands, hundreds of thousands, or even more high-resolution images (i.e., second images).
  • The first image is input into the initial model for convolution calculation, which can be regarded as the process of super-resolution processing of the first image; the second image is used to verify the high-resolution target image corresponding to the result data output by the initial model's convolution calculation on the first image, so as to determine whether the convolution kernel parameters in the initial model require training optimization (a sketch of building such training pairs follows).
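  • Building such training pairs only requires downsampling the collected high-resolution images. Below is a sketch under the description above; bilinear downsampling is one of the interpolation options mentioned in the background, and the function name is illustrative.

```python
import cv2

def make_training_pair(second_image, n=2):
    """The first image is obtained by N-fold resolution reduction of the
    high-resolution second image; the pair then serves as one training
    sample (first image in, second image as the reference standard)."""
    h, w = second_image.shape[:2]
    first_image = cv2.resize(second_image, (w // n, h // n),
                             interpolation=cv2.INTER_LINEAR)  # bilinear
    return first_image, second_image
```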
  • S803: Input the image data of the first image from the input layer to the intermediate layer for convolution calculation, and obtain the result data of the convolution calculation from the output layer, where the result data includes channel output data of N*N channels and N is a positive integer greater than or equal to 2. If the image processing model obtained after training optimization is to increase the resolution of an image by a factor of N, result data comprising the channel output data of N*N channels needs to be obtained through the convolution calculation, and the value at each position of each of the N*N channels' output data is the representation value of the pixel at the corresponding position of the super-resolution image.
  • The target image corresponding to the result data refers to an image obtained by combining the respective values of the channel output data included in the result data, and each value in each channel's output data of the result data indicates the representation value of one pixel of the target image.
  • the image data input by the input layer of the image processing model is a two-dimensional matrix.
  • If the first image has a resolution of M*M (for example, 960*540), the input image data is a two-dimensional matrix of the same size, whose values correspond one-to-one to the representation values of the pixels of the first image; that is, the value in the first row and first column of the matrix is the representation value (such as the brightness value, the luminance value, or the Y channel value) of the pixel in the first row and first column of the first image, and so on. When the representation values are R, G and B values, the image data input at the input layer may be one, two, or three two-dimensional matrices, that is, composed of the corresponding R, G and/or B values.
  • S804: Perform parameter update on the convolution kernel parameters in the intermediate layer of the initial model according to the result data and the second image, and generate an image processing model according to the initial model after the parameter update.
  • the training of the initial model can be completed by completing the training optimization of the convolution kernel parameters of the respective convolution layers in the intermediate layer. Specifically, it may be determined whether to optimize the convolution kernel parameters in the intermediate layer based on the difference between the respective values in the result data and the representation values of the pixel points in the second image.
  • Based on the result data, a super-resolution-processed target image can be rearranged and generated according to the original pixel positions and the values at the corresponding positions of each channel's output data; the representation values of the pixels at the same image positions of the super-resolution-processed target image and of the second image are then compared, the difference values of the representation values are determined, and the correctness of the intermediate layer's convolution calculation is thereby judged.
  • Alternatively, the super-resolution-processed image need not be generated from the result data: based on the channel output data of each output channel, the value at each data position can be compared directly with the representation value of the pixel at the corresponding position of the second image to determine the difference between the two. For example, the value at the top-left vertex of one channel's output data (say, the second channel in the 2-fold case) is compared with the representation value of the second pixel of the first row of the second image to determine the difference between the two values.
  • The output data of each channel in the result data is a two-dimensional numerical matrix, and the positions and number of the values in the result data correspond to the positions and number of the pixels of the first image. For example, if the first image is 800*600, the channel output data of each channel is also an 800*600 matrix, and the value at each position of the matrix is calculated by convolution from the representation value of the pixel at the corresponding position of the first image. For instance, the value in the first row and second column of an output channel's data corresponds to the pixel in the first row and second column of the first image, and is calculated by convolution from that pixel's representation value.
  • If the difference values between the representation values corresponding to the result data and the representation values of the pixels at the corresponding positions of the second image do not satisfy the optimization condition, the current image processing model is considered able to perform correct super-resolution processing on the first image, and no update of the convolution kernel parameters is required. Not satisfying the optimization condition may mean that there are P difference values indicating that the two representation values are the same, or that the difference is within a preset difference threshold, where P is a positive integer; for example, the pixel value at the top-left vertex position of the super-resolution-processed image is the same as the corresponding value in the second image, or their difference is within a preset difference threshold (for example, a threshold of 5). Conversely, if the difference values between the representation values and those of the pixels at the corresponding positions of the second image satisfy the optimization condition, the current model is considered unable to perform correct super-resolution processing on the first image, and the convolution kernel parameters need to be updated.
  • The convolution kernel in a convolutional layer is also a two-dimensional matrix, such as the 3*3 matrix mentioned above, and each value in the matrix is called a convolution kernel parameter. The aim of the update is that, in the target image corresponding to the result data calculated and output with the updated convolution kernels, the number of pixel positions at which the difference between the pixel's representation value and the representation value of the pixel at the corresponding position of the second image does not satisfy the optimization condition can exceed a preset first quantity.
  • S804 may specifically include: determining the difference values of the representation values of pixels at the same image positions between the target image and the second image, the target image being determined according to the result data; updating the convolution kernel parameters in the intermediate layer of the initial model according to the difference values of the representation values; and generating an image processing model according to the initial model after the parameter update.
  • In one embodiment, the image data of the first image refers to the Y channel values extracted from the first image; each value in each channel's output data of the result data represents the Y channel value of one pixel of the target image; and the difference value of the representation values means the difference between the Y channel values of the pixels at the same image position of the target image corresponding to the result data and of the second image, for example the arithmetic difference of the Y channel values. The difference value of the representation values is mainly used to reflect the change between a pixel at a certain position of the target image corresponding to the result data and the pixel at the same position of the second image, for example a Y channel value difference, a gray value difference, a brightness value difference, and so on.
  • In another embodiment, the image data of the first image refers to the Y channel values extracted from the first image, and the target image corresponding to the result data is obtained by combining Y channel image data, U channel image data, and V channel image data, where the Y channel image data of the target image is obtained according to the result data, the U channel image data is obtained by interpolating the U channel values extracted from the first image, and the V channel image data is obtained by interpolating the V channel values extracted from the first image; each value in each channel's output data of the result data represents the Y channel value of one pixel of the target image. The convolution kernel parameters in the intermediate layer of the initial model are updated according to the difference values of the representation values, completing the optimization of the convolution kernel parameters.
  • the parameter update process for the convolution kernel parameter is also an optimization process for the convolution kernel parameter.
  • The optimization process mainly refers to optimizing the values in the two-dimensional matrix corresponding to each convolution kernel so that the calculated difference values of the representation values no longer satisfy the optimization condition. Once the difference values do not satisfy the optimization condition, the current image processing model is already able to perform sufficiently accurate super-resolution processing on the first image, and no further optimization is needed (a minimal sketch of this update loop follows).
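  • The sketch below shows this update loop in PyTorch, assuming `model` is the initial model (input layer, convolutional intermediate layer, output layer) operating on Y planes and `loader` yields (first image, second image) Y-plane pairs; mean squared error is used as one possible aggregate of the difference values, and the early-exit threshold is illustrative.

```python
import torch
import torch.nn.functional as F

n = 2  # super-resolution factor, so the model outputs n*n = 4 channels
criterion = torch.nn.MSELoss()  # one way to aggregate the difference values
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for first_y, second_y in loader:         # shapes (B,1,H,W) and (B,1,n*H,n*W)
    result = model(first_y)              # result data: (B, n*n, H, W)
    # pixel_shuffle sends channel k to sub-position (k // n, k % n) of each
    # n x n output block, i.e. the preset arrangement order described later.
    target = F.pixel_shuffle(result, n)  # super-resolved Y image
    loss = criterion(target, second_y)
    if loss.item() < 1e-4:               # illustrative optimization condition:
        continue                         # difference small enough, skip update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```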
  • FIG. 9 is a schematic flowchart of a method for generating an image processing model according to an optimized initial model according to an embodiment of the present invention.
  • the method in the embodiment of the present invention is performed by a computing device, where the computing device is capable of providing video and/or Or a server or smart terminal of an image service, the method may include the following steps.
  • S901: The computing device performs convolutional layer merging processing on at least two convolutional layers of the intermediate layer in the initial model after the parameter update, to obtain a merged convolutional layer.
  • The intermediate layer includes at least a first convolutional layer and a second convolutional layer. Performing convolutional layer merging processing on at least two convolutional layers of the intermediate layer in the parameter-updated initial model to obtain a merged convolutional layer includes: merging the first convolutional layer and the second convolutional layer included in the intermediate layer of the parameter-updated initial model, where the merging process includes calculating the merged convolutional layer parameter and calculating the merged convolution kernels.
  • The merged convolutional layer parameter mainly indicates the size of the convolution kernel; for example, for the 5-layer convolution structure described in Table 1, the finally calculated merged convolutional layer parameter is 11*11, indicating that the size of the convolution kernels in the resulting merged convolutional layer is 11*11.
  • The calculated merged convolution kernels constitute all the convolution kernels in the merged convolutional layer.
  • In one embodiment, performing convolutional layer merging processing on at least two convolutional layers of the intermediate layer in the parameter-updated initial model to obtain a merged convolutional layer may include: calculating a merged convolutional layer parameter according to the convolutional layer parameter of the first convolutional layer and the convolutional layer parameter of the second convolutional layer included in the intermediate layer of the parameter-updated initial model; determining a merged initial convolutional layer in which the size of the convolution kernels is the same as the value indicated by the merged convolutional layer parameter; and obtaining the merged convolutional layer according to the merged initial convolutional layer.
  • The merged initial convolutional layer may directly serve as the merged convolutional layer. If the intermediate layer further includes other convolutional layers, the merged initial convolutional layer may instead be taken as a new first convolutional layer, with the next convolutional layer (for example, the third convolutional layer), and/or a merged initial convolutional layer obtained by merging two other convolutional layers of the intermediate layer, taken as a new second convolutional layer; the above steps of calculating the merged convolutional layer parameter and obtaining the merged initial convolutional layer are repeated to obtain the final merged convolutional layer.
  • In this way the merged convolutional layer parameter, that is, the size of the merged convolutional layer, can be determined. A method of determining the merged convolution kernels is also provided.
  • In another embodiment, performing convolutional layer merging processing on the at least two convolutional layers of the intermediate layer in the parameter-updated initial model to obtain a merged convolutional layer may include: representing the convolution kernels of the first convolutional layer and the convolution kernels of the second convolutional layer included in the intermediate layer of the parameter-updated initial model as convolution kernel matrices, respectively, and performing a convolution-based combination calculation on the convolution kernel matrices in the manner of matrix multiplication to obtain the merged convolution kernels; the merged convolution kernels constitute all the convolution kernels of a merged initial convolutional layer, and the merged convolutional layer is obtained according to the merged initial convolutional layer. The convolution kernel in the x-th row and y-th column of the merged convolution kernels is the kernel obtained by summing the new convolution kernels produced by convolving each element of the x-th row of the convolution kernel matrix corresponding to the first convolutional layer with the corresponding element of the y-th column of the convolution kernel matrix corresponding to the second convolutional layer; here, each element of a convolution kernel matrix is one convolution kernel of the first or second convolutional layer.
  • Likewise, the merged initial convolutional layer may directly serve as the merged convolutional layer; if the intermediate layer further includes other convolutional layers, the merged initial convolutional layer may be taken as a new first convolutional layer, the next convolutional layer (for example, the third convolutional layer), and/or a merged initial convolutional layer obtained by merging two other convolutional layers of the intermediate layer, taken as a new second convolutional layer, and the above steps of calculating the merged convolution kernels and obtaining the merged initial convolutional layer repeated to obtain the final merged convolutional layer (a sketch of this kernel-composition calculation follows).
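  • The kernel-composition rule above can be sketched as follows, assuming stride-1 layers with no activation between them. Each layer is treated as a matrix of 2D kernels (output channels x input channels), and "matrix multiplication" is carried out with full 2D convolution in place of scalar multiplication and kernel summation in place of scalar addition; this composition rule holds both for true convolution and for the cross-correlation used by deep learning frameworks. The function name is illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def merge_conv_layers(k1, k2):
    """Merge two stacked convolution layers into one.
    k1: first layer's kernels, shape (c1_out, c_in, h1, w1)
    k2: second layer's kernels, shape (c2_out, c1_out, h2, w2)
    Returns merged kernels, shape (c2_out, c_in, h1+h2-1, w1+w2-1)."""
    c2_out, c1_out, h2, w2 = k2.shape
    _, c_in, h1, w1 = k1.shape
    merged = np.zeros((c2_out, c_in, h1 + h2 - 1, w1 + w2 - 1))
    for o in range(c2_out):
        for i in range(c_in):
            for m in range(c1_out):   # sum over the shared channel index
                merged[o, i] += convolve2d(k2[o, m], k1[m, i], mode="full")
    return merged

# Merging five 3x3 layers pairwise in this way ends with a single 11x11
# layer, matching the merged size given for the Table 1 structure.
```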
  • S902: The computing device generates an image processing model according to the input layer, the output layer, and the merged convolutional layer.
  • An image processing model including the input layer, the output layer, and the merged convolution layer may be directly generated.
  • The merged convolutional layer may also be decomposed further; in that case S902 may specifically include: the computing device performs decomposition processing on the convolution kernel parameters in the merged convolutional layer, decomposing them into a row of parameters and a column of parameters, and generates an image processing model according to the input layer, the output layer, and the row and column parameters obtained by the decomposition. This transforms each two-dimensional convolution calculation into two one-dimensional convolution calculations, further improving computational efficiency.
  • each of the convolution kernels of the merged convolutional layer may be subjected to the decomposition process described above.
  • The embodiment of the present invention can perform training optimization on a model whose intermediate layer is configured for convolution calculation, based on the two images before and after the resolution-reduction processing, finally obtaining an image processing model capable of N-fold super-resolution processing. Based on the image processing model generated by this special structure and training method, super-resolution processing of images or video frames can be carried out very quickly, the speed and efficiency of super-resolution processing are clearly improved, and the model's super-resolution processing of images is also more accurate and stable.
  • FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • The method of this embodiment of the present invention may also be performed by a computing device, which is a server or smart terminal capable of providing image or video services; the method includes the following steps.
  • S1001: Acquire target image data of an image to be processed. The target image data may correspond to a single image that needs super-resolution processing, or to a video frame, in a video to be played for a user, that needs super-resolution processing. The target image data may refer to the representation values corresponding to the pixels at each position of the image to be processed.
  • Needing super-resolution processing may mean that the user has selected a corresponding super-resolution function on a user interface for displaying images. For example, a super-clear display button is provided on the user interface; when the button is clicked, the resolution associated with the super-clear display button serves as a resolution threshold, and if the resolution of an image to be played is lower than this threshold during image playback, that image is taken as the image to be processed in order to obtain its target image data.
  • In one embodiment, the method may further include: if an image to be processed is received, determining the resolution of the image to be processed; and if the determined resolution is lower than a resolution threshold, performing the step of acquiring the target image data of the image to be processed, where the resolution threshold is configured and determined on a user interface for displaying images.
  • the user interface is an interface that can browse an image, or an interface that plays a video.
  • the image to be processed may be an image waiting to be displayed, or may be a video frame waiting to be played in the video.
  • In a scenario where images are transmitted between two ends, the resolution of the transmitted data can be determined according to data transmission resource information between the ends, such as available bandwidth resources and transmission rate. If the data transmission resources are sufficient, the sender can super-resolve a low-resolution image (one below the resolution threshold) and transmit the result to the image receiving end; if the data transmission resources are insufficient, the image below the resolution threshold is transmitted directly, and a high-resolution image may even be subjected to resolution reduction before being sent to the image receiving end.
  • Specifically, data transmission resource information between the transmitting end and the image receiving end may be acquired; if the data transmission resource information satisfies the restriction condition, the image to be processed is sent to the image receiving end as-is; if it does not satisfy the restriction condition, S1001 is executed, so that the generated target image with N-fold resolution corresponding to the image to be processed is sent to the image receiving end. The data transmission resource information satisfying the restriction condition includes: the amount of bandwidth resources being lower than a preset bandwidth threshold, and/or the data transmission rate being lower than a preset rate threshold (an illustrative version of this decision appears below).
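  • An illustrative version of this sender-side decision is sketched below; all names and thresholds are assumptions rather than the patent's API.

```python
def choose_payload(image, bandwidth, rate, bw_threshold, rate_threshold,
                   super_resolve):
    """If transmission resources are constrained (restriction condition
    satisfied), send the low-resolution image as-is; otherwise run
    super-resolution (S1001 onward) and send the N-fold target image."""
    restricted = bandwidth < bw_threshold or rate < rate_threshold
    return image if restricted else super_resolve(image)
```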
  • A corresponding button may also be provided in a video player or other video playback application; after the user clicks the button, the video player or other video playback application first processes each video frame waiting to be played as an image to be processed, so as to play higher-resolution video to the user.
  • In one embodiment, before the acquiring of the target image data of the image to be processed, the method further includes: if a video play request is received, determining the target video requested by the video play request; and if the definition of the target video's video frames is lower than a preset video play definition threshold, taking the video frames of the target video in playback time order as the images to be processed and performing the step of acquiring the target image data of the image to be processed, so as to output a target image with N-fold resolution for each one.
  • The image processing model is a preset model capable of N-fold super-resolution processing; it includes an input layer, an output layer, and an intermediate layer, and the convolution kernel parameters in the intermediate layer are determined after parameter updating based on training images. The training images include a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image. The intermediate layer in the image processing model is obtained by updating the convolution kernel parameters in the intermediate layer according to the second image and result data, where the result data is the channel output data of N*N channels obtained by the pre-update intermediate layer performing convolution calculation on the image data of the first image.
  • The image processing model may be a model selected, according to the super-resolution multiplier requested for the image to be processed, from a preset image processing model set as the model corresponding to that multiplier; the image processing model set is preset with a plurality of image processing models capable of providing different multiples of super-resolution processing.
  • The method may further include: determining whether the image data of the obtained image to be processed consists of RGB values, and if so, acquiring the RGB values as the target image data and selecting three image processing models. In this case S1002 includes: processing the RGB values with the three image processing models to obtain three sets of N*N-channel result data corresponding to the R, G, and B values, where the first of the three image processing models processes the R values, the second processes the G values, and the third processes the B values. If the target image data is instead the Y channel values, S1002 includes: processing the Y channel values with one image processing model to obtain the result data of N*N channels.
  • For HSV data, the value (V) can likewise be extracted as the target image data; for HSL data, the lightness (L) can be extracted as the target image data.
  • The image data input at the input layer of the image processing model is a two-dimensional matrix. If the image to be processed has a resolution of M*M (such as 960*540), the two-dimensional matrix is also M*M (such as 960*540), and its values correspond to the representation values of the pixels of the image to be processed: the value in the first row and first column of the matrix is the representation value of the pixel in the first row and first column of the image to be processed (such as the brightness value, the luminance value, or the Y channel value), the value in the first row and second column corresponds to the representation value of the pixel in the first row and second column, and so on. When the representation values are R, G and B values, the image data input at the input layer may be one, two, or three two-dimensional matrices composed of the corresponding R, G and/or B values.
  • In the result data, each value is the representation value of one pixel, and the values of the result data are arranged in a preset order to obtain the super-resolution-processed image. The preset order is an arrangement order specified for each channel's data. For example, for a 2-fold image processing model, the value in the first row and first column of the first channel is used as the representation value of the pixel in the first row and first column of the super-resolution image; the value in the first row and first column of the second channel as the representation value of the pixel in the first row and second column; the value in the first row and first column of the third channel as the representation value of the pixel in the second row and first column; and the value in the first row and first column of the fourth channel as the representation value of the pixel in the second row and second column. In other words, N-fold super-resolution processing by the image processing model turns one pixel of the original image into N*N pixels; the values of the channels in the result data are arranged in order to form the representation values of these N*N pixels, and the final arrangement yields the super-resolution-processed image (see the rearrangement sketch below).
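  • The rearrangement described above is the familiar depth-to-space operation; a minimal NumPy sketch follows (the function name is illustrative).

```python
import numpy as np

def rearrange_result_data(result, n):
    """Turn N*N-channel result data of shape (n*n, H, W) into a single
    (n*H, n*W) plane: channel k supplies the pixel at sub-position
    (k // n, k % n) inside each n x n output block, which is the preset
    order described above."""
    c, h, w = result.shape
    assert c == n * n
    out = np.empty((h * n, w * n), dtype=result.dtype)
    for k in range(c):
        out[k // n::n, k % n::n] = result[k]
    return out
```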
  • The target image data may be the Y channel values of the image or video frame, in which case the subsequent steps perform super-resolution processing on the image's Y channel values, while the image's U channel values and V channel values may be handled by other super-resolution processing steps, for example interpolation-related steps. That is, in this embodiment of the present invention, the target image data includes the Y channel values extracted from the image to be processed, and each value in the result data represents the Y channel value of one pixel of the target image. Generating, according to the result data, the target image with N-fold resolution corresponding to the image to be processed then includes: combining the values in the result data of the N*N channels to obtain Y channel image data; performing interpolation calculation on the U channel values extracted from the image to be processed to obtain U channel image data; performing interpolation calculation on the V channel values extracted from the image to be processed to obtain V channel image data; and combining the Y channel image data, the U channel image data, and the V channel image data to obtain the corresponding target image with N-fold resolution.
  • The time taken to perform super-resolution processing with the image processing model of this embodiment of the present invention was compared with the time taken by existing processing based on the mapping relationship between low-resolution and high-resolution images. The test environment was: an Intel E5 v4 2.40 GHz 16-core CPU (one processor), 10 GB of memory, and no graphics processing unit (GPU).
  • FIG. 11 is a schematic structural diagram of an apparatus for optimizing an image processing model according to an embodiment of the present invention.
  • The apparatus according to this embodiment of the present invention may be disposed in a server or smart terminal capable of providing various image services, and includes the following structure.
  • a generating module 1101 configured to generate an initial model for performing image resolution processing based on a convolutional neural network, where the initial model includes an input layer, an output layer, and an intermediate layer;
  • the obtaining module 1102 is configured to acquire a training image, where the training image includes a first image and a second image, where the first image is an image obtained by performing N-fold down-resolution processing on the second image;
  • a calculation module 1103, configured to input the image data of the first image from the input layer to the intermediate layer for convolution calculation, and obtain the result data of the convolution calculation from the output layer, where the result data includes channel output data of N*N channels and N is a positive integer greater than or equal to 2;
  • the processing module 1104 is configured to perform parameter update on the convolution kernel parameter in the middle layer of the initial model according to the result data and the second image, and generate an image processing model according to the initial model after the parameter update.
  • In one embodiment, when performing parameter update on the convolution kernel parameters in the intermediate layer of the initial model according to the result data and the second image, the processing module 1104 is configured to determine the difference values of the representation values of pixels at the same image positions between the target image and the second image, the target image being an image determined according to the result data, and to perform parameter update on the convolution kernel parameters in the intermediate layer of the initial model according to the difference values of the representation values.
  • In one embodiment, the intermediate layer includes at least two convolutional layers, and the convolutional layers are directly connected; when generating an image processing model according to the parameter-updated initial model, the processing module 1104 is configured to perform convolutional layer merging processing on at least two convolutional layers of the intermediate layer in the parameter-updated initial model to obtain a merged convolutional layer, and to generate an image processing model according to the input layer, the output layer, and the merged convolutional layer.
  • In one embodiment, the intermediate layer includes a first convolutional layer and a second convolutional layer; when performing convolutional layer merging processing on at least two convolutional layers of the intermediate layer in the parameter-updated initial model, the processing module 1104 is configured to calculate a merged convolutional layer parameter according to the convolutional layer parameter of the first convolutional layer and the convolutional layer parameter of the second convolutional layer included in the intermediate layer of the parameter-updated initial model, determine a merged initial convolutional layer in which the size of the convolution kernels is the same as the value indicated by the merged convolutional layer parameter, and obtain the merged convolutional layer according to the merged initial convolutional layer.
  • In another embodiment, the intermediate layer includes a first convolutional layer and a second convolutional layer; when performing convolutional layer merging processing on at least two convolutional layers of the intermediate layer in the parameter-updated initial model, the processing module 1104 is configured to represent the convolution kernels of the first convolutional layer and the convolution kernels of the second convolutional layer included in the intermediate layer of the parameter-updated initial model as convolution kernel matrices, respectively; perform a convolution-based combination calculation on the convolution kernel matrices in the manner of matrix multiplication to obtain the merged convolution kernels; obtain a merged initial convolutional layer according to the calculated merged convolution kernels; and obtain the merged convolutional layer according to the merged initial convolutional layer. The convolution kernel in the x-th row and y-th column of the merged convolution kernels is the kernel obtained by summing the new convolution kernels produced by convolving each element of the x-th row of the convolution kernel matrix corresponding to the first convolutional layer with the corresponding element of the y-th column of the convolution kernel matrix corresponding to the second convolutional layer; representing kernels as a convolution kernel matrix means that each element of the convolution kernel matrix corresponds to one convolution kernel of the first convolutional layer or of the second convolutional layer.
  • In one embodiment, when generating an image processing model according to the input layer, the output layer, and the merged convolutional layer, the processing module 1104 is configured to decompose the convolution kernel parameters in the merged convolutional layer into a row of parameters and a column of parameters, and to generate an image processing model according to the input layer, the output layer, and the row of parameters and column of parameters obtained by the decomposition.
  • In one embodiment, the target image refers to an image obtained by combining the respective values of the channel output data included in the result data, and each value in each channel's output data of the result data represents the representation value of one pixel of the target image.
  • In one embodiment, the image data of the first image refers to the Y channel values extracted from the first image, and the target image corresponding to the result data is obtained by combining Y channel image data, U channel image data, and V channel image data, where the Y channel image data of the target image is obtained according to the result data, the U channel image data is obtained by interpolating the U channel values extracted from the first image, and the V channel image data is obtained by interpolating the V channel values extracted from the first image; each value in each channel's output data of the result data represents the Y channel value of one pixel of the target image.
  • In one embodiment, the image data of the first image refers to the Y channel values extracted from the first image; each value in each channel's output data of the result data represents the Y channel value of one pixel of the target image; and the difference value of the representation values means the difference between the Y channel values of the pixels at the same image position of the target image corresponding to the result data and of the second image.
  • The embodiment of the present invention can perform training optimization on a model whose intermediate layer is configured for convolution calculation, based on the two images before and after the resolution-reduction processing, finally obtaining an image processing model capable of N-fold super-resolution processing; this training method enables the image processing model to perform super-resolution processing with high quality and accuracy.
  • FIG. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the apparatus may be disposed in a server or an intelligent terminal capable of providing various image services, and the apparatus may include the following structure.
  • An obtaining module 1201, configured to acquire target image data of an image to be processed
  • the processing module 1202 is configured to process the target image data by using an image processing model to obtain result data of channel output data including an N*N channel, where N is a positive integer greater than or equal to 2;
  • a generating module 1203, configured to generate, according to the result data, a target image that is increased in N times resolution corresponding to the image to be processed;
  • The image processing model includes an input layer, an output layer, and an intermediate layer, where the convolution kernel parameters in the intermediate layer are determined after parameter updating based on training images; the training images include a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image. The intermediate layer in the image processing model is obtained by updating the convolution kernel parameters in the intermediate layer according to the second image and result data, where the result data is the channel output data of N*N channels obtained by the pre-update intermediate layer performing convolution calculation on the image data of the first image.
  • In one embodiment, the target image data includes the Y channel values extracted from the image to be processed, and each value in the result data represents the Y channel value of one pixel of the target image;
  • when generating, according to the result data, the target image with N-fold resolution corresponding to the image to be processed, the generating module 1203 is configured to combine the values in the result data of the N*N channels to obtain Y channel image data, obtain U channel image data and V channel image data by interpolating the U and V channel values extracted from the image to be processed, and combine the Y channel image data, the U channel image data, and the V channel image data to obtain the corresponding target image with N-fold resolution.
  • In one embodiment, the apparatus may further include a monitoring module 1204, configured to determine the resolution of an image to be processed if the image to be processed is received, and, if the determined resolution is lower than a resolution threshold, trigger the obtaining module 1201 to perform the step of acquiring the target image data of the image to be processed, where the resolution threshold is configured and determined on a user interface for displaying images.
  • the apparatus may further include a monitoring module 1204 and a transmitting module 1205.
  • The monitoring module 1204 is configured to acquire data transmission resource information with respect to the image receiving end; if the data transmission resource information satisfies the restriction condition, trigger the sending module 1205 to send the image to be processed to the image receiving end; and if the data transmission resource information does not satisfy the restriction condition, trigger the obtaining module 1201 to acquire the target image data of the image to be processed, so that the generated target image with N-fold resolution corresponding to the image to be processed is sent to the image receiving end. The data transmission resource information satisfying the restriction condition includes: the amount of bandwidth resources being lower than a preset bandwidth threshold, and/or the data transmission rate being lower than a preset rate threshold.
  • In one embodiment, the apparatus may further include a monitoring module 1204, configured to: if a video play request is received, determine the target video requested by the video play request; and if the definition of the target video's video frames is lower than a preset video play definition threshold, take the video frames of the target video in playback time order as the images to be processed and trigger the obtaining module 1201 to perform the step of acquiring the target image data of each image to be processed, so as to output a target image with N-fold resolution corresponding to each image to be processed.
  • This embodiment of the invention is based on an image processing model generated by a special structure and training method; on the one hand, super-resolution processing of images (such as various pictures and video frames) can be carried out very quickly, with clearly improved speed and efficiency; on the other hand, the super-resolution processing of images based on this image processing model is also more accurate and stable.
  • As needed, the image processing device may include a power supply module, a housing, and various required data interfaces 1302, a network interface 1304, and the like; the image processing device further includes a processor 1301 and a storage device 1303.
  • The image processing device can interact with a server or a smart terminal. Through the data interface 1302, various high-resolution images can be received from a server or an administrative user to serve as second images in the training images, and through the network interface 1304, more high-resolution images can be obtained over the network to serve as second images as well; the image processing device stores these training images in the storage device 1303 to facilitate training the initial model into an image processing model.
  • The storage device 1303 may include a volatile memory, such as a random-access memory (RAM); the storage device 1303 may also include a non-volatile memory, such as a flash memory, a solid-state drive (SSD), or the like; the storage device 1303 may further include a combination of the above types of memory.
  • the processor 1301 may be a central processing unit (CPU).
  • the processor 1301 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like.
  • the PLD may be a field-programmable gate array (FPGA), a general array logic (GAL), or the like.
  • the storage device 1303 stores computer program instructions, and the processor 1301 can invoke the computer program instructions for implementing the related method steps of the above-mentioned training optimization of the image processing model.
  • In one embodiment, the storage device 1303 is configured to store computer program instructions, and the processor 1301 invokes the computer program instructions stored in the storage device 1303 to perform the steps of the image processing model optimization method shown in FIG. 8.
  • The embodiment of the present invention can perform training optimization on a model whose intermediate layer is configured for convolution calculation, based on the two images before and after the resolution-reduction processing, finally obtaining an image processing model capable of N-fold super-resolution processing; this training method enables the image processing model to perform super-resolution processing with high quality and accuracy.
  • As needed, the image processing apparatus may include a power supply module, a housing, and various required data interfaces 1403, a network interface 1404, and the like; the image processing apparatus further includes a processor 1401 and a storage device 1402.
  • The image processing device may access the network as a server to connect with smart terminals, or access the network as a smart terminal to connect with a server.
  • When the image processing device is used as a server, various low-resolution images or videos, or images or videos obtained by super-resolution processing of low-resolution ones, can be sent through the network interface 1404, as needed, to the smart terminal that has the playback demand.
  • the data interface 1403 can receive various types of picture data or video data transmitted by the image management user or other servers, and store them in the storage device 1402 for use.
  • When the image processing device is used as a smart terminal, it can access, through the network interface 1404, an application server that provides various image or video services, thereby acquiring low-resolution images or videos, or the high-resolution images or videos obtained by super-resolution processing of them; it can also receive, through the data interface 1403, low-resolution images or videos transmitted by a user or another smart terminal, or the high-resolution images or videos obtained after super-resolution processing.
  • the image processing apparatus may further include a user interface 1405, such as a touch screen, physical keys, voice input, image input, etc., in order to receive some of the user's operations and present the video or image to the user.
  • The storage device 1402 may include a volatile memory, such as a RAM; it may also include a non-volatile memory, such as an SSD; and it may also include a combination of the above types of memory.
  • the processor 1401 may be a CPU.
  • the processor 1401 may further include a hardware chip.
  • the above hardware chip may be an ASIC, a PLD or the like.
  • the above PLD may be an FPGA, a GAL or the like.
  • the storage device 1402 stores computer program instructions, and the processor 1401 can call the computer program instructions to implement the method steps of processing the image based on the image processing model mentioned above.
  • In one embodiment, the storage device 1402 is configured to store computer program instructions, and the processor 1401 invokes the computer program instructions stored in the storage device 1402 to perform the steps of the image processing method shown in FIG. 10.
  • This embodiment of the invention is based on an image processing model generated by a special structure and training method; on the one hand, super-resolution processing of images (such as various pictures and video frames) can be carried out very quickly, with clearly improved speed and efficiency; on the other hand, the super-resolution processing of images based on this image processing model is also more accurate and stable.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Abstract

An embodiment of the present invention discloses an image-related processing method and apparatus, a device, and a storage medium. The method includes: generating, based on a convolutional neural network, an initial model for performing image resolution processing; acquiring training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image; inputting the image data of the first image from the input layer into the intermediate layer of the initial model for convolution calculation, and obtaining the result data of the convolution calculation from the output layer, where the result data output after the convolution calculation includes channel output data of N*N channels; and performing parameter update on the convolution kernel parameters in the intermediate layer according to the result data and the second image, and generating an image processing model according to the initial model after the parameter update. With this embodiment of the present invention, a better-trained and better-optimized image processing model can be obtained, so that super-resolution processing of images can be performed accurately based on the image processing model.

Description

Image-related processing method and apparatus, device, and storage medium
This application claims priority to Chinese Patent Application No. 201810285137.4, filed with the Chinese Patent Office on April 2, 2018 and entitled "Image-related processing method and apparatus, smart terminal, server, and storage medium", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of image processing technologies, and in particular to an image-related processing method and apparatus, a device, and a storage medium.
Background of the Invention
With the development of network and electronic technologies, people can obtain all kinds of information over the network from various servers and other clients, including images and videos. Based on applications installed on a terminal, users can watch videos and browse images at any time.
To obtain a better video or image viewing experience, people wish to watch high-definition videos and images. To provide high-resolution images to users faster and more promptly, some image super-resolution processing methods have been proposed: by performing super-resolution processing on images such as video frames or pictures, higher-resolution images can be provided to users.
Most existing super-resolution processing methods construct, by simulation, multiple groups of low-resolution and high-resolution images in one-to-one correspondence, and then learn the mapping relationship from low-resolution images to high-resolution images through machine learning. The simulation process includes: acquiring a group of high-resolution images and interpolating each image to reduce its resolution, thereby obtaining multiple groups of low-resolution and high-resolution images in one-to-one correspondence. The resolution-reducing interpolation can be performed, for example, by bilinear interpolation. After a low-resolution image is obtained, the corresponding high-resolution image can be found according to the mapping relationship from low-resolution images to high-resolution images and output to the user.
How to better accomplish super-resolution processing by training an image processing model has thus become a hot research topic.
Summary of the Invention
Embodiments of the present invention provide an image-related processing method and apparatus, a device, and a storage medium, with which a better image processing model can be obtained that is capable of accomplishing super-resolution processing of images with high quality.
In one aspect, an embodiment of the present invention provides a method for generating an image processing model, performed by a computing device and including:
generating, based on a convolutional neural network, an initial model for performing image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer;
acquiring training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
inputting the image data of the first image from the input layer into the intermediate layer for convolution calculation, and obtaining the result data of the convolution calculation from the output layer, the result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
performing parameter update on the convolution kernel parameters in the intermediate layer of the initial model according to the result data and the second image, and generating an image processing model according to the initial model after the parameter update.
In another aspect, an embodiment of the present invention further provides an image processing method, performed by a computing device and including:
acquiring target image data of an image to be processed;
processing the target image data with an image processing model to obtain result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
generating, according to the result data, a target image with N-fold resolution corresponding to the image to be processed;
where the image processing model includes an input layer, an output layer, and an intermediate layer, the convolution kernel parameters in the intermediate layer being determined after parameter updating based on training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image; the intermediate layer in the image processing model is obtained by updating the convolution kernel parameters in the intermediate layer according to the second image and result data, the result data being the channel output data of N*N channels obtained by the pre-update intermediate layer performing convolution calculation on the image data of the first image.
An embodiment of the present invention further provides an apparatus for generating an image processing model, including:
a generating module, configured to generate, based on a convolutional neural network, an initial model for performing image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer;
an obtaining module, configured to acquire training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
a calculation module, configured to input the image data of the first image from the input layer into the intermediate layer for convolution calculation, and obtain the result data of the convolution calculation from the output layer, the result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
a processing module, configured to perform parameter update on the convolution kernel parameters in the intermediate layer of the initial model according to the result data and the second image, and generate an image processing model according to the initial model after the parameter update.
An embodiment of the present invention further provides an image processing apparatus, including:
an obtaining module, configured to acquire target image data of an image to be processed;
a processing module, configured to process the target image data with an image processing model to obtain result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
a generating module, configured to generate, according to the result data, a target image with N-fold resolution corresponding to the image to be processed;
where the image processing model includes an input layer, an output layer, and an intermediate layer, the convolution kernel parameters in the intermediate layer being determined after parameter updating based on training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image; the intermediate layer in the image processing model is obtained by updating the convolution kernel parameters in the intermediate layer according to the second image and result data, the result data being the channel output data of N*N channels obtained by the pre-update intermediate layer performing convolution calculation on the image data of the first image.
An embodiment of the present invention further provides an image processing device, including a processor and a storage device;
the storage device being configured to store computer program instructions; and
the processor being configured to invoke the computer program instructions stored in the storage device to implement the above method for generating an image processing model.
An embodiment of the present invention further provides another image processing device, including a processor and a storage device;
the storage device being configured to store computer program instructions; and
the processor being configured to invoke the computer program instructions stored in the storage device to implement the above image processing method.
An embodiment of the present invention further provides a computer storage medium storing computer program instructions that, when executed, implement the above method for generating an image processing model or the above image processing method.
Embodiments of the present invention can perform training optimization on a model whose intermediate layer is configured for convolution calculation, based on the two images before and after the resolution-reduction processing, finally obtaining an image processing model capable of N-fold super-resolution processing. Based on this image processing model, generated by a special structure and training method, on the one hand, super-resolution processing of images (such as various pictures and video frames) can be carried out quickly, with clearly improved speed and efficiency; on the other hand, the super-resolution processing of images based on this model is also more accurate and stable.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a brief schematic diagram of an image processing process according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of one application scenario of the image processing model according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of another application scenario of the image processing model according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a video playback interface according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an initial model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a convolution calculation process according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of a method for acquiring the first image and the second image and training and optimizing the initial model according to an embodiment of the present invention;
FIG. 7 is a schematic flowchart of a method for performing super-resolution processing on an image according to an embodiment of the present invention;
FIG. 8 is a schematic flowchart of a method for optimizing an image processing model according to an embodiment of the present invention;
FIG. 9 is a schematic flowchart of a method for generating an image processing model according to the optimized initial model according to an embodiment of the present invention;
FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of an apparatus for optimizing an image processing model according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of an image processing device according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of another image processing device according to an embodiment of the present invention.
Modes for Carrying Out the Invention
In the embodiments of the present invention, image processing mainly refers to super-resolution processing of images. Super-resolution processing converts a low-resolution image into a corresponding high-resolution image; for example, after super-resolution processing, a 540P image can yield a 1080P image. During super-resolution processing, the representation value of each pixel of the low-resolution image is first obtained; then, taking the representation value of each pixel of the low-resolution image as the basis, each representation value is processed through the trained and optimized image processing model, which outputs multi-channel data, yielding multiple values related to each representation value; these values serve as the representation values of the new pixels of the super-resolution-processed image. Finally, based on the representation values of these new pixels, the high-resolution image can be generated by arrangement. FIG. 1 is a brief schematic diagram of the image processing process according to an embodiment of the present invention. The image processing process includes: after a low-resolution image is input, the image processing model is invoked to perform N-fold super-resolution processing and outputs data of N*N channels; new representation values are obtained based on these data, and a high-resolution image is generated based on the new representation values, yielding the desired target image, where N is a positive integer greater than or equal to 2. In one embodiment, the representation value may be: the value V of a pixel, the lightness value L of a pixel, the luma channel data of a pixel (i.e., the Y channel value), any one or more of the R (red), G (green) and B (blue) values of a pixel, the gray values of the channels of a multispectral camera, the gray values of the channels of special cameras (infrared cameras, ultraviolet cameras, depth cameras), and so on.
In one embodiment, performing N-fold super-resolution processing on a low-resolution image mainly means transforming, based on the image processing model, each pixel of the low-resolution image into N*N pixels. For example, when N equals 2, the resolution of the low-resolution image is to be doubled, that is, 2-fold super-resolution processing is performed; for one pixel of the low-resolution image, 2*2, i.e., 4 pixels, must be produced, and the image processing model must compute four values from the representation value of that one pixel; these four values are the corresponding representation values of pixels of the high-resolution image.
As shown in FIG. 1, for the low-resolution image 100 before super-resolution processing, after the image processing model is invoked, it outputs result data that includes channel output data of 2*2, i.e., 4, channels, and the high-resolution image 200 can be obtained based on the result data. In one embodiment, for the vertex pixel 101 at the top-left corner of the low-resolution image 100, the image processing model can compute, from the representation value of the vertex pixel 101, four values corresponding to the values 1011, 1012, 1013, and 1014 in the channel output data of the 2*2 channels. After the high-resolution image 200, i.e., the target image, is generated from the four channels' output data, these four values are respectively the representation values of the top-left pixel 1021, top-right pixel 1022, bottom-left pixel 1023, and bottom-right pixel 1024 in the top-left region of the high-resolution image 200 (the region in the left dashed box in FIG. 1). The image positions of the new pixels in the target image are determined with reference to the image positions of the original pixels in the low-resolution image 100. For example, the vertex pixel 101 corresponds to the four pixels of the top-left region, while the next pixel 102 after the vertex pixel 101 (the pixel in the first row and second column) corresponds, on the high-resolution image 200, to the region immediately adjacent to the region corresponding to the vertex pixel 101, specifically the region shown by the right dashed box in FIG. 1; pixel 102 corresponds to the 4 pixels in that region. The regions of the high-resolution image 200 corresponding to the other pixels of the low-resolution image 100 follow by analogy. For 3-fold super-resolution, the vertex pixel 101 would instead correspond to 3*3, i.e., 9, pixels of the top-left region.
The image processing model mentioned above can be generated based on a convolutional neural network. For different super-resolution magnification factors, image processing models with different numbers of output channels can be configured; the values of the channel output data on each output channel are the representation values of the pixels of the super-resolution-processed target image at the positions corresponding to the original pixels of the low-resolution image. In one embodiment, multiple image processing models can be provided, each with a different number of output channels; in this way, after a low-resolution image requiring super-resolution processing is received, a target image processing model can be selected from the multiple models according to the super-resolution factor, so that the super-resolution processing of the low-resolution image is carried out by that target model, satisfying the user's super-resolution demand. For example, an image processing model with 2*2 output channels can be provided to handle 2-fold super-resolution processing, and an image processing model with 3*3 output channels to handle 3-fold super-resolution processing. In this way, for applications such as video players, the user can request playback of video or images at different resolutions; for example, for a 540P video, if the user chooses 2-fold super-resolution processing, a 1080P video can be watched, and if the user chooses 4-fold super-resolution processing, a 2K video can be watched.
The image processing model needs training and optimization. In the embodiments of the present invention, on the basis of generating an initial model for image resolution processing, the initial model can be trained and optimized with training data to obtain an image processing model capable of super-resolving images. During use, the image processing model can also at any time serve as a new initial model and be optimized further as needed, so as to perform super-resolution more accurately.
FIG. 2 is a schematic diagram of a scenario in which the above image processing model is applied according to an embodiment of the present invention. The trained, ready-to-use image processing model is deployed in a video playback application installed on a smart terminal with image display and video playback capabilities, such as a smartphone, a tablet, a personal computer (PC), or a smart wearable device. The video playback application may be, for example, any application capable of playing videos and/or displaying images; the user opens it on the smart terminal and can request video playback or image display. In an embodiment, when the user requests video playback, the smart terminal requests the video from a server providing the video playback service; upon receiving the video returned by the server, the smart terminal extracts the low-resolution video, treats each video frame of the low-resolution video as a low-resolution image, and inputs each such image as a to-be-processed image into the image processing model, which super-resolves it to obtain the corresponding high-resolution image, that is, the target image; each target image corresponds to one video frame. After the target images are obtained, the video playback application plays them in sequence according to factors such as playback time order, achieving high-resolution video playback. In other embodiments, the smart terminal may also buffer the target images already obtained and, once a preset amount of data has been buffered, assemble and play the video based on factors such as playback time order. Likewise, when the smart terminal receives a single image, it can directly super-resolve that image as the to-be-processed image and display the resulting target image to the user. In this way, the smart terminal only needs to download low-resolution video data, saving bandwidth and the terminal's data traffic; the server only needs to store low-resolution video data, saving server storage; and after super-resolving the low-resolution images, the smart terminal also lets the user watch clearer, higher-definition video. The user can request switching between the low-resolution and high-resolution presentations by tapping a toggle button. Of course, the smart terminal may also receive user operations through an application interface, performing super-resolution and playing the high-resolution target image to the user only when the user taps a high-resolution playback button on that interface.
FIG. 3a is a schematic diagram of another scenario in which the above image processing model is applied according to an embodiment of the present invention. The trained, ready-to-use image processing model is deployed in an application server providing video playback or image display services, and a corresponding video playback application is installed on the smart terminal. The application server may store low-resolution videos or images; when a user requests to view them through the application installed on the smart terminal, the application server may send the low-resolution video or image data to the smart terminal, which presents it to the user through the installed application. While watching a low-resolution video, if the user wishes to watch a high-resolution version, the user can tap the high-resolution playback button provided on the application interface, whereupon the smart terminal sends an N-fold high-resolution playback request to the server. In response to the request, the server determines the low-resolution video data to be super-resolved, obtains the to-be-processed images from it (for example, a single low-resolution picture or a video frame of the low-resolution video), performs super-resolution through the image processing model, finally obtains the target images, and transmits them to the smart terminal. The smart terminal then displays the high-resolution pictures through the video playback application, or plays the high-resolution target images in sequence according to factors such as playback time order, achieving high-resolution video playback.
FIG. 3b is a schematic diagram of a video playback interface; in this embodiment, the playback application may be a media player. After the user taps the "super-resolution" button at the bottom right of the interface, the playback application super-resolves the video or image in the manner corresponding to FIG. 2 above, or the application server does so in the manner corresponding to FIG. 3a above, so as to provide the user with a high-resolution video or image. The "super-resolution" button at the bottom right of the interface may also be configured as multiple buttons of other types, for example a "2x super-resolution" icon button, a "4x super-resolution" icon button, and so on. In an embodiment, the server may also switch between low-resolution and high-resolution images based on the network resource information between the server and the terminal: when the network resource information satisfies a condition (for example, bandwidth is sufficient), the server automatically converts the low-resolution image into the high-resolution target image and sends it to the user; when it does not (for example, bandwidth is limited), the server sends the low-resolution image to the user.
The image processing procedure of the embodiments of the present invention is described in detail below from three aspects: the first covers building the image processing model and training and optimizing it to obtain the image processing model; the second covers further optimization of the trained image processing model; and the third covers the procedure of super-resolving images based on the image processing model.
The generation of the image processing model proceeds as follows. First, an initial model for image resolution processing is generated based on a convolutional neural network; the initial model includes an input layer, an output layer, and an intermediate layer. In the initial model, the input layer has one input channel, mainly used to input the image data of the training images. The image data refers to the representation value of every pixel of the training image; the representation value may be the value component of each pixel, or the lightness component of each pixel, or the Y-channel value of each pixel, or any one or more of the R, G, and B values of each pixel, or any one or more of the grayscale values of each channel of a multispectral camera or of a special camera (an infrared camera, an ultraviolet camera, or a depth camera), and so on; in other words, any two-dimensional data can train an image processing model for processing that data. From the value component of each pixel, or the lightness component, or the Y-channel value, or the R, G, and B values, an image processing model capable of super-resolving low-resolution images can be trained. The per-channel grayscale values of a special camera can train an image processing model capable of super-resolving the corresponding channel grayscale values of special cameras such as infrared, ultraviolet, and depth cameras. One can even train image processing models capable of super-resolving images such as ground-penetrating-radar images and remote-sensing images. The input layer of the initial model may also have multiple input channels so that multiple data of an image can be super-resolved; for example, the initial model may have three input channels so that the R, G, and B values of the image can be input simultaneously and RGB super-resolution can be performed.
The output layer of the initial model has N*N output channels; the channel output data of all the channels constitute the result data, and each value of the result data corresponds to the representation value of one pixel of the super-resolved target image. Corresponding to the image data input through the input channels as mentioned above, the representation values corresponding to the values of the output result data include: the value component of each pixel, or the lightness component of each pixel, or the Y-channel value of each pixel, or the R, G, or B value of each pixel, and so on.
In an embodiment, if the input layer has M input channels (M being a positive integer), the output layer has M*N*N output channels. For example, if the input layer receives R, G, and B values, the output should be 3*N*N channels (that is, M = 3), with each consecutive group of N*N output channels corresponding to the R, G, and B values respectively. For example, for 2-fold super-resolution, the first 4 output channels (numbered 1, 2, 3, and 4) correspond to the R values, the middle 4 (numbered 5, 6, 7, and 8) to the G values, and the last 4 (numbered 9, 10, 11, and 12) to the B values.
The intermediate layer may include multiple convolutional layers; generally, the more convolutional layers there are, the finer the computation, the better the quality of the target image obtained in subsequent super-resolution, and the lower its noise.
FIG. 4 is a schematic structural diagram of the initial model of an embodiment of the present invention, including an input layer, an output layer, and an intermediate layer composed of, for example, 5 convolutional layers. The parameters of each convolutional layer are listed in Table 1 below. The initial model is trained and optimized with training images so as to finally obtain an image processing model ready for deployment. The training images comprise a large amount of image data, including first images and second images, where a first image is an image obtained by performing N-fold resolution reduction on the corresponding second image. In other words, only high-resolution second images need to be collected; the low-resolution first images are obtained by downsampling. For a low-resolution first image, since the corresponding high-resolution second image already exists as a reference standard, it is easy to measure the initial model's ability to super-resolve the first image, making the training optimization more accurate.
Table 1:
Layer | Type        | Kernel size | Output channels | Number of kernels     | Stride | Pad
1     | Convolution | 3×3         | 16              | 1 group of 16         | 1      | 1
2     | Convolution | 3×3         | 32              | 16 groups of 32 each  | 1      | 1
3     | Convolution | 3×3         | 32              | 32 groups of 32 each  | 1      | 1
4     | Convolution | 3×3         | 32              | 32 groups of 32 each  | 1      | 1
5     | Convolution | 3×3         | 4               | 32 groups of 4 each   | 1      | 1
Regarding the information listed in Table 1: in other embodiments, more (or fewer) convolutional layers may be used, and the kernel size may be adjusted as needed. The number of input channels of the first convolutional layer is generally 1, that is, the image data of one to-be-processed image is input; for every other convolutional layer of the intermediate layer, its number of input channels equals the number of output channels of the preceding convolutional layer to which it connects, and its input data is that preceding layer's output data. For example, in Table 1 the first convolutional layer has 1 input channel and 16 output channels, and the second convolutional layer accordingly has 16 input channels, matching the first layer's output channel count. As to output channel counts, except for the last convolutional layer of the intermediate layer, the output channels of the other convolutional layers can also be configured as needed, whereas the output channel count of the last convolutional layer must be determined by the super-resolution factor: for an N-fold resolution increase, the last convolutional layer should have N*N output channels; for example, the 4 channels shown in Table 1 come from 2*2 and serve 2-fold super-resolution. The kernel size can be customized, for example an odd number greater than 2, and the stride can be 1, which guarantees that the convolution of each layer does not shrink the image. A Pad value matching the kernel size and stride must then be computed; the Pad value determines how many pixels of width are replicated outward during convolution, guaranteeing that each convolution loses no image width or height. Pad is computed as: Pad = (kernel size - stride) / 2. Taking Table 1 as an example, the kernel size is 3, so Pad = (3 - 1) / 2 = 1.
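As a quick check of the Pad formula, the following short Python sketch computes the value for the 3×3 kernels of Table 1 and for the 11×11 merged kernel discussed later; the helper name is hypothetical.

```python
def same_pad(kernel_size: int, stride: int = 1) -> int:
    """Pad = (kernel size - stride) / 2, per the formula above."""
    pad, remainder = divmod(kernel_size - stride, 2)
    assert remainder == 0, "the formula assumes an odd kernel with stride 1"
    return pad

assert same_pad(3) == 1    # the 3x3 kernels of Table 1
assert same_pad(11) == 5   # the 11x11 merged kernel discussed later
```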
In the embodiments of the present invention, the intermediate layer of the generated initial model uses only convolutional layers and introduces no other computation layers such as activation layers. With reference to FIG. 5, the first convolutional layer includes, per its configured output channel count, 16 convolution kernels, each a 3×3 matrix of distinct values; these values are the kernel parameters. The kernel parameters of the final image processing model are obtained by training and optimizing the kernel parameters of the initial model; the parameters of the kernels may be entirely different from one another or partially the same.
The image data of the first image, input from the input layer of the image processing model into the intermediate layer, is a two-dimensional matrix of representation values. For the first convolutional layer with its 16 kernels, the image data of the first image is first convolved with each kernel of the first layer separately, producing 16 result data. The image data of the first image refers to the representation value of every pixel of the first image; one can consider that the representation value of each pixel of the first image yields 16 values after passing through the first convolutional layer.
The second convolutional layer includes, per its configured output channel count (a different count can be configured), 32 groups of kernels, each group containing 16 kernels, each kernel a 3×3 matrix of distinct values. The input of the second convolutional layer is the 16 result data output by the first convolutional layer, and its output is 32 result data. In an embodiment, the convolution of the second convolutional layer proceeds by convolving the 16 result data output by the first layer with the 16 kernels of one group respectively, obtaining 16 convolution results, and then summing these 16 results to obtain the output data corresponding to that group of kernels (one of the 32 output data). Doing this for all 32 groups of kernels yields the result data of the 32 channels of the second convolutional layer.
The third convolutional layer is composed of 32 groups of kernels, each group with 32 kernels, and is processed in the same way as the second layer: for each group, its 32 kernels are convolved respectively with the 32 result data output by the second convolutional layer, and the 32 convolution results are summed to obtain that group's output data; doing this for all 32 groups yields the result data of the 32 channels of the third convolutional layer. By analogy, the fourth convolutional layer likewise yields 32 channels of result data, and the last convolutional layer uses a 4-channel convolution to implement deconvolution. Since 2-fold super-resolution turns 1 pixel into 4 pixels, in the initial model the fifth convolutional layer outputs result data of four channels, which can be regarded as the representation values of the top-left, top-right, bottom-left, and bottom-right pixels respectively. The output layer may finally output the 4-channel result data directly, or, in an embodiment, process the result data and directly output one high-resolution target image arranged from the channel output data of the 4 channels. In this way, a 4-channel convolutional layer implements a deconvolution layer with stride=2; that is, the last layer configured in the initial model in fact uses convolution to implement deconvolution.
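The group-and-sum computation described for the second and third convolutional layers can be sketched as follows in Python; the [C_out, C_in, k, k] kernel layout and the use of scipy's convolve2d are assumptions of this sketch rather than requirements of the disclosure.

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(inputs: np.ndarray, kernels: np.ndarray, pad: int) -> np.ndarray:
    """inputs: [C_in, H, W]; kernels: [C_out, C_in, k, k].
    Each output channel sums the per-input-channel convolutions of one
    kernel group, as described for the second and third layers."""
    padded = np.pad(inputs, ((0, 0), (pad, pad), (pad, pad)))
    outputs = []
    for group in kernels:  # one group of C_in kernels per output channel
        acc = sum(convolve2d(padded[c], group[c], mode="valid")
                  for c in range(inputs.shape[0]))
        outputs.append(acc)
    return np.stack(outputs)

x = np.random.rand(16, 64, 64)    # the 16 result data of the first layer
w = np.random.rand(32, 16, 3, 3)  # 32 groups of 16 kernels each
y = conv_layer(x, w, pad=1)
assert y.shape == (32, 64, 64)    # 32 channels, image size preserved
```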
In an embodiment, computing the result data of each output channel of each convolutional layer of the intermediate layer includes: convolving the input data one-to-one with the kernels of each group, then summing, to obtain the result data of the output channel corresponding to that group of kernels. Each kernel group in a convolutional layer corresponds to one output channel, so the number of output channels equals the number of kernel groups. Other embodiments may include other computation schemes; the convolutions between the input data and the kernels can be combined in many ways, for example averaging after convolving each input data with a kernel, and so on. It can be understood that whichever computation scheme is used, the kernel parameters of the initial model can be trained and optimized based on the final output result data, and it can be ensured that after one or more rounds of training optimization, if the first image is input again, the differences between the representation values of the pixels at the same image positions in the target image indicated by the resulting result data and in the second image are small, for example all the differences are below a preset threshold, or a preset proportion of the differences are below a preset threshold.
After the image data of the first image is computed by the initial model, the result data can finally be output; every value of the result data is the representation value of the pixel of the super-resolved target image at the position corresponding to the first image. For example, the representation value may be the value component, or lightness component, or Y-channel value, or RGB value of the pixel of the super-resolved target image at the position corresponding to the first image, or alternatively the grayscale value of each channel of a multispectral camera, or of a special camera (infrared, ultraviolet, or depth), or the grayscale value of the corresponding channel of a remote-sensing image, and so on. As described for FIG. 1, after 2-fold super-resolution of the low-resolution image, result data including 4 channels of channel output data is output, from which the target image can be assembled.
In an embodiment, if the output representation values (that is, the values in the output result data) represent Y-channel values, then when composing the target image, the Y-channel value of a pixel can be combined with the values of the other two chrominance (UV) channels of the same pixel, obtained by interpolation or similar means, yielding the pixel's YUV data; the super-resolved target image is finally obtained by arranging all pixels by their image positions. In an embodiment, if the output representation values represent value components, then when composing the target image, the value component V of a pixel can be combined with the hue H and saturation S of the same pixel, obtained by interpolation or similar means, yielding the pixel's HSV data, and the super-resolved target image is likewise obtained by arranging all pixels by their image positions. Similarly, processing an output lightness component L in the same way yields HSL data, from which the super-resolved target image can also be arranged by pixel position.
In an embodiment, when the output representation values (that is, the values in the output result data) represent RGB values, the RGB value of each pixel can be assembled directly from the R, G, and B values of the pixel at each pixel position, and the super-resolved RGB target image is finally obtained by arranging all pixels by their image positions.
When the input data of the to-be-processed image is RGB data, that is, when the pixel representation values are RGB values, there are two ways to perform super-resolution. One way is to configure three image processing models that compute on the R, G, and B values separately, each outputting its own result data; the R, G, and B values of the target image are obtained from the respective result data, and the RGB values are then combined to obtain the target image. In an embodiment, in this case the image processing models can also be trained and optimized separately for the R, G, and B values. The other way is for the image processing model to have three input channels corresponding respectively to the R, G, and B values of each pixel of the to-be-processed image, outputting 3 corresponding groups of N*N channels, each group corresponding to the R, G, or B values, after which the RGB values are combined to obtain the target image.
In an embodiment, after the image processing model is deployed, when the target image data of a to-be-processed image needs processing, one can first determine whether the obtained target image data consists of RGB values; if so, the RGB values are taken as the target image values and three image processing models are selected, so that the models process the R, G, and B values respectively, obtaining the N*N-channel result data corresponding to the RGB values. In an embodiment, YUV data, HSV data, HSL data, and the like can be handled in the same way, selecting three image processing models to accomplish the N-fold super-resolution of the to-be-processed image.
In the embodiments of the present invention, the entire process described above, from inputting the first image to obtaining the result data, is regarded as the initial model's super-resolution of the first image.
After the initial model super-resolves the first image to obtain the target image, the target image is compared with the second image, chiefly by comparing the representation values of the pixels at every corresponding image position of the target image and the second image; for example, the representation values of the pixel at the top-left vertex of the target image and of the pixel at the top-left vertex of the second image are compared to obtain the difference of the representation values. From the differences of the representation values of all pixels, or of the pixels within a preset region (for example, the pixels in more than 80% of the image area of the target image and the second image), the similarity information between the target image and the second image is judged; if the similarity information does not satisfy the optimization condition, the initial model is considered able to super-resolve the first image well, and the same processing continues with the next first image.
In an embodiment, the similarity information between the target image and the second image may refer to the average of the differences of the representation values of all pixels of the two images: if the average is below a first similarity threshold, the optimization condition is considered unmet; otherwise, it is considered met. In an embodiment, the similarity information may refer to the difference of the representation values of the pixel at every image position: if the differences of the representation values of more than M pixels are all smaller than a preset second similarity threshold, the optimization condition is considered unmet; otherwise, it is considered met.
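One possible reading of the first variant of this optimization condition, expressed as a minimal Python sketch (the threshold value and the use of a mean absolute difference are assumptions of the sketch):

```python
import numpy as np

def needs_update(target: np.ndarray, reference: np.ndarray,
                 threshold: float = 5.0) -> bool:
    """Return True when the optimization condition is met, reading it as:
    the mean absolute difference between the representation values of the
    target image and of the second image reaches the similarity threshold."""
    diff = np.abs(target.astype(np.float64) - reference.astype(np.float64))
    return diff.mean() >= threshold
```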
If the optimization condition is met, the kernel parameters included in each convolutional layer of the intermediate layer need to be adjusted and optimized in a backward pass; after the adjustment, the adjusted initial model super-resolves the first image again. If the similarity information between the target image obtained after this reprocessing and the second image does not satisfy the optimization condition, the next first image is obtained and the preliminarily optimized initial model is invoked for super-resolution; if the condition is still met, the kernel parameters of the kernels of the convolutional layers are adjusted and optimized again before another round of super-resolution. Through super-resolving a large number of first images and optimizing the initial model, an image processing model whose kernel parameters have been trained and optimized is finally obtained.
The values of the target image channel of the first image can be used as the training input data; for example, the Y-channel values of the first image are input from the input layer as the training input data, and after the convolution computations above, the result data output from the output layer includes N*N channels of Y-channel values. When deciding whether to optimize the kernel parameters of the kernels of the convolutional layers of the initial model, one can judge whether the differences between the output N*N channels of Y-channel values and the Y-channel values of the second image satisfy the optimization condition: if so, the kernel parameters of the kernels of the convolutional layers need optimizing; if not, the next image is obtained as the first image and its Y-channel values are extracted.
Referring to FIG. 6, a schematic flowchart of the method, according to an embodiment of the present invention, of obtaining a first image and a second image to train and optimize the initial model. The method is performed by a computing device, the computing device being a server or smart terminal capable of providing video and/or image services, and includes the following steps. In S601, the computing device obtains a high-resolution image, which is the second image; in the embodiments of the present invention, these high-resolution images can be obtained from websites providing high-definition images. In S602, the computing device performs N-fold resolution reduction on the high-resolution image to obtain a low-resolution image, which is the first image. In S603, the computing device builds a training image library from the large numbers of first and second images obtained. In S604, the computing device takes training images from the training image library and optimizes the initial model, generated based on a convolutional neural network for image resolution processing, to obtain the image processing model.
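A minimal sketch of S601 to S603 in Python follows, assuming Pillow and bicubic downsampling; the disclosure only requires an N-fold resolution reduction, not a particular resampling filter, so the filter choice is an assumption.

```python
import numpy as np
from PIL import Image

def make_training_pair(path: str, n: int = 2):
    """Build a (first image, second image) pair: the second image is the
    original high-resolution image; the first is its N-fold downscaled
    version (S601-S602). Training here uses the Y channel, per the
    embodiment above."""
    second = Image.open(path).convert("YCbCr")
    w, h = second.size
    first = second.resize((w // n, h // n), Image.BICUBIC)
    y_low = np.asarray(first)[..., 0]    # first-image Y channel (model input)
    y_high = np.asarray(second)[..., 0]  # second-image Y channel (reference)
    return y_low, y_high
```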
In an embodiment, the image processing model obtained after training and optimization can be deployed to a smart terminal or an application server for super-resolving images or videos. In an embodiment, before deploying the image processing model, the already trained and optimized model can be processed further to obtain an even better image processing model.
In an embodiment, the already trained and optimized image processing model (which may be called the initial image processing model) can be processed further, chiefly by model compression, so as to obtain a better image processing model ready for deployment. In the embodiments of the present invention, since the intermediate layer of the initial model was designed with convolutional layers only, the preliminary image processing model can be compressed and optimized by a special treatment: for example, multiple convolutional layers can be compressed and merged into a single convolutional layer, allowing super-resolution to run faster after actual deployment. For example, the 5 convolutional layers of Table 1 can be compressed into 1 convolutional layer, merging the original 5 convolution computations into one and greatly reducing the super-resolution computation time.
In an embodiment, the method for merging two convolutional layers is as follows: denote the convolutional-layer parameters of the first convolutional layer by w1, those of the second convolutional layer by w2, and those of the merged layer by w3. As shown in Table 2, wi (i = 1, 2, 3) is a 4-dimensional array, where the 1st dimension is the height, the 2nd the width, the 3rd the number of input channels, and the 4th the number of output channels.
Table 2:
Layer parameters | Height   | Width    | Input channels | Output channels
w1               | h1       | w1       | c1             | c2
w2               | h2       | w2       | c2             | c3
w3               | h1+h2-1  | w1+w2-1  | c1             | c3
That is, the two formulas in Table 2 above, h1+h2-1 (formula 1) and w1+w2-1 (formula 2), give the size of the merged kernels together with the input and output channel counts. Taking the first and second convolutional layers of Table 1 as an example: after these two layers are merged, each kernel's height is 3+3-1=5 and each kernel's width is 3+3-1=5; meanwhile, the input channel count is the first convolutional layer's channel count, 1, and the output is the second convolutional layer's output channel count, 32; that is, in the merged convolutional layer (called the merged initial convolutional layer), the kernels are 5*5*1*32. Further merging this w3, as the new first convolutional layer, with the third convolutional layer w4 (as the new second convolutional layer) yields a merged w5 of height 5+3-1=7 and width 5+3-1=7, with 1 input channel and the third convolutional layer w4's output channel count of 32, that is, kernels of 7*7*1*32 in the new merged initial convolutional layer; and so on, finally obtaining one 11x11x1x4 merged convolutional layer, whose kernels are 11x11 in size, with 1 input channel and 4 output channels.
The merged kernel parameters can be computed by kernel-merging computation, that is, by computing on the kernels of the two convolutional layers in the manner of matrix multiplication. Concretely: represent the kernels of the first convolutional layer and the kernels of the second convolutional layer of the intermediate layer of the parameter-updated initial model each by a kernel matrix, and perform the kernel-merging computation using the matrix multiplication corresponding to the kernel matrices, obtaining the merged kernels. The kernel at row x, column y of the merged kernels is: the sum of the new kernels obtained by convolving, one by one, each element of row x of the kernel matrix corresponding to the first convolutional layer with each element of column y of the kernel matrix corresponding to the second convolutional layer.
Taking the above 5 convolutional layers as an example, the first convolutional layer can be taken as the first layer to merge, the input channel count of its input data being called the starting channel count c1, and the output channel count of its output data the middle channel count c2; the second convolutional layer is taken as the second layer to merge, the output channel count of its output data being called the final channel count c3. The first and second convolutional layers can thus be represented by kernel matrices: the first convolutional layer w1 can be viewed as a c1-by-c2 matrix A, the second convolutional layer w2 as a c2-by-c3 matrix B, and the merged initial convolutional layer w3 obtained by merging the two as a c1-by-c3 matrix C. Every element of these three matrices is a convolution kernel. The convolution-merging computation resembles matrix multiplication: "multiplying" the two matrices yields the merged matrix, in which the value of the element at row x, column y is the sum of the new kernels obtained by convolving each element of row x of the left matrix one-to-one with each element of column y of the right matrix. That is, denoting the kernel at row x, column y of the merged kernel matrix by C(x, y), we have:
Formula 3:
C(x, y) = Σ_{k=1}^{c2} A(x, k) * B(k, y)
where * denotes the two-dimensional convolution operation. With formula 3, every kernel of the layer merged from the first and second convolutional layers can be computed, thereby determining the kernel parameters of the kernels of the merged initial convolutional layer.
In an embodiment, if the intermediate layer includes only the first and second convolutional layers, the merged initial convolutional layer can be determined directly as the merged convolutional layer. If the intermediate layer includes multiple convolutional layers, then based on formula 3 above, the merged initial convolutional layer is taken as the new first convolutional layer, and the next convolutional layer (for example, the third convolutional layer) and/or a merged initial convolutional layer obtained by merging two other convolutional layers of the intermediate layer is taken as the new second convolutional layer; the above steps of computing the merged kernels and obtaining a new merged initial convolutional layer are repeated, finally yielding the merged convolutional layer. In an embodiment, the size of the resulting merged convolutional layer equals the w and h computed from the convolutional-layer parameters of the two (or more) convolutional layers by formulas 1 and 2 above.
In an embodiment, after the above layer merging is completed and an image processing model comprising the input layer, the output layer, and the merged convolutional layer is obtained, experiments show that the kernels of the merged convolutional layer produced by the above merging operation are separable kernels. For example, for the 5 convolutional layers of Table 1, the 4 kernels obtained after merging are separable kernels; therefore, decomposition algorithms, for example the singular value decomposition (SVD) algorithm, can be used to decompose each kernel of the single merged convolutional layer into one row and one column, turning the original single two-dimensional convolution into two one-dimensional convolutions. Decomposing the single convolutional layer yields the final image processing model, which effectively speeds up subsequent convolution computation.
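A minimal SVD-based separation of one merged kernel might look like the following sketch, assuming the kernel is indeed (numerically) rank-1 as described above; the example kernel is synthetic.

```python
import numpy as np

def separate_kernel(kernel: np.ndarray):
    """Split a separable 2-D kernel into one column and one row via SVD,
    so one 2-D convolution becomes two 1-D convolutions."""
    u, s, vt = np.linalg.svd(kernel)
    col = (u[:, 0] * np.sqrt(s[0]))[:, None]   # e.g. 11x1
    row = (vt[0, :] * np.sqrt(s[0]))[None, :]  # e.g. 1x11
    return col, row

k = np.outer(np.hanning(11), np.hanning(11))  # a synthetic separable 11x11 kernel
col, row = separate_kernel(k)
assert np.allclose(col @ row, k)  # exact for a rank-1 (separable) kernel
```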
Once the usable image processing model is obtained, it can be deployed to an application server providing services such as video playback or image display, or to a smart terminal with the relevant video or image applications installed. Every video frame of a video, or every single image, can then be super-resolved through the image processing model, so that high-resolution or low-resolution images can be provided to users on demand.
In an embodiment, a video frame or image can generally be an image comprising YUV image channels, where Y denotes luma and U and V denote chroma. When super-resolving an image, the image processor can process the values on the Y image channel (that is, the Y-channel values) of the video frame or single image to be super-resolved, while the values of the U and V image channels (the U-channel and V-channel values) can be handled in other ways, for example by interpolation computation, to obtain the U and V values of the multiple super-resolved pixels. FIG. 7 is a schematic flowchart of a method, according to an embodiment of the present invention, for super-resolving an image. The method is performed by a computing device, the computing device being a server or smart terminal capable of providing video and/or image services. In S701, the computing device obtains the images to be super-resolved; these may be image frames of videos whose resolution needs raising, or images captured by low-resolution cameras, or other low-resolution images. In an embodiment, an image below a preset resolution threshold, for example below 540P, can be considered a low-resolution image; a low-resolution image can be taken as the to-be-processed image for subsequent super-resolution. In another embodiment, low and high resolution are relative terms: when certain images need super-resolution, those images are considered low-resolution images.
In S702, the computing device extracts and separates the data of the obtained image to be super-resolved, obtaining the Y-channel values and the UV-channel values respectively. In an embodiment, a non-YUV image can be converted to a YUV-format image so that S702 can be performed. In S703, the computing device invokes the image processing model to process the Y-channel values and outputs the N-fold super-resolved Y-channel values. In S704, the computing device computes the UV-channel values with an interpolation algorithm, obtaining the N-fold interpolation-upscaled UV-channel values. The interpolation algorithm may be, for example, nearest-neighbor interpolation (the grayscale value of a transformed pixel equals that of the nearest input pixel), bilinear interpolation, bicubic interpolation, or Lanczos interpolation. In S705, the computing device merges the N-fold super-resolved Y-channel values with the N-fold interpolation-upscaled UV-channel values and rearranges them to obtain the super-resolved target image.
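Putting S702 to S705 together, a hedged Python sketch of the whole pipeline follows; `sr_y` stands in for the trained image processing model and is an assumption, and Pillow's YCbCr mode is used as a stand-in for YUV.

```python
import numpy as np
from PIL import Image

def super_resolve(img: Image.Image, sr_y, n: int = 2) -> Image.Image:
    """FIG. 7 pipeline: the model handles the Y channel only; U and V are
    upscaled by plain interpolation. `sr_y` is a callable returning the
    N-fold enlarged Y plane and stands in for the trained model."""
    ycbcr = img.convert("YCbCr")                       # S702: convert/separate
    y, u, v = [np.asarray(c) for c in ycbcr.split()]
    y_hr = sr_y(y)                                     # S703: model on Y
    size = (img.width * n, img.height * n)
    u_hr = np.asarray(Image.fromarray(u).resize(size, Image.BILINEAR))  # S704
    v_hr = np.asarray(Image.fromarray(v).resize(size, Image.BILINEAR))
    merged = np.stack([y_hr, u_hr, v_hr], axis=-1).astype(np.uint8)     # S705
    return Image.fromarray(merged, mode="YCbCr").convert("RGB")
```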
The embodiments of the present invention can accurately and comprehensively train and optimize a model configured with an intermediate layer for convolution computation, based on the two images before and after resolution reduction, finally obtaining an image processing model capable of N-fold super-resolution. With this image processing model, on the one hand, there is no need to look up a mapping between low-resolution and high-resolution images: super-resolution computation can be performed directly on the relevant data of an image or video frame, so images and video frames can be super-resolved very quickly, with clearly improved super-resolution speed and efficiency; on the other hand, the super-resolution of images is also more accurate and stable.
Referring now to FIG. 8, a schematic flowchart of a method for optimizing an image processing model according to an embodiment of the present invention. The method of this embodiment may be performed by a computing device, the computing device being a server or smart terminal capable of providing video and/or image services; the method mainly trains and optimizes an image processing model used for image super-resolution and includes the following steps.
S801: Generate, based on a convolutional neural network, an initial model for image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer. In an embodiment, the intermediate layer is a convolutional layer performing convolution computation. In an embodiment, the intermediate layer of the initial model may include convolutional layers but no other layer structures such as activation layers, which facilitates faster and more accurate intermediate-layer merging later, merging multiple convolutional layers into one. Besides the input and output layers, the intermediate layer may include multiple layers; the structure of the initial model may be as shown in FIG. 4.
S802: Obtain training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image. The training images may be various images or video frames of videos. The second image is an image at original resolution, generally a high-resolution image, for example a 1080P image, or even a higher 2K or 4K image. The first image can be obtained by reducing the resolution of the second image; after the high-resolution images are obtained, each can be interpolated to lower the image's resolution. First and second images correspond one to one; to train and optimize a better image processing model, the obtained training images may include tens of thousands, hundreds of thousands, or even more high-resolution images (that is, second images). The first image must be input into the initial model for convolution computation, which can be regarded as the super-resolution of the first image, while the second image is used to verify the high-resolution target image corresponding to the result data output after the first image passes through the initial model's convolution computation, thereby deciding whether to train and optimize the kernel parameters in the initial model.
S803: Input the image data of the first image from the input layer into the intermediate layer for convolution computation, and obtain the result data of the convolution computation from the output layer, the result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2. If the trained image processing model is to raise image resolution N-fold, the convolution computation must yield result data including channel output data of N*N channels; the value at each position of the N*N channel output data is the representation value of the pixel at the corresponding position of the super-resolved image. For example, if the trained image processing model is to achieve 2-fold super-resolution, each original pixel must become 4 pixels; the distribution of the 4 pixels can be as shown in FIG. 1. The target image corresponding to the result data refers to the image assembled from the values of the channel output data included in the result data, where one value in each channel's output data represents the representation value of one pixel of the target image.
The image data input at the input layer of the image processing model is a two-dimensional matrix. For example, if the first image has resolution M*M (say 960*540), the input image data is likewise an M*M (correspondingly, also 960*540) two-dimensional matrix whose values correspond one to one with the representation values of the pixels of the first image; that is, the value in the first row, first column of the matrix is the representation value (such as the value component, or lightness component, or Y-channel value, or even data of the aforementioned remote-sensing or ground-penetrating-radar images) of the pixel in the first row, first column of the first image. In an embodiment, if RGB is used as the representation value, the input image data is one, two, or three two-dimensional matrices; that is, the input image data is any one or more of the two-dimensional matrix of R values, the two-dimensional matrix of G values, and the two-dimensional matrix of B values. Each of the two-dimensional matrices corresponding to RGB can undergo the same training processing.
S804: Update, according to the result data and the second image, the convolution kernel parameters in the intermediate layer of the initial model, and generate an image processing model according to the parameter-updated initial model. Training of the initial model can be completed by finishing the training and optimization of the kernel parameters of each convolutional layer of the intermediate layer. Specifically, whether to optimize the kernel parameters in the intermediate layer can be determined from the differences between the values of the result data and the representation values of the pixels of the second image.
In an embodiment, based on the result data, a super-resolved target image is rearranged and generated according to the pixels' initial positions and the values at the corresponding positions of each channel's output data; the representation values of the pixels at every image position of this super-resolved target image and of the second image are compared to determine the differences of the representation values, thereby determining the correctness of the intermediate layer's convolution computation.
In an embodiment, instead of generating the super-resolved image from the result data, the channel output data of each output channel and the values at the corresponding data positions can be compared directly with the representation values of the pixels at the corresponding positions of the second image to determine their differences. For example, in the channel output data of the first channel, the value at the top-left vertex is compared with the representation value of the pixel at the top-left vertex of the second image (that is, the first pixel of the first row), determining the difference between that value and the representation value; in the result data of the second channel, the value at the top-left vertex is compared with the representation value of the second pixel of the first row of the second image, determining their difference. Each channel's output data in the result data is a two-dimensional matrix of values whose positions and count correspond to the positions and count of the pixels of the first image. For example, if the first image has 800*600 resolution, each channel's output data is also an 800*600 matrix, and the value at each position of the matrix is obtained by convolution computation on the representation value of the pixel at the corresponding position of the first image. For example, the value in the first row, second column of some output channel's data corresponds to the pixel in the first row, second column of the first image and is obtained by convolution computation from the representation value of that corresponding pixel of the first image.
If, in the target image corresponding to the result data, the differences between the representation values of the pixels at more than a preset first number of positions (for example, taking 90% of the total pixel count of the second image as the preset first number) and the representation values of the pixels at the corresponding positions of the second image do not satisfy the optimization condition, the current image processing model is considered able to super-resolve the first image correctly, and the kernel parameters need no updating. In other embodiments, not satisfying the optimization condition may mean that there exist P differences indicating that the two representation values are equal or differ within a preset difference threshold, P being a positive integer. For example, the representation value of the pixel at the top-left vertex of the super-resolved image equals that of the pixel at the top-left vertex of the second image, or their difference is within a preset difference threshold (for example, a difference threshold of 5).
If the differences between the representation values of the pixels at more than a preset second number of positions (for example, taking 10% of the total pixel count of the second image as the preset second number) and the representation values of the pixels at the corresponding positions of the second image satisfy the optimization condition, the current model is considered unable to super-resolve the first image correctly, and the kernel parameters need updating.
A convolution kernel in a convolutional layer is also a two-dimensional matrix, for example the 3*3 matrix mentioned above. Each value of the matrix is called a kernel parameter. After the kernel parameters are optimized and adjusted, in the target image corresponding to the result data computed and output with the kernels, the differences between the representation values of the pixels at more than the preset first number of positions and the representation values of the pixels at the corresponding positions of the second image can be made not to satisfy the optimization condition.
In an embodiment, S804 may specifically include: determining the differences of the representation values of the pixels at the same image positions between the target image and the second image, the target image being an image determined from the result data; updating the kernel parameters of the intermediate layer of the initial model according to the differences of the representation values; and generating an image processing model according to the parameter-updated initial model. In an embodiment, the image data of the first image refers to the Y-channel values extracted from the first image; one value in each channel's output data in the result data represents the Y-channel value of one pixel of the target image; and the difference of representation values refers to the difference of the Y-channel values of the pixels at the same image positions between the target image corresponding to the result data and the second image. The difference of representation values is chiefly used to reflect the change between a pixel at some position of the target image corresponding to the result data and the pixel at the same position of the second image, for example a difference of Y-channel values, a difference of grayscale values, a difference of value components, and so on.
In an embodiment, the image data of the first image refers to the Y-channel values extracted from the first image; the target image corresponding to the result data is assembled from Y-channel image data, U-channel image data, and V-channel image data, where the Y-channel image data of the target image is obtained from the result data, the U-channel image data of the target image is obtained by interpolating the U-channel values extracted from the first image, and the V-channel image data of the target image is obtained by interpolating the V-channel values extracted from the first image; one value in each channel's output data in the result data represents the Y-channel value of one pixel of the target image.
In an embodiment, when the optimization condition is satisfied, the kernel parameters of the intermediate layer of the initial model are updated according to the differences of the representation values, completing the optimization of the kernel parameters. The parameter-updating process of the kernel parameters is also their optimization process; the optimization chiefly means optimizing the values of the two-dimensional matrices corresponding to the kernels so that the computed differences of the representation values no longer satisfy the optimization condition. If the differences of the representation values do not satisfy the optimization condition, the current image processing model can already super-resolve the first image rather accurately and needs no further optimization.
Referring now to FIG. 9, a schematic flowchart of the method, according to an embodiment of the present invention, of generating an image processing model from the optimized initial model. The method of this embodiment is performed by a computing device, the computing device being a server or smart terminal capable of providing video and/or image services; the method may include the following steps.
S901: The computing device performs convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model, obtaining a merged convolutional layer. In an embodiment, the intermediate layer includes at least a first convolutional layer and a second convolutional layer, and the step of merging at least two convolutional layers of the intermediate layer of the parameter-updated initial model into one merged convolutional layer includes: merging the first convolutional layer and the second convolutional layer included in the intermediate layer of the parameter-updated initial model; the merging includes computing the merged-convolutional-layer parameters and computing the merged kernels. Once the final merged-convolutional-layer parameters and merged kernel parameters are obtained, the merged convolutional layer can be constructed. In the embodiments of the present invention, the merged-convolutional-layer parameters chiefly indicate the size of the kernels; for example, for the 5-layer convolutional structure described in Table 1, the merged-convolutional-layer parameter finally computed is 11*11, indicating that the kernels of the merged convolutional layer are 11*11 in size. The computed merged kernels constitute all the kernels of the merged convolutional layer.
In an embodiment, the step of merging at least two convolutional layers of the intermediate layer of the parameter-updated initial model into one merged convolutional layer may include: obtaining the merged-convolutional-layer parameters from the convolutional-layer parameters of the first convolutional layer and of the second convolutional layer included in the intermediate layer of the parameter-updated initial model; determining a merged initial convolutional layer, the size of whose kernels equals the value indicated by the merged-convolutional-layer parameters; and obtaining the merged convolutional layer from the merged initial convolutional layer. The merged-convolutional-layer parameters are: kernel height h = h1 + h2 - 1 and kernel width w = w1 + w2 - 1, where h1 is the height and w1 the width of each kernel set for the first convolutional layer, and h2 is the height and w2 the width of each kernel set for the second convolutional layer. In an embodiment, if the intermediate layer includes only the first and second convolutional layers, the merged initial convolutional layer can serve directly as the merged convolutional layer; if the intermediate layer includes further convolutional layers, the merged initial convolutional layer can further be taken as the new first convolutional layer, and the next convolutional layer (for example, the third convolutional layer) and/or a merged initial convolutional layer obtained by merging two other convolutional layers of the intermediate layer as the new second convolutional layer, repeating the above steps of computing the merged-convolutional-layer parameters and obtaining a merged initial convolutional layer, so as to obtain the final merged convolutional layer.
The merged-convolutional-layer parameters, that is, the size of the merged convolutional layer, can be determined from the formulas mentioned above. In an embodiment, a computation scheme for determining the merged kernels is also provided. Merging at least two convolutional layers of the intermediate layer of the parameter-updated initial model into a merged convolutional layer may include: representing the kernels of the first convolutional layer and of the second convolutional layer included in the intermediate layer of the parameter-updated initial model each by a kernel matrix, and performing the convolution-merging computation using the matrix multiplication corresponding to the kernel matrices, obtaining the merged kernels; obtaining the merged initial convolutional layer from the computed merged kernels, which constitute all the kernels of the merged initial convolutional layer; and obtaining the merged convolutional layer from the merged initial convolutional layer. The kernel at row x, column y of the merged kernels is: the kernel obtained by summing the new kernels produced by convolving, one by one, each element of row x of the kernel matrix corresponding to the first convolutional layer with each element of column y of the kernel matrix corresponding to the second convolutional layer; here, representing by a kernel matrix means that every element of a kernel matrix corresponds to one kernel of the first or second convolutional layer, that is, every element of the kernel matrix corresponding to the first convolutional layer corresponds to one kernel of the first convolutional layer, and every element of the kernel matrix corresponding to the second convolutional layer corresponds to one kernel of the second convolutional layer. In an embodiment, the specific computation of the merged kernels may refer to the description corresponding to formula 3 above.
In an embodiment, if the intermediate layer includes only the first and second convolutional layers, the merged initial convolutional layer can serve directly as the merged convolutional layer; if the intermediate layer includes further convolutional layers, the merged initial convolutional layer can further be taken as the new first convolutional layer, and the next convolutional layer (for example, the third convolutional layer) and/or a merged initial convolutional layer obtained by merging two other convolutional layers of the intermediate layer as the new second convolutional layer, repeating the above steps of computing the merged kernels and obtaining a merged initial convolutional layer, so as to obtain the final merged convolutional layer.
S902: The computing device generates an image processing model from the input layer, the output layer, and the merged convolutional layer. An image processing model including the input layer, the output layer, and the merged convolutional layer can be generated directly. In an embodiment, the merged convolutional layer can be further decomposed; S902 may then specifically include: the computing device decomposes the kernel parameters of the merged convolutional layer into one row of parameters and one column of parameters, and generates the image processing model from the input layer, the output layer, and the decomposed row of parameters and column of parameters. This turns the two-dimensional convolution computation into two one-dimensional convolution computations, further improving computational efficiency. In an embodiment, every kernel of the merged convolutional layer can undergo this decomposition.
The embodiments of the present invention can train and optimize a model configured with an intermediate layer for convolution computation based on the two images before and after resolution reduction, finally obtaining an image processing model capable of N-fold super-resolution. With this image processing model, generated through a special structure and training method, on the one hand images and video frames can be super-resolved very quickly, with clearly improved super-resolution speed and efficiency; on the other hand, super-resolution based on this image processing model is also more accurate and stable.
Referring now to FIG. 10, a schematic flowchart of an image processing method according to an embodiment of the present invention. The method of this embodiment may likewise be performed by a computing device, the computing device being a server or smart terminal capable of providing image or video services; the method of this embodiment includes the following steps.
S1001: Obtain the target image data of the to-be-processed image. The target image data may correspond to a single image requiring super-resolution, or to a video frame of a video that is to be played to the user and requires super-resolution. The target image data may refer to the representation values corresponding to the pixels at the various positions of the to-be-processed image.
Requiring super-resolution may mean that the user has selected the corresponding super-resolution function on a user interface for displaying images. For example, an ultra-high-definition display button is provided on the user interface; if this button is clicked, the resolution associated with it is taken as the resolution threshold, and during image playback, if the resolution of an image to be played is below this threshold, it is taken as a to-be-processed image and its target image data is obtained. That is, before S1001, the method may further include: if a to-be-processed image is received, determining the resolution of the to-be-processed image; and if the determined resolution is below the resolution threshold, performing the step of obtaining the target image data of the to-be-processed image, where the resolution threshold is configured and determined on a user interface for displaying images. The user interface may be an interface for browsing images or an interface for playing videos; the to-be-processed image may be an image awaiting display or a video frame of a video awaiting playback.
When transferring images end to end, the resolution of the transferred data can be determined from the data transmission resource information between the ends, for example the available bandwidth resources and transmission rate. If transmission resources are ample, the sending end can super-resolve the low-resolution image (below the resolution threshold) before transmitting it to the image receiving end; if transmission resources are not ample, it sends the image below the resolution threshold directly, and may even reduce the resolution of a high-resolution image before sending it to the image receiving end. In an embodiment, the data transmission resource information between the sending end and the image receiving end can be obtained; if the data transmission resource information satisfies the restriction condition, the to-be-processed image is sent to the image receiving end; if it does not, S1001 is performed, so that the generated target image with an N-fold resolution increase corresponding to the to-be-processed image is sent to the image receiving end. The data transmission resource information satisfying the restriction condition includes: the bandwidth resource amount being below a preset bandwidth threshold, and/or the data transmission rate being below a preset rate threshold.
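Expressed as a sketch, this decision might look as follows; the threshold value, parameter names, and helper are all hypothetical illustrations of the check, not part of the disclosure.

```python
def choose_payload(image, bandwidth_mbps: float, upscale,
                   threshold_mbps: float = 10.0):
    """Hypothetical helper mirroring the check above: when the restriction
    condition holds (bandwidth below the threshold), send the image as-is;
    otherwise super-resolve it first with `upscale`."""
    if bandwidth_mbps < threshold_mbps:  # restriction condition satisfied
        return image
    return upscale(image)                # ample resources: N-fold SR first
```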
In an embodiment, a corresponding button can also be provided on the video player or another playback application; after the user clicks the button, the video player or playback application first treats the video frames awaiting playback as to-be-processed images, so as to play higher-resolution video to the user. In an embodiment, before the obtaining of the target image data of the to-be-processed image, the method further includes: if a video playback request is received, determining the target video requested by the video playback request; and if the definition of the video frames of the target video is below a preset video playback definition threshold, taking the video frames of the target video, in the playback time order of the target video, one after another as to-be-processed images, and performing the obtaining of the target image data of the to-be-processed image, so as to output the target image with an N-fold resolution increase corresponding to each to-be-processed image.
S1002: Process the target image data through the image processing model to obtain result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2. The image processing model is a preconfigured model capable of N-fold super-resolution; it includes an input layer, an output layer, and an intermediate layer, and the kernel parameters of the intermediate layer are determined after parameter updating based on training images. The training images include a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image. The intermediate layer of the image processing model is determined by updating the kernel parameters of the intermediate layer according to the second image and result data, the result data being channel output data of N*N channels obtained after the pre-update intermediate layer performs convolution computation on the image data of the first image. For the training, optimization, and generation of this image processing model, refer to the relevant descriptions of the embodiments above.
The image processing model may be the model corresponding to the super-resolution factor requested for the to-be-processed image, selected from a preconfigured set of image processing models; multiple image processing models capable of providing super-resolution at different factors are preconfigured in the set.
In an embodiment, before S1001, the method may further include: judging whether the obtained image data of the to-be-processed image consists of RGB values, and if so, obtaining the RGB values as the target image data and selecting three image processing models; S1002 then includes: processing the RGB values through the three image processing models to obtain three sets of N*N-channel result data corresponding to the RGB values, where the first of the three image processing models processes the R values, the second processes the G values, and the third processes the B values. If the obtained image data of the to-be-processed image is not RGB values but YUV values, the Y values are extracted as the target image data, and S1002 includes: processing the Y-channel values through one image processing model to obtain N*N-channel result data. For HSV data, the value component V can likewise be extracted as the target image data; for HSL data, the lightness component L can be extracted as the target image data.
The image data input at the input layer of the image processing model is a two-dimensional matrix. For example, if the to-be-processed image has resolution M*M (say 960*540), the two-dimensional matrix is likewise an M*M (say 960*540) matrix corresponding one to one with the representation values of the pixels of the to-be-processed image: the value in the first row, first column of the matrix is the representation value (such as the value component, or lightness component, or Y-channel value) of the pixel in the first row, first column of the to-be-processed image; the value in the first row, second column corresponds to the representation value of the pixel in the first row, second column; and so on.
In an embodiment, if RGB is used as the representation value, the input image data is one, two, or three two-dimensional matrices; that is, the input image data is any one or more of the two-dimensional matrix of R values, the two-dimensional matrix of G values, and the two-dimensional matrix of B values. Each of the two-dimensional matrices corresponding to RGB undergoes the same super-resolution processing independently.
S1003: Generate, according to the result data, the target image with an N-fold resolution increase corresponding to the to-be-processed image. In the N*N-channel result data, every value is the representation value of one pixel; arranging the values of the result data in a preset order yields a super-resolved image. The preset order may be an arrangement specified for each channel's data; for example, for a 2-fold image processing model, the value in the first row, first column of the first channel becomes the representation value of the pixel in the first row, first column of the super-resolved image; the value in the first row, first column of the second channel becomes that of the pixel in the first row, second column; the value in the first row, first column of the third channel becomes that of the pixel in the second row, first column of the high-resolution image; and the value in the first row, first column of the fourth channel becomes that of the pixel in the second row, second column of the high-resolution image. Simply put, the image processing model's N-fold super-resolution turns one pixel of the original image into N*N pixels; the values of each channel of the result data, arranged in order, constitute the representation values of those N*N pixels, finally arranging into one super-resolved image.
In an embodiment, the target image data may be the Y-channel values of the image or video frame; the subsequent steps super-resolve the image's Y-channel values, while the image's other U-channel and V-channel values can be handled by other super-resolution steps, for example by the relevant interpolation steps. That is, in this embodiment of the present invention, the target image data includes the Y-channel values extracted from the to-be-processed image; one value in the result data represents the Y-channel value of one pixel of the target image; and generating, according to the result data, the target image with an N-fold resolution increase corresponding to the to-be-processed image includes: assembling the Y-channel image data from each value of the N*N-channel result data; interpolating the U-channel values extracted from the first image to obtain the U-channel image data; interpolating the V-channel values extracted from the first image to obtain the V-channel image data; and merging the Y-channel image data, the U-channel image data, and the V-channel image data to obtain the corresponding target image with an N-fold resolution increase.
Table 3 below illustrates the relationship between the time consumed when super-resolution is performed with the image processing model of the embodiments of the present invention and the time consumed by current super-resolution based on a mapping between low-resolution and high-resolution images. The test environment for both: a 16-core Intel E5 v4 2.40 GHz CPU (one processor model), 10 GB of memory, and no graphics processing unit (GPU) in either case.
Table 3: (the comparative timing data appears only as an image in the original publication; the figures are not recoverable here)
Table 3 shows that, with the embodiments of the present invention, images and video frames can be super-resolved very quickly, with clearly improved super-resolution speed and efficiency, and the super-resolution is also more accurate and stable.
The relevant apparatuses and devices of the embodiments of the present invention are described in detail below.
Referring to FIG. 11, a schematic structural diagram of an apparatus for optimizing an image processing model according to an embodiment of the present invention. The apparatus of this embodiment may be provided in a server or smart terminal capable of providing various image services, and includes the following structures:
a generation module 1101, configured to generate, based on a convolutional neural network, an initial model for image resolution processing, the initial model including an input layer, an output layer, and an intermediate layer;
an acquisition module 1102, configured to obtain training images, the training images including a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
a computation module 1103, configured to input the image data of the first image from the input layer into the intermediate layer for convolution computation, and obtain the result data of the convolution computation from the output layer, the result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
a processing module 1104, configured to update, according to the result data and the second image, the convolution kernel parameters in the intermediate layer of the initial model, and generate an image processing model according to the parameter-updated initial model.
In an embodiment, when updating, according to the result data and the second image, the convolution kernel parameters in the intermediate layer of the initial model, the processing module 1104 is configured to determine the differences of the representation values of the pixels at the same image positions between the target image and the second image, the target image being an image determined from the result data, and to update the kernel parameters of the intermediate layer of the initial model according to the differences of the representation values.
In an embodiment, the intermediate layer includes at least two convolutional layers, directly connected to each other; when generating the image processing model according to the parameter-updated initial model, the processing module 1104 is configured to perform convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model to obtain a merged convolutional layer, and to generate the image processing model from the input layer, the output layer, and the merged convolutional layer.
In an embodiment, the intermediate layer includes a first convolutional layer and a second convolutional layer; when performing convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model to obtain a merged convolutional layer, the processing module 1104 is configured to obtain the merged-convolutional-layer parameters from the convolutional-layer parameters of the first convolutional layer and of the second convolutional layer included in the intermediate layer of the parameter-updated initial model, determine a merged initial convolutional layer whose kernel size equals the value indicated by the merged-convolutional-layer parameters, and obtain the merged convolutional layer from the merged initial convolutional layer; the merged-convolutional-layer parameters include: kernel height h = h1 + h2 - 1 and kernel width w = w1 + w2 - 1, where h1 is the height and w1 the width of each kernel set for the first convolutional layer, and h2 is the height and w2 the width of each kernel set for the second convolutional layer.
In an embodiment, the intermediate layer includes a first convolutional layer and a second convolutional layer; when performing convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model to obtain a merged convolutional layer, the processing module 1104 is configured to represent the kernels of the first convolutional layer and the kernels of the second convolutional layer included in the intermediate layer of the parameter-updated initial model each by a kernel matrix, perform the convolution-merging computation using the matrix multiplication corresponding to the kernel matrices to obtain the merged kernels, obtain a merged initial convolutional layer from the computed merged kernels, and obtain the merged convolutional layer from the merged initial convolutional layer; the kernel at row x, column y of the merged kernels is the kernel obtained by summing the new kernels produced by convolving, one by one, each element of row x of the kernel matrix corresponding to the first convolutional layer with each element of column y of the kernel matrix corresponding to the second convolutional layer, where representing by a kernel matrix means that every element of a kernel matrix corresponds to one kernel of the first or second convolutional layer.
In an embodiment, when generating the image processing model from the input layer, the output layer, and the merged convolutional layer, the processing module 1104 is configured to decompose the kernel parameters of the merged convolutional layer into one row of parameters and one column of parameters, and generate the image processing model from the input layer, the output layer, and the decomposed row of parameters and column of parameters.
In an embodiment, the target image refers to: the image assembled from the values of the channel output data included in the result data, where one value in each channel's output data in the result data represents the representation value of one pixel of the target image.
In an embodiment, the image data of the first image refers to the Y-channel values extracted from the first image; the target image corresponding to the result data is assembled from Y-channel image data, U-channel image data, and V-channel image data, where the Y-channel image data of the target image is obtained from the result data, the U-channel image data of the target image is obtained by interpolating the U-channel values extracted from the first image, and the V-channel image data of the target image is obtained by interpolating the V-channel values extracted from the first image; one value in each channel's output data in the result data represents the Y-channel value of one pixel of the target image.
In an embodiment, the image data of the first image refers to the Y-channel values extracted from the first image; one value in each channel's output data in the result data represents the Y-channel value of one pixel of the target image; and the difference of representation values refers to the difference of the Y-channel values of the pixels at the same image positions between the target image corresponding to the result data and the second image.
For the specific implementation of each functional module in this embodiment of the present invention, refer to the descriptions of the relevant steps in the embodiments above; details are not repeated here.
The embodiments of the present invention can train and optimize a model configured with an intermediate layer for convolution computation based on the two images before and after resolution reduction, finally obtaining an image processing model capable of N-fold super-resolution; this training approach enables the image processing model to perform super-resolution with high quality and greater accuracy.
Referring now to FIG. 12, a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The apparatus may be provided in a server or smart terminal capable of providing various image services, and may include the following structures:
an acquisition module 1201, configured to obtain the target image data of the to-be-processed image;
a processing module 1202, configured to process the target image data through an image processing model to obtain result data including channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
a generation module 1203, configured to generate, according to the result data, the target image with an N-fold resolution increase corresponding to the to-be-processed image;
where the image processing model includes an input layer, an output layer, and an intermediate layer; the kernel parameters of the intermediate layer are determined after parameter updating based on training images; the training images include a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image; the intermediate layer of the image processing model is obtained by updating the kernel parameters of the intermediate layer according to the second image and result data; and the result data is channel output data of N*N channels obtained after the pre-update intermediate layer performs convolution computation on the image data of the first image.
In an embodiment, the target image data includes the Y-channel values extracted from the to-be-processed image; one value in the result data represents the Y-channel value of one pixel of the target image; when generating, according to the result data, the target image with an N-fold resolution increase corresponding to the to-be-processed image, the generation module 1203 is configured to assemble the Y-channel image data from each value of the N*N-channel result data, interpolate the U-channel values extracted from the first image to obtain the U-channel image data, interpolate the V-channel values extracted from the first image to obtain the V-channel image data, and merge the Y-channel image data, the U-channel image data, and the V-channel image data to obtain the corresponding target image with an N-fold resolution increase.
In an embodiment, the apparatus may further include a monitoring module 1204, configured to determine, if a to-be-processed image is received, the resolution of the to-be-processed image, and, if the determined resolution is below the resolution threshold, trigger the acquisition module 1201 to perform the step of obtaining the target image data of the to-be-processed image, where the resolution threshold is configured and determined on a user interface for displaying images.
In an embodiment, the apparatus may further include a monitoring module 1204 and a sending module 1205. The monitoring module 1204 is configured to obtain the data transmission resource information with the image receiving end; if the data transmission resource information satisfies the restriction condition, trigger the sending module 1205 to send the to-be-processed image to the image receiving end; and if it does not, trigger the acquisition module 1201 to obtain the target image data of the to-be-processed image, so that the generated target image with an N-fold resolution increase corresponding to the to-be-processed image is sent to the image receiving end; the data transmission resource information satisfying the restriction condition includes: the bandwidth resource amount being below a preset bandwidth threshold, and/or the data transmission rate being below a preset rate threshold.
In an embodiment, the apparatus may further include a monitoring module 1204, configured to determine, if a video playback request is received, the target video requested by the video playback request, and, if the definition of the video frames of the target video is below the video playback definition threshold, take the video frames of the target video, in the playback time order of the target video, one after another as to-be-processed images, and trigger the acquisition module 1201 to perform the step of obtaining the target image data of the to-be-processed image, so as to output the target image with an N-fold resolution increase corresponding to each to-be-processed image.
For the specific implementation of each functional module in this embodiment of the present invention, refer to the descriptions of the relevant steps in the embodiments above; details are not repeated here.
Based on the image processing model generated through a special structure and training method, the embodiments of the present invention can, on the one hand, super-resolve images (for example, various pictures, video frames, and so on) very quickly, with clearly improved super-resolution speed and efficiency; on the other hand, super-resolution based on this image processing model is also more accurate and stable.
Referring now to FIG. 13, a schematic structural diagram of an image processing device according to an embodiment of the present invention. The image processing device may include, as needed, a power supply module, a housing, the required data interface 1302 and network interface 1304, and so on, and further includes a processor 1301 and a storage apparatus 1303.
Through the data interface 1302 and the network interface 1304, the image processing device can exchange data with servers or smart terminals. In this embodiment of the present invention, various high-resolution images can be received through the data interface 1302 from servers or administrative users as the second images of the training images, and more high-resolution images can be obtained over the network through the network interface 1304 as the second images of the training images; the image processing device stores these training images in the storage apparatus 1303 so that the initial model can be trained to obtain the image processing model.
The storage apparatus 1303 may include a volatile memory, for example a random-access memory (RAM); the storage apparatus 1303 may also include a non-volatile memory, for example a flash memory or a solid-state drive (SSD); the storage apparatus 1303 may also include a combination of the above kinds of memory.
The processor 1301 may be a central processing unit (CPU). The processor 1301 may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like; the PLD may be a field-programmable gate array (FPGA), generic array logic (GAL), or the like.
The storage apparatus 1303 stores computer program instructions, and the processor 1301 can invoke these computer program instructions to implement the above-mentioned method steps related to training and optimizing the image processing model.
In an embodiment, the storage apparatus 1303 is configured to store computer program instructions; the processor 1301 invokes the computer program instructions stored in the storage apparatus 1303 to perform the steps of the method for optimizing an image processing model shown in FIG. 8.
For the specific implementation of the processor 1301 in this embodiment of the present invention, refer also to the descriptions of the relevant steps of the other method embodiments above; details are not repeated here.
The embodiments of the present invention can train and optimize a model configured with an intermediate layer for convolution computation based on the two images before and after resolution reduction, finally obtaining an image processing model capable of N-fold super-resolution; this training approach enables the image processing model to perform super-resolution with high quality and greater accuracy.
Referring now to FIG. 14, a schematic structural diagram of another image processing device according to an embodiment of the present invention. The image processing device may include, as needed, a power supply module, a housing, the various required data interfaces 1403 and network interfaces 1404, and so on, and further includes a processor 1401 and a storage apparatus 1402.
Through the network interface 1404, the image processing device can join a network as a server and connect to smart terminals, or join a network as a smart terminal and connect to servers. When the image processing device serves as a server, through the network interface 1404 it can send, on demand, various low-resolution images or videos, or images or videos obtained by super-resolving low-resolution images, to smart terminals with image playback needs; through the data interface 1403 it can receive all types of picture or video data transmitted by image-administration users or other servers and store them in the storage apparatus 1402 for use.
When the image processing device is a smart terminal, through the network interface 1404 it can connect to application servers providing various image or video services, obtaining low-resolution images and videos, or high-resolution images and videos obtained by super-resolving low-resolution ones; through the data interface 1403 it can receive low-resolution images and videos transmitted by users or other smart terminals, or obtain high-resolution images and videos obtained by super-resolving low-resolution ones. The image processing device may further include a user interface 1405, for example a touchscreen, physical keys, voice input, image input, and other interfaces, so as to receive user operations and present videos or images to the user.
The storage apparatus 1402 may include a volatile memory, for example a RAM; the storage apparatus 1402 may also include a non-volatile memory, for example an SSD; the storage apparatus 1402 may also include a combination of the above kinds of memory.
The processor 1401 may be a CPU. The processor 1401 may further include a hardware chip, which may be an ASIC, a PLD, or the like; the PLD may be an FPGA, a GAL, or the like.
The storage apparatus 1402 stores computer program instructions, and the processor 1401 can invoke these computer program instructions to implement the above-mentioned method steps of processing images based on the image processing model.
In an embodiment, the storage apparatus 1402 is configured to store computer program instructions; the processor 1401 invokes the computer program instructions stored in the storage apparatus 1402 to perform the steps of the image processing method shown in FIG. 10.
For the specific implementation of the processor 1401 in this embodiment of the present invention, refer also to the descriptions of the relevant steps of the other method embodiments above; details are not repeated here.
Based on the image processing model generated through a special structure and training method, the embodiments of the present invention can, on the one hand, super-resolve images (for example, various pictures, video frames, and so on) very quickly, with clearly improved super-resolution speed and efficiency; on the other hand, super-resolution based on this image processing model is also more accurate and stable.
A person of ordinary skill in the art can understand that all or part of the procedures of the methods of the above embodiments can be accomplished by a computer program instructing the related hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the procedures of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
What is disclosed above is merely some embodiments of the present invention, which certainly cannot be used to limit the scope of rights of the present invention; a person of ordinary skill in the art can understand all or part of the procedures for implementing the above embodiments, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.

Claims (19)

  1. A method for generating an image processing model, performed by a computing device, comprising:
    generating, based on a convolutional neural network, an initial model for image resolution processing, the initial model comprising an input layer, an output layer, and an intermediate layer;
    obtaining training images, the training images comprising a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
    inputting image data of the first image from the input layer into the intermediate layer for convolution computation, and obtaining result data of the convolution computation from the output layer, the result data comprising channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
    updating, according to the result data and the second image, convolution kernel parameters in the intermediate layer of the initial model, and generating an image processing model according to the parameter-updated initial model.
  2. The method according to claim 1, wherein the updating, according to the result data and the second image, the convolution kernel parameters in the intermediate layer of the initial model comprises:
    determining differences of representation values of pixels at the same image positions between a target image and the second image, the target image being an image determined from the result data; and
    updating the convolution kernel parameters in the intermediate layer of the initial model according to the differences of the representation values.
  3. The method according to claim 1, wherein the intermediate layer comprises at least two convolutional layers, the convolutional layers being directly connected to each other, and the generating an image processing model according to the parameter-updated initial model comprises:
    performing convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model to obtain a merged convolutional layer; and
    generating the image processing model from the input layer, the output layer, and the merged convolutional layer.
  4. The method according to claim 3, wherein the performing convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model to obtain a merged convolutional layer comprises:
    obtaining merged-convolutional-layer parameters from convolutional-layer parameters of a first convolutional layer and of a second convolutional layer comprised in the intermediate layer of the parameter-updated initial model;
    determining a merged initial convolutional layer, wherein the size of the convolution kernels of the merged initial convolutional layer equals the value indicated by the merged-convolutional-layer parameters; and
    obtaining the merged convolutional layer from the merged initial convolutional layer;
    wherein the merged-convolutional-layer parameters comprise: a kernel height h = h1 + h2 - 1 and a kernel width w = w1 + w2 - 1, where h1 is the height and w1 the width of each kernel set for the first convolutional layer, and h2 is the height and w2 the width of each kernel set for the second convolutional layer.
  5. The method according to claim 3, wherein the performing convolutional-layer merging on at least two convolutional layers of the intermediate layer of the parameter-updated initial model to obtain a merged convolutional layer comprises:
    representing the convolution kernels of the first convolutional layer and the convolution kernels of the second convolutional layer comprised in the intermediate layer of the parameter-updated initial model each by a kernel matrix, and performing convolution-merging computation using the matrix multiplication corresponding to the kernel matrices, to obtain merged convolution kernels;
    obtaining a merged initial convolutional layer from the computed merged convolution kernels; and
    obtaining the merged convolutional layer from the merged initial convolutional layer;
    wherein the kernel at row x, column y of the merged convolution kernels is the kernel obtained by summing the new kernels produced by convolving, one by one, each element of row x of the kernel matrix corresponding to the first convolutional layer with each element of column y of the kernel matrix corresponding to the second convolutional layer, and representing by a kernel matrix means that every element of a kernel matrix corresponds to one convolution kernel of the first convolutional layer or the second convolutional layer.
  6. The method according to any one of claims 3 to 5, wherein the generating the image processing model from the input layer, the output layer, and the merged convolutional layer comprises:
    decomposing the kernel parameters of the merged convolutional layer into one row of parameters and one column of parameters; and
    generating the image processing model from the input layer, the output layer, and the decomposed row of parameters and column of parameters.
  7. The method according to claim 1 or 2, wherein the target image refers to: an image assembled from the values of the channel output data comprised in the result data, and one value in each channel's output data in the result data represents the representation value of one pixel of the target image.
  8. The method according to claim 1 or 2, wherein the image data of the first image refers to: Y-channel values extracted from the first image; and the target image corresponding to the result data is assembled from Y-channel image data, U-channel image data, and V-channel image data;
    wherein the Y-channel image data of the target image is obtained from the result data, the U-channel image data of the target image is obtained by interpolating U-channel values extracted from the first image, and the V-channel image data of the target image is obtained by interpolating V-channel values extracted from the first image; and
    wherein one value in each channel's output data in the result data represents the Y-channel value of one pixel of the target image.
  9. The method according to claim 1 or 2, wherein the image data of the first image refers to: Y-channel values extracted from the first image; one value in each channel's output data in the result data represents the Y-channel value of one pixel of the target image; and
    the difference of representation values refers to: the difference of the Y-channel values of pixels at the same image positions between the target image corresponding to the result data and the second image.
  10. An image processing method, performed by a computing device, comprising:
    obtaining target image data of a to-be-processed image;
    processing the target image data through an image processing model to obtain result data comprising channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
    generating, according to the result data, a target image with an N-fold resolution increase corresponding to the to-be-processed image;
    wherein the image processing model comprises an input layer, an output layer, and an intermediate layer; convolution kernel parameters in the intermediate layer are determined after parameter updating based on training images; the training images comprise a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image; the intermediate layer of the image processing model is obtained by updating the convolution kernel parameters of the intermediate layer according to the second image and result data; and the result data is channel output data of N*N channels obtained after the pre-update intermediate layer performs convolution computation on the image data of the first image.
  11. The method according to claim 10, wherein the target image data comprises Y-channel values extracted from the to-be-processed image, and one value in the result data represents the Y-channel value of one pixel of the target image;
    the generating, according to the result data, the target image with an N-fold resolution increase corresponding to the to-be-processed image comprises:
    assembling Y-channel image data from each value of the N*N-channel result data;
    interpolating U-channel values extracted from the first image to obtain U-channel image data;
    interpolating V-channel values extracted from the first image to obtain V-channel image data; and
    merging the Y-channel image data, the U-channel image data, and the V-channel image data to obtain the corresponding target image with an N-fold resolution increase.
  12. The method according to claim 10 or 11, before the obtaining target image data of a to-be-processed image, further comprising:
    if a to-be-processed image is received, determining the resolution of the to-be-processed image; and
    if the determined resolution is below a resolution threshold, performing the step of obtaining the target image data of the to-be-processed image;
    wherein the resolution threshold is configured and determined on a user interface for displaying images.
  13. The method according to claim 10 or 11, before the obtaining target image data of a to-be-processed image, further comprising:
    obtaining data transmission resource information with an image receiving end;
    if the data transmission resource information satisfies a restriction condition, sending the to-be-processed image to the image receiving end; and
    if the data transmission resource information does not satisfy the restriction condition, performing the step of obtaining the target image data of the to-be-processed image, so that the generated target image with an N-fold resolution increase corresponding to the to-be-processed image is sent to the image receiving end;
    wherein the data transmission resource information satisfying the restriction condition comprises: a bandwidth resource amount being below a preset bandwidth threshold, and/or a data transmission rate being below a preset rate threshold.
  14. The method according to claim 10 or 11, before the obtaining target image data of a to-be-processed image, further comprising:
    if a video playback request is received, determining the target video requested by the video playback request; and
    if the definition of the video frames of the target video is below a video playback definition threshold, taking the video frames of the target video, in the playback time order of the target video, one after another as to-be-processed images, and performing the step of obtaining the target image data of the to-be-processed image, so as to output the target image with an N-fold resolution increase corresponding to each to-be-processed image.
  15. An apparatus for generating an image processing model, comprising:
    a generation module, configured to generate, based on a convolutional neural network, an initial model for image resolution processing, the initial model comprising an input layer, an output layer, and an intermediate layer;
    an acquisition module, configured to obtain training images, the training images comprising a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image;
    a computation module, configured to input image data of the first image from the input layer into the intermediate layer for convolution computation, and obtain result data of the convolution computation from the output layer, the result data comprising channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
    a processing module, configured to update, according to the result data and the second image, convolution kernel parameters in the intermediate layer of the initial model, and generate an image processing model according to the parameter-updated initial model.
  16. An image processing apparatus, comprising:
    an acquisition module, configured to obtain target image data of a to-be-processed image;
    a processing module, configured to process the target image data through an image processing model to obtain result data comprising channel output data of N*N channels, N being a positive integer greater than or equal to 2; and
    a generation module, configured to generate, according to the result data, a target image with an N-fold resolution increase corresponding to the to-be-processed image;
    wherein the image processing model comprises an input layer, an output layer, and an intermediate layer; convolution kernel parameters in the intermediate layer are determined after parameter updating based on training images; the training images comprise a first image and a second image, the first image being an image obtained by performing N-fold resolution reduction on the second image; the intermediate layer of the image processing model is obtained by updating the convolution kernel parameters of the intermediate layer according to the second image and result data; and the result data is channel output data of N*N channels obtained after the pre-update intermediate layer performs convolution computation on the image data of the first image.
  17. An image processing device, comprising a processor and a storage apparatus;
    the storage apparatus being configured to store computer program instructions; and
    the processor being configured to invoke the computer program instructions stored in the storage apparatus, to implement the method for generating an image processing model according to any one of claims 1 to 9.
  18. An image processing device, comprising a processor and a storage apparatus;
    the storage apparatus being configured to store computer program instructions; and
    the processor being configured to invoke the computer program instructions stored in the storage apparatus, to implement the image processing method according to any one of claims 10 to 14.
  19. A computer storage medium, storing computer program instructions which, when executed, are used to implement the method for generating an image processing model according to any one of claims 1 to 9, or to implement the image processing method according to any one of claims 10 to 14.
Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105744357A (zh) * 2016-02-29 2016-07-06 哈尔滨超凡视觉科技有限公司 一种基于在线分辨率提升的降低网络视频带宽占用方法
US20170256033A1 (en) * 2016-03-03 2017-09-07 Mitsubishi Electric Research Laboratories, Inc. Image Upsampling using Global and Local Constraints
CN107220934A (zh) * 2017-05-15 2017-09-29 北京小米移动软件有限公司 图像重建方法及装置
CN107240066A (zh) * 2017-04-28 2017-10-10 天津大学 基于浅层和深层卷积神经网络的图像超分辨率重建算法
CN107578377A (zh) * 2017-08-31 2018-01-12 北京飞搜科技有限公司 一种基于深度学习的超分辨率图像重建方法及系统
CN108259997A (zh) * 2018-04-02 2018-07-06 腾讯科技(深圳)有限公司 图像相关处理方法及装置、智能终端、服务器、存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9237297B1 (en) * 2010-12-06 2016-01-12 Kenneth M. Waddell Jump view interactive video system
US9552438B2 (en) * 2013-05-17 2017-01-24 Paypal, Inc. Systems and methods for responsive web page delivery based on network bandwidth
US9212851B2 (en) * 2014-02-10 2015-12-15 Honeywell International Inc. Multi resolution, hierarchical radiance field estimation
CN105898565A (zh) * 2016-04-28 2016-08-24 乐视控股(北京)有限公司 一种视频处理方法及设备
CN106228512A (zh) * 2016-07-19 2016-12-14 北京工业大学 基于学习率自适应的卷积神经网络图像超分辨率重建方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105744357A (zh) * 2016-02-29 2016-07-06 哈尔滨超凡视觉科技有限公司 一种基于在线分辨率提升的降低网络视频带宽占用方法
US20170256033A1 (en) * 2016-03-03 2017-09-07 Mitsubishi Electric Research Laboratories, Inc. Image Upsampling using Global and Local Constraints
CN107240066A (zh) * 2017-04-28 2017-10-10 天津大学 基于浅层和深层卷积神经网络的图像超分辨率重建算法
CN107220934A (zh) * 2017-05-15 2017-09-29 北京小米移动软件有限公司 图像重建方法及装置
CN107578377A (zh) * 2017-08-31 2018-01-12 北京飞搜科技有限公司 一种基于深度学习的超分辨率图像重建方法及系统
CN108259997A (zh) * 2018-04-02 2018-07-06 腾讯科技(深圳)有限公司 图像相关处理方法及装置、智能终端、服务器、存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3748572A4 *
