CN112950465A - Video super-resolution processing method, video super-resolution processing device and storage medium - Google Patents


Info

Publication number
CN112950465A
CN112950465A (application number CN202110104274.5A)
Authority
CN
China
Prior art keywords
processed
resolution
processing
video
frame picture
Prior art date
Legal status
Pending
Application number
CN202110104274.5A
Other languages
Chinese (zh)
Inventor
常群
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority: CN202110104274.5A
Publication: CN112950465A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]


Abstract

The disclosure relates to a video super-resolution processing method, a video super-resolution processing device and a storage medium. The video super-resolution processing method comprises the following steps: acquiring a video to be processed, and determining a frame picture to be processed in the video to be processed; if the resolution of the frame picture to be processed is greater than a resolution threshold, determining a first processing area in the frame picture to be processed, and performing super-resolution processing on the first processing area based on a deep learning model, wherein the first processing area is a partial area of the frame picture to be processed. Through the embodiments of the disclosure, the super-resolution effect on the video to be processed is ensured while the real-time requirement of video super-resolution is met, achieving a balance between super-resolution quality and real-time performance.

Description

Video super-resolution processing method, video super-resolution processing device and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video super-resolution processing method, a video super-resolution processing apparatus, and a storage medium.
Background
With the development of technology, high-resolution video brings users a clear and comfortable visual experience in fields such as medical imaging, video surveillance and remote sensing, and related research has attracted wide attention. In practice, however, owing to limitations of imaging equipment, atmospheric disturbance, scene motion and other factors, the actually acquired video often has low resolution, which complicates subsequent video processing and analysis and makes it difficult to meet users' requirements.
The basic task of super-resolution reconstruction is to reconstruct a corresponding high-resolution image or video from an original low-resolution image or video. As a means of improving resolution, super-resolution reconstruction has become a research hotspot in the field of image processing.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video super-resolution processing method, a video super-resolution processing apparatus, and a storage medium.
According to an aspect of the embodiments of the present disclosure, a video super-resolution processing method is provided, including: acquiring a video to be processed, and determining a frame picture to be processed in the video to be processed; if the resolution of the frame picture to be processed is greater than a resolution threshold, determining a first processing area in the frame picture to be processed, and performing super-resolution processing on the first processing area based on a deep learning model, wherein the first processing area is a partial area of the frame picture to be processed.
In an embodiment, the video super-resolution processing method further includes: and if the resolution of the frame picture to be processed is smaller than the resolution threshold, performing super-resolution processing on the whole area of the frame picture to be processed based on the deep learning model.
In an embodiment, determining the first processing area in the frame picture to be processed comprises: determining, as the first processing area, an area within a set range centered on the geometric center of the frame picture to be processed; and/or determining, as the first processing area, the area formed by pixel points whose pixel values at the same positions in adjacent frame pictures to be processed differ by more than a set pixel difference threshold.
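A minimal sketch of the pixel-difference criterion above, using NumPy frame differencing. The threshold of 25 grey levels is purely illustrative (the disclosure does not fix a value), and `motion_mask`/`motion_bbox` are hypothetical helper names, not functions from the patent:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, diff_threshold=25):
    """Mark pixels whose values differ between adjacent frames by more
    than the set pixel-difference threshold (candidate motion region)."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > diff_threshold

def motion_bbox(mask):
    """Bounding box (top, left, bottom, right) enclosing the motion
    region, or None when no pixel exceeds the threshold."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

A small object appearing between two otherwise identical frames would yield a tight box around it, which could then serve as the first processing area.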
In an embodiment, before performing the super-resolution processing, the video super-resolution processing method further includes: and carrying out deblurring processing on the frame picture to be processed.
In an embodiment, the deblurring processing on the frame picture to be processed includes: and performing deblurring processing on the frame picture to be processed based on the deep learning model.
In an embodiment, the video super-resolution processing method further includes: performing super-resolution processing on a second processing area of the frame picture to be processed based on an image smoothing method, wherein the second processing area is the area of the frame picture to be processed other than the first processing area.
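The smoothing-based processing of the second area is not specified further in the text. As an assumed stand-in, the nearest-neighbour upscale below illustrates the kind of cheap, non-deep-learning interpolation that could be applied there; `interpolate_upscale` is a hypothetical helper, not the patent's method:

```python
import numpy as np

def interpolate_upscale(region, scale=2):
    """Cheap interpolation-based upscaling (nearest neighbour here) for
    the second processing area, where no deep model is applied: every
    pixel is simply repeated `scale` times along each axis."""
    return np.repeat(np.repeat(region, scale, axis=0), scale, axis=1)
```

In a full pipeline, the deep-model output for the first area and this cheap output for the second area would be composited back into one high-resolution frame.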
According to still another aspect of the embodiments of the present disclosure, there is provided a video super-resolution processing apparatus including: an acquisition module configured to acquire a video to be processed; a determining module configured to determine a frame picture to be processed in the video to be processed and, if the resolution of the frame picture to be processed is greater than a resolution threshold, determine a first processing area in the frame picture to be processed, where the first processing area is a partial area of the video frame picture; and a super-resolution processing module configured to perform super-resolution processing on the first processing area based on a deep learning model.
In one embodiment, the super-resolution processing module is further configured to: when the resolution of the frame picture to be processed is smaller than the resolution threshold, perform super-resolution processing on the frame picture to be processed based on the deep learning model.
In an embodiment, the first processing area is a motion area and/or a central area of the video frame picture, and the determining module is further configured to: determine the area where the geometric center of the video frame picture is located as the first processing area; and determine, as the first processing area, the area formed by pixel positions whose pixel values in adjacent video frame pictures of the video to be processed differ by more than a set threshold.
In one embodiment, the video super-resolution processing apparatus further includes: and the deblurring processing module is used for deblurring the frame picture to be processed.
In an embodiment, the deblurring processing module performs deblurring processing on the frame picture to be processed by adopting the following method: and performing deblurring processing on the frame picture to be processed based on the deep learning model.
In one embodiment, the super-resolution processing module is further configured to: perform super-resolution processing on a second processing area of the frame picture to be processed based on an image smoothing method, wherein the second processing area is the area of the frame picture to be processed other than the first processing area.
According to another aspect of the embodiments of the present disclosure, there is provided a video super-resolution processing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: performing the video super-resolution processing method of any one of the preceding claims.
According to yet another aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions stored thereon, which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the video super-resolution processing method of any one of the preceding claims.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: when the resolution of the frame picture to be processed is greater than the resolution threshold, the first processing area in the frame picture to be processed is subjected to super-resolution processing based on the deep learning model. This ensures the super-resolution effect on the video to be processed while meeting the real-time requirement of video super-resolution, achieving a balance between super-resolution quality and real-time performance.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a video super-resolution processing method according to yet another exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a video super-resolution processing method according to yet another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a video super-resolution processing method according to yet another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a video super-resolution processing method according to yet another exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart illustrating a video super-resolution processing method according to yet another exemplary embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment of the present disclosure.
Fig. 8 is an application scenario and effect diagram illustrating a video super-resolution processing method according to an exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram illustrating a video super-resolution processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram illustrating a video super-resolution processing apparatus according to still another exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram illustrating an apparatus for video super-resolution processing according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With the advent of high-definition display devices and ultra-high-definition video formats, high-resolution video brings users a clearer and more comfortable viewing experience, and the demand for reconstructing high-resolution video from low-resolution video keeps growing. Video super-resolution is the technique of reconstructing a high-resolution video from a given low-resolution video, and it is widely applied in high-definition television, satellite imaging, video surveillance and other fields.
A conventional super-resolution method calculates the unknown pixel values of the high-resolution image from a given low-resolution input. Such a method requires little computation and is very fast, but its reconstruction quality is poor, especially for images containing a large amount of high-frequency information. Single-image super-resolution reconstructs a corresponding high-resolution image from one low-resolution image; video super-resolution, by contrast, reconstructs high-resolution video frames from several related low-resolution frames. When the frame rate of the video to be processed is high, for example 24 or 30 frames per second or more, a video super-resolution algorithm involves a large amount of computation and complex steps, the played video stutters noticeably, and the user experience suffers.
Based on the above, the present disclosure provides a video super-resolution processing method in which, when the resolution of the frame to be processed is greater than a resolution threshold, the frame is processed region by region and deep-learning-based super-resolution is applied only to the determined region, so that the super-resolution effect on the video to be processed is ensured while the real-time requirement of video super-resolution is met.
Fig. 1 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment of the present disclosure, and referring to fig. 1, the video super-resolution processing method includes the following steps.
In step S101, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined.
In step S102, if the resolution of the frame picture to be processed is greater than the resolution threshold, a first processing region in the frame picture to be processed is determined, and the first processing region is subjected to a super-resolution process based on the deep learning model.
In the embodiment of the present disclosure, the video to be processed may be an original video acquired by an image acquisition device such as a camera within its field of view, a video obtained by preprocessing such an original video, or a video downloaded over a network. Determining a frame picture to be processed means determining, as the frame picture to be processed, a video frame obtained by parsing the video to be processed; this may be several consecutive frames taken together as a reference group. For example, the frame picture to be processed may be the i-th frame of the video to be processed together with the adjacent (i-1)-th and (i+1)-th frames, or together with the (i-2)-th, (i-1)-th, (i+1)-th and (i+2)-th frames, and so on. Corresponding high-resolution frames are then reconstructed from these several related frames to be processed, exploiting the temporal correlation between adjacent low-resolution frames in addition to the spatial correlation within a single image.
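The neighbourhood of frames described above (i-1, i, i+1, and optionally a wider window) can be sketched as a simple slice. `frame_window` is a hypothetical helper, and the clamping at the first and last frames is an assumption the text does not address:

```python
def frame_window(frames, i, radius=1):
    """Return the reference frame i together with its neighbours
    (e.g. frames i-1 and i+1 for radius=1), clamped at the video
    boundaries so the window never indexes outside the sequence."""
    lo = max(0, i - radius)
    hi = min(len(frames), i + radius + 1)
    return frames[lo:hi]
```

With radius=2 the window corresponds to the five-frame example in the text.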
The resolution threshold may be preset, and the resolution of the frame picture to be processed is compared with it. It is to be understood that the resolution threshold may be set to different values according to different super-resolution processing capabilities; the embodiments of the present disclosure do not limit its specific value. In the embodiment of the disclosure, the first processing area is a partial area of the video frame picture: when the resolution of the frame to be processed is greater than the resolution threshold, the first processing area is determined and subjected to super-resolution processing based on a deep learning model. The time a deep-learning super-resolution network takes to generate a super-resolved image depends on the resolution of the input frame. For the same network, generating a super-resolved image from a 360P frame takes roughly 1/3 to 1/4 of the time needed for a 720P frame. Therefore, when the resolution of the frame to be processed is large, i.e. greater than the set resolution threshold, the deep-learning super-resolution model is applied only to a partial area of the frame.
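The threshold dispatch just described might look like the sketch below. The 720P pixel-count threshold is purely illustrative, since the disclosure deliberately leaves the value open, and `choose_sr_strategy` is a hypothetical name:

```python
def choose_sr_strategy(width, height, threshold_pixels=1280 * 720):
    """Per the method: frames above the resolution threshold get deep
    super-resolution only on a partial (first) region; smaller frames
    are processed whole. The 720p threshold is an assumed example."""
    if width * height > threshold_pixels:
        return "partial-region"
    return "full-frame"
```

A 1080p frame would thus be routed to partial-region processing, while a 360p frame would be super-resolved in full.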
In the embodiment of the present disclosure, a deep learning model for video super-resolution processing may be trained in advance, and its network structure may be a basic structure such as a super-resolution generative adversarial network (SRGAN). To achieve a better processing effect, the frame picture to be processed may consist of several adjacent video frames, which are fed together as the input of the neural network. The model extracts features from the input frames through several convolutions, expands the feature maps to twice their original size through an up-sampling operation, and then generates the super-resolved image through a multilayer convolution network with deconvolution. The network structure of the deep model may also be another super-resolution generation network.
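The up-sampling step that doubles the feature size is commonly implemented in super-resolution networks as a depth-to-space (pixel-shuffle) rearrangement. The NumPy version below is only a sketch of that one operation under that assumption, not the disclosure's network:

```python
import numpy as np

def pixel_shuffle(features, scale=2):
    """Depth-to-space rearrangement: C*scale^2 feature channels of size
    HxW become C channels of size (H*scale)x(W*scale), the up-sampling
    step commonly used in super-resolution networks."""
    c2, h, w = features.shape
    c = c2 // (scale * scale)
    x = features.reshape(c, scale, scale, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # reorder to (c, h, scale, w, scale)
    return x.reshape(c, h * scale, w * scale)
```

Four 3x3 feature channels thus fold into a single 6x6 output, each 2x2 output block drawing one value from each input channel.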
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, the frame picture to be processed is determined in the acquired video, a partial area of it, namely the first processing area, is determined, and the first processing area is subjected to super-resolution processing based on the deep learning model. This reduces the computation of the super-resolution algorithm on high-resolution video, ensures the super-resolution effect on the video to be processed, meets the real-time requirement of video super-resolution, and achieves a balance between super-resolution quality and real-time performance.
Fig. 2 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment, and referring to fig. 2, the video super-resolution processing method includes the following steps.
In step S201, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined.
In step S202, if the resolution of the frame picture to be processed is greater than the resolution threshold, a first processing region in the frame picture to be processed is determined, and the first processing region is subjected to a super-resolution process based on the deep learning model.
In step S203, if the resolution of the frame picture to be processed is smaller than the resolution threshold, the entire region of the frame picture to be processed is subjected to the super-resolution processing based on the deep learning model.
In the embodiment of the disclosure, a video to be processed is acquired, and a frame picture to be processed in it is determined. The first processing area is a partial area of the frame picture to be processed: when the resolution of the frame is greater than the resolution threshold, the first processing area is determined and subjected to super-resolution processing based on the deep learning model. That is, for frames above the threshold, the deep-learning super-resolution model is applied only to the first processing area of the frame.
The time a deep-learning super-resolution network takes to generate a super-resolved image depends on the resolution of the input frame. For the same network, generating a super-resolved image from a 360P frame takes roughly 1/3 to 1/4 of the time needed for a 720P frame. Therefore, when the resolution of the frame to be processed is small, i.e. smaller than the set resolution threshold, applying the deep-learning super-resolution model to the whole frame does not incur excessive computation cost or slow down the super-resolution processing.
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, a partial area of the frame, namely the first processing area, is determined and subjected to super-resolution processing based on the deep learning model; when the resolution is smaller than the set threshold, the deep-learning super-resolution model is applied to the whole frame. This reduces the computation of the super-resolution algorithm on high-resolution video and ensures the super-resolution effect on the video to be processed while meeting the real-time requirement of video super-resolution.
Fig. 3 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment, and referring to fig. 3, the video super-resolution processing method includes the following steps.
In step S301, a video to be processed is acquired, and a frame to be processed in the video to be processed is determined.
In step S302, if the resolution of the frame to be processed is greater than the resolution threshold, an area within a set range centered on the geometric center of the frame to be processed is determined as the first processing area; and/or the area formed by pixel points whose pixel values at the same positions in adjacent frames to be processed differ by more than a set pixel difference threshold is determined as the first processing area.
In step S303, the first processing region is subjected to the super-resolution processing based on the deep learning model.
In the embodiment of the present disclosure, the first processing area is a motion area of the frame picture to be processed and/or a central area of the frame picture to be processed. The central area, as distinct from the background area, may be an area within a set range centered on the geometric center of the frame; its shape may be a regular figure such as a circle, square or rectangle, or another area within the set range. The motion area is the area formed by pixel points whose values at the same positions in adjacent frames of the video to be processed differ by more than the set pixel difference threshold, i.e. the area where moving objects appear in the video. The central area and/or the motion area are the areas of primary interest in video super-resolution processing and contain a large amount of information. For the same frame to be processed, the central area and the motion area may coincide, partially overlap, or not overlap at all.
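A sketch of the central-area computation, assuming the set range is a rectangle covering a fixed fraction of each dimension; the fraction of 0.5 is an illustrative choice (the disclosure allows circles, squares and other shapes), and `central_region` is a hypothetical helper:

```python
def central_region(width, height, fraction=0.5):
    """Rectangle within the set range centered on the frame's geometric
    center; `fraction` of each dimension is an assumed, illustrative
    choice. Returns (left, top, right, bottom) pixel coordinates."""
    rw, rh = int(width * fraction), int(height * fraction)
    left = (width - rw) // 2
    top = (height - rh) // 2
    return left, top, left + rw, top + rh
```

The first processing area could then be this rectangle on its own, or its union with the motion-area bounding box when both criteria are used.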
A video to be processed is acquired, and a frame picture to be processed in it is determined. The first processing area is a partial area of the frame picture to be processed: when the resolution of the frame is greater than the resolution threshold, the central area and/or the motion area of the frame are determined and subjected to super-resolution processing based on the deep learning model. In other words, the deep-learning super-resolution model is applied to the areas of primary interest in the frame to be processed.
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, the central area and/or the motion area of the frame, namely the first processing area, is determined and subjected to super-resolution processing based on the deep learning model. This reduces the computation of the super-resolution algorithm on high-resolution video and ensures the super-resolution effect on the video to be processed while meeting the real-time requirement of video super-resolution.
Fig. 4 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment, and referring to fig. 4, the video super-resolution processing method includes the following steps.
In step S401, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined.
In step S402, if the resolution of the frame picture to be processed is greater than the resolution threshold, a first processing area in the frame picture to be processed is determined.
In step S403, the frame picture to be processed is subjected to deblurring processing, and the first processing region is subjected to super-resolution processing based on the deep learning model.
In the embodiment of the disclosure, a video to be processed is acquired, and a frame picture to be processed in it is determined. When the resolution of the frame is greater than the resolution threshold, a first processing area, a partial area of the frame, is determined and subjected to super-resolution processing based on the deep learning model. That is, the video super-resolution model processes only the first processing area of frames above the threshold.
Before the super-resolution processing, the frame picture to be processed is deblurred. Video often exhibits motion blur caused by moving objects or scene switching; deblurring the frame picture to be processed removes the blur caused by motion, improves the quality of the video frame used as model input, effectively removes interference information, and thus improves the quality of the video super-resolution processing. It can be understood that the deblurring method in the embodiment of the present disclosure may be any existing method, whether based on deep learning or on conventional image optimization; the embodiment of the present disclosure does not limit the deblurring method.
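As a non-limiting illustration of a conventional image-optimization deblurring option (one of the methods the disclosure leaves open), a simple unsharp-mask sharpening pass can be sketched as:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Classical sharpening: subtract a blurred copy of the image
    to amplify edges, then clip back to the valid intensity range."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur built from shifted views of the padded image
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return np.clip(img + amount * (img - blurred), 0, 255)
```

A flat region is left unchanged, while intensities on either side of an edge are pushed apart, which counteracts mild motion blur.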
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, the frame picture to be processed is determined in the acquired video to be processed, a partial region in the frame picture to be processed, namely a first processing region, is determined, and the first processing region is subjected to super-resolution processing based on the deep learning model. Before the super-resolution processing, the frame picture to be processed is deblurred. This improves the quality of the frame picture to be processed, reduces the computation of the super-resolution algorithm on high-resolution video, ensures and further improves the super-resolution processing effect for the video to be processed so that the information contained in the processed video can be acquired more effectively, and at the same time meets the real-time requirement of video super-resolution.
Fig. 5 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment, and referring to fig. 5, the video super-resolution processing method includes the following steps.
In step S501, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined.
In step S502, if the resolution of the frame picture to be processed is greater than the resolution threshold, a first processing area in the frame picture to be processed is determined.
In step S503, the frame picture to be processed is deblurred based on the deep learning model, and the first processing region is subjected to the super-resolution processing.
In the embodiment of the disclosure, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined. When the resolution of the frame picture to be processed is greater than the resolution threshold, a first processing area in the frame picture to be processed is determined, the first processing area being a partial area in the frame picture to be processed. That is, when the resolution of the frame picture to be processed is greater than the resolution threshold, the video super-resolution model processes only the first processing area in the frame picture to be processed.
Before the super-resolution processing, the frame picture to be processed is deblurred. Video often exhibits motion blur caused by moving objects or scene switching; deblurring the frame picture to be processed removes the blur caused by motion, improves the quality of the video frame used as model input, effectively removes interference information, and improves the quality of the video super-resolution processing.
In the embodiment of the present disclosure, in order to further increase the running speed of the video super-resolution processing model, the neural network for deblurring the frame picture to be processed may be combined with the neural network for super-resolving the frame picture to be processed. That is, the first half of the deep learning model deblurs the frame picture to be processed and the second half super-resolves it, and the overall loss function of the model is taken as the weighted sum of the deblurring loss function and the super-resolution loss function.
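The weighted overall loss described above can be sketched as follows; the mean-squared-error terms and the 0.3/0.7 weights are illustrative assumptions, since the disclosure specifies only that a weighted sum of the deblurring loss and the super-resolution loss is taken:

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between a prediction and its reference."""
    return float(np.mean((pred - target) ** 2))

def combined_loss(deblur_out, sharp_ref, sr_out, hr_ref,
                  w_deblur=0.3, w_sr=0.7):
    """Overall training loss of the combined model: a weighted sum of
    the deblurring loss (first half) and the super-resolution loss
    (second half)."""
    return w_deblur * mse(deblur_out, sharp_ref) + w_sr * mse(sr_out, hr_ref)
```

In an actual deep learning framework the two terms would be differentiable tensors, but the weighting structure is the same.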
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, the frame picture to be processed is determined in the acquired video to be processed, a partial region in the frame picture to be processed, namely a first processing region, is determined, and the first processing region is subjected to super-resolution processing based on the deep learning model. Before the super-resolution processing, the frame picture to be processed is deblurred. The neural network for deblurring the frame picture to be processed is combined with the neural network for super-resolving it, so that the output of the deblurring portion of the deep learning model feeds directly into the super-resolution portion. This reduces the computation of the model, increases the running speed of the model, and further improves the video super-resolution effect.
Fig. 6 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment, and referring to fig. 6, the video super-resolution processing method includes the following steps.
In step S601, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined.
In step S602, if the resolution of the frame picture to be processed is greater than the resolution threshold, a first processing region in the frame picture to be processed is determined, and the first processing region is subjected to a super-resolution process based on the deep learning model.
In step S603, a second processing region of the frame picture to be processed is subjected to super-resolution processing based on an image smoothing processing method, where the second processing region is the area other than the first processing region in the frame picture to be processed.
In the embodiment of the disclosure, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined. The first processing area is a partial area in the frame picture to be processed; when the resolution of the frame picture to be processed is greater than a resolution threshold, the first processing area in the frame picture to be processed is determined, and video super-resolution processing is performed on the first processing area based on the deep learning model. That is, when the resolution of the frame picture to be processed is greater than the resolution threshold, the video super-resolution model processes only the first processing area in the frame picture to be processed. In the frame picture to be processed, the area other than the first processing area is taken as the second processing area. The second processing area may be an area that draws little user attention, such as a background area or a non-moving area in the frame picture to be processed; it contributes little information to video super-resolution processing, so super-resolution processing is performed in the second processing area based on an image smoothing processing method (for example, a bicubic interpolation method, or the like). Compared with a processing method based on a deep learning model, processing based on the image smoothing method is faster, which markedly reduces the computation of super-resolution processing on high-resolution video and avoids excessive computational cost.
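As a non-limiting illustration of smoothing-based upscaling for the second processing area, a bilinear interpolation (a lighter-weight relative of the bicubic interpolation mentioned above, shown here to keep the sketch short) can be written as:

```python
import numpy as np

def smooth_upscale(img, scale=2):
    """Bilinear upscaling of a 2-D grayscale image by an integer factor.
    Illustrative stand-in for the image smoothing processing method."""
    h, w = img.shape
    out_h, out_w = h * scale, w * scale
    # Map each output pixel center back to fractional source coordinates
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    img = img.astype(np.float64)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Unlike a deep model, this is a fixed arithmetic operation per pixel, which is why the smoothing path is so much cheaper for the low-attention second processing area.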
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, the frame picture to be processed is determined in the acquired video to be processed, a partial region in the frame picture to be processed, namely a first processing region, is determined, and the first processing region is subjected to super-resolution processing based on the deep learning model. In the second processing region, super-resolution processing is performed based on an image smoothing processing method. This reduces the computation of the super-resolution algorithm on high-resolution video and ensures the super-resolution processing effect for the video to be processed while meeting the real-time requirement of video super-resolution.
Fig. 7 is a flowchart illustrating a video super-resolution processing method according to an exemplary embodiment of the present disclosure, and referring to fig. 7, the video super-resolution processing method includes the following steps.
In step S701, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined.
In step S702, the frame picture to be processed is subjected to deblurring processing based on the deep learning model.
In step S703, it is determined whether the resolution of the frame picture to be processed is greater than a resolution threshold.
When it is determined that the resolution of the frame picture to be processed is greater than the resolution threshold, step S704 is performed.
When it is determined that the resolution of the frame picture to be processed is less than the resolution threshold, step S705 is performed.
In step S704, a first processing region in the frame picture to be processed is determined, and the first processing region is subjected to a super-resolution process based on the deep learning model.
In step S705, the entire region of the frame picture to be processed is subjected to the super-resolution processing based on the deep learning model.
In step S706, the second processing region of the frame picture to be processed is subjected to the super-resolution processing based on the image smoothing processing method.
In step S707, the processing results of the first processing area and the second processing area are combined to generate a super-resolution video.
In the embodiment of the disclosure, a video to be processed is acquired, and a frame picture to be processed in the video to be processed is determined. Before the super-resolution processing, the frame picture to be processed is deblurred. In order to further increase the running speed of the video super-resolution processing model, the neural network for deblurring the frame picture to be processed may be combined with the neural network for super-resolving the frame picture to be processed.
In the embodiment of the disclosure, the first processing region is a partial region in the frame picture to be processed. When the resolution of the frame picture to be processed is greater than a resolution threshold, the first processing region in the frame picture to be processed is determined, and video super-resolution processing is performed on the first processing region based on the deep learning model. That is, when the resolution of the frame picture to be processed is greater than the resolution threshold, the deep-learning-based video super-resolution model processes only the first processing region in the video frame picture. In the frame picture to be processed, the area other than the first processing region is taken as the second processing region. In the second processing region, super-resolution processing is performed based on an image smoothing processing method, for example, a bicubic interpolation method, whose processing speed is much higher than that of super-resolution processing based on the deep learning model. The processing results of the first processing region and the second processing region are merged, the overlapping part in the frame picture to be processed is removed, and the super-resolution video of the video to be processed is generated.
When the resolution of the frame picture to be processed is smaller than the set resolution threshold, video super-resolution processing is performed on the entire area of the video frame picture using the deep-learning-based video super-resolution model; because the resolution is low, this neither incurs excessive computational cost nor affects the video super-resolution processing speed.
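The dispatch-and-merge flow of steps S703 to S707 can be sketched as follows; the 720p pixel-count threshold, the 2x scale factor, and the callables `deep_sr` and `smooth_sr` are illustrative stand-ins for the deep learning model and the image smoothing method, which the disclosure does not pin to concrete values:

```python
import numpy as np

# Illustrative resolution threshold: the pixel count of a 720p frame
RES_THRESH = 1280 * 720

def superresolve_frame(frame, deep_sr, smooth_sr, region, scale=2):
    """Dispatch of steps S703-S707: full deep SR below the threshold,
    split deep/smooth processing plus a merge above it."""
    h, w = frame.shape
    if h * w <= RES_THRESH:
        return deep_sr(frame)              # S705: whole frame through the model
    top, left, bottom, right = region      # S704: first processing region
    out = smooth_sr(frame)                 # S706: fast smoothing-based upscale
    # S707: merge - overwrite the first region with the deep-learning
    # result, discarding the overlapping smooth-upscaled part
    out[scale * top:scale * bottom, scale * left:scale * right] = \
        deep_sr(frame[top:bottom, left:right])
    return out
```

Both branches produce a frame at `scale` times the input resolution, so consecutive output frames can be concatenated directly into the super-resolution video.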
According to the embodiment of the disclosure, when the resolution of the frame picture to be processed is greater than the resolution threshold, the frame picture to be processed is determined in the acquired video to be processed, a partial region in the frame picture to be processed, namely a first processing region, is determined, and the first processing region is subjected to super-resolution processing based on the deep learning model. The calculated amount of the super-resolution algorithm under the high-resolution video is reduced, the video super-resolution processing effect of the video to be processed is ensured, meanwhile, the real-time requirement of video super-resolution is met, and the balance between the video super-resolution effect and the video super-resolution real-time is realized.
Fig. 8 is an application scenario and effect diagram illustrating a video super-resolution processing method according to an exemplary embodiment of the present disclosure. Fig. 8 shows the process of super-resolving frame pictures to be processed based on a deep learning model: a plurality of frame pictures to be processed are input into a deep learning network model; after features are extracted through multiple convolutions, an up-sampling operation enlarges the feature maps of the input frame pictures to twice their original size; deconvolution is then performed with a multilayer convolutional network, and the model outputs super-resolution images of the frame pictures, thereby realizing video super-resolution processing of the video to be processed.
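At a purely shape level, the feature-size doubling described for Fig. 8 can be sketched as follows; the identity feature extraction and the mean-based reconstruction are placeholders for the convolution and deconvolution stages, which are not specified here:

```python
import numpy as np

def model_forward(frames):
    """Shape-level sketch of the Fig. 8 pipeline: stacked multi-frame
    features are up-sampled to twice their spatial size and then
    reduced to a single reconstructed output frame."""
    feats = np.stack(frames)                        # stand-in for conv features
    up = feats.repeat(2, axis=1).repeat(2, axis=2)  # 2x up-sampling of H and W
    return up.mean(axis=0)                          # stand-in for deconv output
```

The point of the sketch is only the data flow: N input frames of size H x W yield one output of size 2H x 2W.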
Based on the same conception, the embodiment of the disclosure also provides a video super-resolution processing device.
It is understood that, in order to implement the above functions, the video super-resolution processing device provided in the embodiments of the present disclosure includes a hardware structure and/or a software module for performing each function. Combining the exemplary elements and algorithm steps disclosed in the embodiments, the embodiments of the present disclosure can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Fig. 9 is a block diagram illustrating a video super-resolution processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 9, the video super-resolution processing apparatus 100 includes an acquisition module 101, a determination module 102, and a super-resolution processing module 103.
The acquiring module 101 is configured to acquire a video to be processed.
The determining module 102 is configured to determine a frame picture to be processed in a video to be processed, and if a resolution of the frame picture to be processed is greater than a resolution threshold, determine a first processing area in the frame picture to be processed, where the first processing area is a partial area in the frame picture to be processed.
The super-resolution processing module 103 is configured to perform super-resolution processing on the first processing area based on the deep learning model.
In an embodiment, the super-resolution processing module 103 is further configured to: when the resolution of the frame picture to be processed is smaller than the resolution threshold, perform super-resolution processing on the whole area of the frame picture to be processed based on the deep learning model.
In an embodiment, the determining module 102 is further configured to: determining an area in a set range taking the geometric center of a frame picture to be processed as the center in the frame picture to be processed as a first processing area; and/or determining the area of the pixel point at the same position in the adjacent frame pictures to be processed and with the pixel value difference value larger than the set pixel difference value threshold value in the frame pictures to be processed as the first processing area.
Fig. 10 is a block diagram illustrating a video super-resolution processing device according to an exemplary embodiment of the present disclosure. Referring to fig. 10, the video super-resolution processing apparatus 100 further includes: a deblurring processing module 104.
And the deblurring processing module 104 is configured to perform deblurring processing on the frame picture to be processed.
In an embodiment, the deblurring processing module 104 performs deblurring processing on the frame picture to be processed in the following manner: and carrying out deblurring processing on the frame picture to be processed based on the deep learning model.
In an embodiment, the super-resolution processing module 103 is further configured to: perform super-resolution processing on a second processing area of the frame picture to be processed based on the image smoothing processing method, where the second processing area is the area other than the first processing area in the frame picture to be processed.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 11 is a block diagram illustrating an apparatus 200 for video super-resolution processing according to an exemplary embodiment of the present disclosure. For example, the apparatus 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 206 provide power to the various components of device 200. Power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, audio component 210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 214 includes one or more sensors for providing various aspects of status assessment for the device 200. For example, the sensor assembly 214 may detect an open/closed state of the device 200 and the relative positioning of components, such as the display and keypad of the device 200. The sensor assembly 214 may also detect a change in the position of the device 200 or of a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and a change in the temperature of the device 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 204, comprising instructions executable by processor 220 of device 200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is understood that "a plurality" in this disclosure means two or more, and other words are analogous. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A video super-resolution processing method, the method comprising:
acquiring a video to be processed, and determining a frame picture to be processed in the video to be processed;
if the resolution of the frame picture to be processed is larger than a resolution threshold, determining a first processing area in the frame picture to be processed, and performing super-resolution processing on the first processing area based on a deep learning model, wherein the first processing area is a partial area in the frame picture to be processed.
2. The video super-resolution processing method according to claim 1, wherein the method further comprises:
and if the resolution of the frame picture to be processed is smaller than the resolution threshold, performing super-resolution processing on the whole area of the frame picture to be processed based on the deep learning model.
3. The video super-resolution processing method according to claim 1, wherein determining the first processing region in the frame picture to be processed comprises:
determining an area in a set range taking the geometric center of the frame picture to be processed as the center in the frame picture to be processed as the first processing area; and/or
And determining the area of the pixel point, at the same position in the adjacent frame pictures to be processed, of the frame pictures to be processed, where the pixel value difference is greater than a set pixel difference threshold value, as the first processing area.
4. The video super-resolution processing method according to claim 1 or 2, wherein before the super-resolution processing, the method further comprises:
and carrying out deblurring processing on the frame picture to be processed.
5. The video super-resolution processing method according to claim 4, wherein the deblurring processing of the frame picture to be processed comprises:
and performing deblurring processing on the frame picture to be processed based on the deep learning model.
6. The video super-resolution processing method according to claim 1, wherein the method further comprises:
and performing hyper-differentiation processing on a second processing area of the frame picture to be processed based on an image smoothing processing method, wherein the second processing area is other areas except the first processing area in the frame picture to be processed.
7. A video super-resolution processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring a video to be processed;
a determining module, configured to determine a frame to be processed in the video to be processed, and if a resolution of the frame to be processed is greater than a resolution threshold, determine a first processing area in the frame to be processed, where the first processing area is a partial area in the frame to be processed;
and the super-division processing module is used for carrying out super-division processing on the first processing area based on a deep learning model.
8. The video super-resolution processing device according to claim 7, wherein the super-resolution processing module is further configured to:
and when the resolution of the frame picture to be processed is smaller than the resolution threshold, performing super-resolution processing on the whole area of the frame picture to be processed based on the deep learning model.
9. The video super-resolution processing device according to claim 7, wherein the determining module is further configured to:
determine, as the first processing area, an area within a set range centered on the geometric center of the frame picture to be processed; and/or
determine, as the first processing area, an area of pixel points in the frame picture to be processed whose pixel values differ from the pixel values at the same positions in adjacent frame pictures to be processed by more than a set pixel difference threshold.
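The two alternatives of claims 3 and 9 for picking the first processing area can be illustrated as below. This is a sketch only, not the claimed implementation: the `fraction` parameter, the default `diff_threshold`, and the rectangular region shape are assumptions.

```python
import numpy as np

def central_region(h, w, fraction=0.5):
    # First alternative: an area within a set range centered on the
    # geometric center of the frame; returns (y0, y1, x0, x1).
    rh, rw = int(h * fraction), int(w * fraction)
    y0, x0 = (h - rh) // 2, (w - rw) // 2
    return y0, y0 + rh, x0, x0 + rw

def motion_mask(frame, prev_frame, diff_threshold=10):
    # Second alternative: pixel points whose values differ from the same
    # positions in an adjacent frame by more than the set threshold.
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_threshold
```

The first heuristic assumes viewer attention concentrates at frame center; the second selects moving content, where super-resolution quality matters most.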
10. The video super-resolution processing device according to claim 7 or 8, wherein the video super-resolution processing device further comprises:
a deblurring processing module, configured to perform deblurring processing on the frame picture to be processed.
11. The video super-resolution processing device according to claim 10, wherein the deblurring processing module deblurs the frame picture to be processed by:
performing deblurring processing on the frame picture to be processed based on the deep learning model.
12. The video super-resolution processing device according to claim 7, wherein the super-resolution processing module is further configured to:
performing super-resolution processing on a second processing area of the frame picture to be processed based on an image smoothing method, wherein the second processing area is the area of the frame picture to be processed other than the first processing area.
13. A video super-resolution processing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the video super-resolution processing method of any one of claims 1 to 6.
14. A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the video super-resolution processing method of any one of claims 1 to 6.
CN202110104274.5A 2021-01-26 2021-01-26 Video super-resolution processing method, video super-resolution processing device and storage medium Pending CN112950465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110104274.5A CN112950465A (en) 2021-01-26 2021-01-26 Video super-resolution processing method, video super-resolution processing device and storage medium

Publications (1)

Publication Number Publication Date
CN112950465A (en) 2021-06-11

Family

ID=76237070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110104274.5A Pending CN112950465A (en) 2021-01-26 2021-01-26 Video super-resolution processing method, video super-resolution processing device and storage medium

Country Status (1)

Country Link
CN (1) CN112950465A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120051667A1 (en) * 2010-08-27 2012-03-01 Korea University Research And Business Foundation Method and system of reconstructing super-resolution image
CN107133915A (en) * 2017-04-21 2017-09-05 西安科技大学 A kind of image super-resolution reconstructing method based on study
CN108765282A (en) * 2018-04-28 2018-11-06 北京大学 Real-time super-resolution method and system based on FPGA
KR101931804B1 (en) * 2018-03-26 2018-12-21 장경익 Device and method of increasing a recognition rate in a license plate
CN110390666A (en) * 2019-06-14 2019-10-29 平安科技(深圳)有限公司 Road damage detecting method, device, computer equipment and storage medium
CN111402126A (en) * 2020-02-15 2020-07-10 北京中科晶上科技股份有限公司 Video super-resolution method and system based on blocks
CN111738951A (en) * 2020-06-22 2020-10-02 北京字节跳动网络技术有限公司 Image processing method and device

Similar Documents

Publication Publication Date Title
CN110428378B (en) Image processing method, device and storage medium
CN109922372B (en) Video data processing method and device, electronic equipment and storage medium
EP3099075B1 (en) Method and device for processing identification of video file
CN109118430B (en) Super-resolution image reconstruction method and device, electronic equipment and storage medium
CN110060215B (en) Image processing method and device, electronic equipment and storage medium
CN107798654B (en) Image buffing method and device and storage medium
CN107967459B (en) Convolution processing method, convolution processing device and storage medium
CN111340733B (en) Image processing method and device, electronic equipment and storage medium
CN110944230B (en) Video special effect adding method and device, electronic equipment and storage medium
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN112634160A (en) Photographing method and device, terminal and storage medium
CN108171222B (en) Real-time video classification method and device based on multi-stream neural network
CN109840890B (en) Image processing method and device, electronic equipment and storage medium
CN109544490B (en) Image enhancement method, device and computer readable storage medium
CN113160039B (en) Image style migration method and device, electronic equipment and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN113660531A (en) Video processing method and device, electronic equipment and storage medium
CN109816620B (en) Image processing method and device, electronic equipment and storage medium
CN112288657A (en) Image processing method, image processing apparatus, and storage medium
CN113099038B (en) Image super-resolution processing method, image super-resolution processing device and storage medium
CN114943657A (en) Image processing method, image processing device, electronic device, and storage medium
CN110312117B (en) Data refreshing method and device
CN112950465A (en) Video super-resolution processing method, video super-resolution processing device and storage medium
CN115641269A (en) Image repairing method and device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination