CN112686811A - Video processing method, video processing apparatus, electronic device, and storage medium - Google Patents

Publication number: CN112686811A
Application number: CN202011359736.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 陈海波, 权甲
Current assignee: Shenlan Artificial Intelligence Application Research Institute Shandong Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Deep Blue Technology Shanghai Co Ltd
Application filed by Deep Blue Technology Shanghai Co Ltd
Priority: CN202011359736.XA
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Abstract

The embodiments of this application relate to the technical field of video processing, and provide a video processing method, a video processing apparatus, an electronic device, and a storage medium. The video processing method includes the following steps: segmenting a video to be processed into a plurality of sub-videos; performing video interlaced scanning diagnosis on each sub-video and outputting an interlaced scanning diagnosis result; performing noise and blur degree diagnosis on each sub-video and outputting a noise degree diagnosis result and a blur degree diagnosis result; determining the resolution and frame rate of each sub-video; and correspondingly enhancing each sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video to obtain the target video. With the video processing method provided by this application, a high-quality video can be obtained by first segmenting the video, then identifying the type of each problem, and finally repairing and enhancing each sub-video according to its problem types.

Description

Video processing method, video processing apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, an electronic device, and a storage medium.
Background
Video enhancement is currently in high demand. In the prior art, an algorithm can typically enhance a video in only one respect; for example, a denoising algorithm performs well on noisy video but poorly on blurred video. In practice, however, a video often suffers from several quality problems at once, such as interlacing, noise, blur, low resolution and low frame rate occurring simultaneously, which existing techniques struggle to handle. In the related art, to address this, the problem types of a video are identified manually and then handled as far as possible.
Disclosure of Invention
The application provides a video processing method for realizing high-quality video restoration.
The application provides a video processing method, which comprises the following steps:
segmenting a video to be processed into a plurality of sub-videos;
performing video interlaced scanning diagnosis on the sub-video, and outputting an interlaced scanning diagnosis result; carrying out noise and blur degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result; determining the resolution and frame rate of the sub-video;
and correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video to obtain the target video.
According to a video processing method provided by the present application, the correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video to obtain the target video includes:
determining options to be enhanced from video de-interlacing, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video;
and executing the options to be enhanced in the following order: video de-interlacing, noise removal, blur removal, video super-resolution reconstruction, and video frame interpolation.
According to a video processing method provided by the present application, performing video interlaced scanning diagnosis on the sub-video and outputting an interlaced scanning diagnosis result includes:
taking the difference between adjacent odd and even lines of an image in the sub-video, and outputting a line difference value;
and outputting an interlaced scanning diagnosis result based on the line difference value.
According to the video processing method provided by the application, performing noise and blur degree diagnosis on the sub-video and outputting a noise degree diagnosis result and a blur degree diagnosis result includes the following steps:
determining the difference value of each pixel of any frame image of the sub-video and the surrounding pixels of the pixel;
based on the difference, a noise degree diagnosis result and a blur degree diagnosis result are determined.
According to a video processing method provided by the present application, the dividing a video to be processed into a plurality of sub-videos includes:
determining the difference degree of two adjacent frames of images of the video to be processed;
under the condition that the difference degree is not larger than a target value, classifying the two adjacent frames of images into the same sub-video;
and under the condition that the difference degree is larger than a target value, classifying the two adjacent frames of images into different sub-videos.
According to a video processing method provided by the present application, the determining a difference degree between two adjacent frames of images of the video to be processed includes:
subtracting the pixels at the same position between the two adjacent frames of images, and outputting a pixel difference value;
determining the degree of difference based on the pixel difference value.
According to a video processing method provided by the present application, the determining the difference degree based on the pixel difference value includes:
calculating the average value of pixel difference values of all pixels of the two adjacent frames of images, wherein the average value is used for representing the difference degree;
alternatively,
weighting different areas in the two adjacent frame images, and determining a weighted average value of pixel difference values of all pixels of the two adjacent frame images based on the weight of the area where the pixel corresponding to the pixel difference value is located, wherein the weighted average value is used for representing the difference degree;
alternatively,
determining the position of a target object in two adjacent frame images, weighting pixel difference values based on pixels corresponding to the target object, and determining a weighted average value of the pixel difference values of all the pixels of the two adjacent frame images, wherein the weighted average value is used for representing the difference degree.
The present application also provides a video processing apparatus, including:
the segmentation module is used for segmenting the video to be processed into a plurality of sub-videos;
the first diagnosis module is used for carrying out video interlaced scanning diagnosis on the sub-video and outputting an interlaced scanning diagnosis result;
the second diagnosis module is used for carrying out noise and blur degree diagnosis on the sub-video and outputting a noise degree diagnosis result and a blur degree diagnosis result;
the reading module is used for determining the resolution and the frame rate of the sub-video;
and the enhancement module is used for correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video to obtain the target video.
According to a video processing apparatus provided by the present application, the enhancement module includes:
the screening module is used for determining options to be enhanced from video interlacing removal, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video;
the execution module is used for executing the options to be enhanced in the following order: video de-interlacing, noise removal, blur removal, video super-resolution reconstruction, and video frame interpolation.
According to a video processing apparatus provided by the present application, the first diagnostic module includes:
the first processing module is used for taking the difference between adjacent odd and even lines of an image in the sub-video and outputting a line difference value;
a first determining module for outputting an interlaced scanning diagnosis result based on the line difference value.
According to a video processing apparatus provided by the present application, the second diagnostic module includes:
the second processing module is used for determining the difference value of each pixel of any frame image of the sub-video and the pixels around the pixel;
a second determination module to determine a noise level diagnostic result and a blur level diagnostic result based on the difference.
According to a video processing apparatus provided by the present application, the segmentation module includes:
the judging module is used for determining the difference degree of two adjacent frames of images of the video to be processed;
the dividing module is used for classifying the two adjacent frames of images into the same sub-video under the condition that the difference degree is not greater than a target value; and under the condition that the difference degree is larger than a target value, classifying the two adjacent frames of images into different sub-videos.
According to the video processing apparatus provided by the present application, the determining module includes:
the third processing module is used for subtracting the pixels at the same position between the two adjacent frames of images and outputting a pixel difference value;
a third determining module for determining the difference degree based on the pixel difference value.
According to the video processing apparatus provided by the present application, the third determining module
is further used for calculating an average value of the pixel difference values of all pixels of the two adjacent frames of images, the average value being used for representing the difference degree;
alternatively,
is further used for weighting different areas in the two adjacent frames of images, and determining a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images based on the weight of the area in which the pixel corresponding to each pixel difference value is located, the weighted average value being used for representing the difference degree;
alternatively,
is further used for determining the position of a target object in the two adjacent frames of images, weighting the pixel difference values based on the pixels corresponding to the target object, and determining a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images, the weighted average value being used for representing the difference degree.
The present application further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of any of the video processing methods described above.
The present application also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the video processing methods described above.
According to the video processing method, the video processing device, the electronic equipment and the storage medium, the high-quality video can be obtained by firstly carrying out video segmentation, then carrying out problem type identification and finally repairing and enhancing each sub-video one by one according to the problem type.
Drawings
In order to more clearly illustrate the technical solutions in the present application or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a video processing method provided in the present application;
FIG. 2 is a schematic flow chart diagram illustrating an embodiment of step 110 in a video processing method provided in the present application;
FIG. 3 is a schematic flow chart diagram illustrating an embodiment of step 111 in a video processing method provided in the present application;
FIG. 4 is a schematic flow chart diagram illustrating an embodiment of step 130 of the video processing method provided in the present application;
FIG. 5 is a schematic structural diagram of a video processing apparatus provided in the present application;
FIG. 6 is a schematic structural diagram of a segmentation module of a video processing apparatus provided in the present application;
fig. 7 is a schematic structural diagram of a determining module of the video processing apparatus provided in the present application;
fig. 8 is a schematic structural diagram of a first diagnostic module of the video processing apparatus provided in the present application;
fig. 9 is a schematic structural diagram of a second diagnostic module of the video processing apparatus provided in the present application;
fig. 10 is a schematic structural diagram of an enhancement module of a video processing apparatus provided in the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the present application, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The video processing method of the present application is described below with reference to fig. 1 to 4.
As shown in fig. 1, the video processing method provided by the present application includes: step 110-step 130.
Step 110, segmenting a video to be processed into a plurality of sub-videos;
the video to be processed comprises a plurality of frames of images which are arranged according to a time sequence.
Each sub-video comprises at least one frame of image, and for the sub-video comprising a plurality of frames of images, the plurality of frames of images are also arranged according to the time sequence of the video to be processed.
The problems of each frame of image in each sub-video are basically the same, and the video to be processed is firstly segmented into a plurality of sub-videos, so that corresponding enhancement processing is conveniently executed according to the respective problem of each sub-video in the subsequent enhancement process, and the processing mode has stronger correspondence with the problem.
Step 121, performing video interlaced scanning diagnosis on the sub-video, and outputting an interlaced scanning diagnosis result;
step 122, carrying out noise fuzzy degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a fuzzy degree diagnosis result;
step 123, determining the resolution and the frame rate of the sub-video;
the execution sequence of step 121, step 122 and step 123 is not limited.
For example, the video interlaced scanning diagnosis may be performed on the sub-video first and the interlaced scanning diagnosis result output; then the noise and blur degree diagnosis may be performed and the noise degree diagnosis result and blur degree diagnosis result output; and finally the resolution and frame rate of the sub-video may be determined.
Alternatively, the resolution and frame rate of the sub-video may be determined first; then the noise and blur degree diagnosis may be performed on the sub-video and the noise degree diagnosis result and blur degree diagnosis result output; and finally the video interlaced scanning diagnosis may be performed and the interlaced scanning diagnosis result output.
In addition, video interlacing diagnosis and noise and blur degree diagnosis are required for each sub-video in order to identify the type of problem that each sub-video has.
In some embodiments, determining the resolution and the frame rate of the sub-video may be equivalent to determining the resolution and the frame rate of the video to be processed, and may be obtained directly from the video attribute.
And step 130, correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video to obtain the target video.
After the diagnosis and the attribute determination are completed in steps 121 to 123, corresponding enhancement is performed on each sub-video according to the diagnosis result.
For example, if according to the diagnosis results the ith sub-video has interlacing and noise, and both its resolution and frame rate meet the target values, then in step 130 video de-interlacing and noise removal are performed on the ith sub-video, and deblurring is not needed.
After each sub-video is correspondingly enhanced, the problem of each sub-video is repaired in a targeted manner, and the enhanced sub-videos are synthesized into a complete video according to the original sequence, so that the target video can be obtained.
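The overall flow of steps 110 to 130 can be outlined as follows. This is an illustrative sketch, not the patent's implementation; `segment`, `diagnose` and `enhance` are placeholder callables standing in for the steps described in this document:

```python
def process_video(frames, segment, diagnose, enhance):
    """Split the video into sub-videos, diagnose and enhance each
    sub-video, then reassemble the enhanced sub-videos in their
    original temporal order to obtain the target video."""
    enhanced_frames = []
    for sub in segment(frames):                       # step 110
        diagnosis = diagnose(sub)                     # steps 121-123
        enhanced_frames.extend(enhance(sub, diagnosis))  # step 130
    return enhanced_frames
```

Because each sub-video is processed independently, the per-sub-video diagnosis and enhancement could also be parallelized; the concatenation at the end preserves the original order.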
According to the video processing method provided by the embodiment of the application, high-quality videos can be obtained by firstly carrying out video segmentation, then carrying out problem type identification, and finally repairing and enhancing each sub-video one by one according to the problem type.
In some embodiments, as shown in fig. 2, the step 110 of dividing the video to be processed into a plurality of sub-videos includes: step 111, step 112a and step 112 b.
Step 111, determining the difference degree of two adjacent frames of images of a video to be processed;
step 112a, under the condition that the difference degree is not greater than the target value, grouping two adjacent frames of images into the same sub-video;
and step 112b, under the condition that the difference degree is larger than the target value, grouping the two adjacent frames of images into different sub-videos.
In other words, images that are consecutive in time and similar to each other are assigned to the same sub-video, and when two adjacent frames of images are not similar, the video to be processed is split between those two frames.
In an actual implementation, video segmentation may be performed as follows: (1) setting a target value; (2) solving the difference degree of two adjacent frames of images; (3) comparing the difference degree result with a target value, if the difference degree result is not greater than the target value, determining that the two frames of images are similar to each other, classifying the two frames of images into a sub-video, and then repeating the steps (2) and (3) on the second frame of image and the next adjacent image; (4) if the difference degree result is larger than the target value, the second frame image is classified as the next sub-video, and the steps (2), (3) and (4) are repeated until all the images are divided.
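The four-step procedure above can be sketched in Python with NumPy. The mean-absolute-difference measure in `frame_diff` and the target value of 10.0 are illustrative assumptions, not values from the patent:

```python
import numpy as np

def frame_diff(frame_a, frame_b):
    """Difference degree: mean absolute pixel difference between two
    equal-sized frames (one of the measures described later)."""
    return float(np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32)).mean())

def segment_video(frames, target=10.0):
    """Group consecutive similar frames into sub-videos. A new
    sub-video starts whenever the difference degree between two
    adjacent frames exceeds the target value."""
    if not frames:
        return []
    subs = [[frames[0]]]
    for prev, cur in zip(frames, frames[1:]):
        if frame_diff(prev, cur) > target:
            subs.append([cur])        # dissimilar: start a new sub-video
        else:
            subs[-1].append(cur)      # similar: same sub-video
    return subs
```

For example, three dark frames followed by two bright frames are split into two sub-videos at the brightness jump.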
The video segmentation method is simple to execute, can effectively cluster images with similar problems, and facilitates the repair and enhancement performed in subsequent steps.
In some embodiments, as shown in fig. 3, the step 111 of determining the difference between two adjacent frames of images of the video to be processed includes: step 111a and step 111 b.
Step 111a, subtracting pixels at the same position between two adjacent frames of images, and outputting a pixel difference value;
for example, a pixel of the coordinate (a, b) in the ith frame image is subtracted from a pixel of the coordinate (a, b) in the (i + 1) th frame image, and a pixel difference value is output; and processing each pixel in the ith frame image according to the mode to obtain the pixel difference value of each pixel of the ith frame image and the (i + 1) th frame image.
And step 111b, determining the difference degree based on the pixel difference value.
The difference degree between the ith frame image and the (i + 1)th frame image can be determined from the pixel difference values in several ways.
Firstly, calculating the mean value of pixel difference values of all pixels of two adjacent frames of images, wherein the mean value is used for representing the difference degree;
in the processing mode, all pixel difference values are directly averaged to obtain the difference degree. The whole treatment process is simple.
Secondly, weighting different areas in the two adjacent frames of images, and determining a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images based on the weight of the area in which the pixel corresponding to each pixel difference value is located, the weighted average value being used for representing the difference degree;
in this processing method, the image needs to be partitioned and different regions given different weights. For example, the image may be divided into a plurality of regions, such as three concentric rectangular regions, where the innermost rectangular region is usually the focus of attention and is given the highest weight, while the outermost boundary region receives the least attention and is given the lowest weight.
The pixel difference values corresponding to the pixels in the different areas are multiplied by the corresponding weights, and the average value is then calculated to obtain the difference degree.
According to the method, attention difference of different areas is considered, and more accurate video segmentation can be achieved.
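A sketch of this region-weighted method. The division into three concentric rectangular regions at fixed fractions of the frame size and the weights 3/2/1 are illustrative assumptions:

```python
import numpy as np

def ring_weights(height, width, weights=(3.0, 2.0, 1.0)):
    """Weight map of three concentric rectangular regions: the
    innermost rectangle gets the highest weight, the outer border
    the lowest. The region boundaries (sixths and thirds of the
    frame) are an illustrative choice."""
    wmap = np.full((height, width), weights[2])
    wmap[height // 6 : height - height // 6,
         width // 6 : width - width // 6] = weights[1]
    wmap[height // 3 : height - height // 3,
         width // 3 : width - width // 3] = weights[0]
    return wmap

def weighted_difference(frame_a, frame_b, wmap):
    """Weighted average of the pixel difference values, where each
    difference is weighted by its region's weight."""
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    return float((diff * wmap).sum() / wmap.sum())
```

Dividing by the sum of the weights normalizes the result so it stays comparable to the plain mean when the frame differs uniformly.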
And thirdly, determining the position of the target object in the two adjacent frames of images, giving weights to the pixel difference values based on the pixels corresponding to the target object, and determining a weighted average value of the pixel difference values of all the pixels of the two adjacent frames of images, wherein the weighted average value is used for representing the difference degree.
In this processing method, it is necessary to identify a target object in an image, for example, in the case of a soccer video, the target object is set as a soccer ball and/or a specific player, the pixel in which the target object is located is assigned the highest weight, and the other pixels are assigned lower weights.
And multiplying the pixel difference value corresponding to each pixel by the corresponding weight, and then calculating the average value to obtain the difference.
According to the method, attention difference of different objects is considered, and more accurate video segmentation can be achieved.
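A sketch of this object-weighted method, assuming the target object's position is supplied as a boolean pixel mask (for example from a detector); the weight values are illustrative:

```python
import numpy as np

def object_weighted_difference(frame_a, frame_b, object_mask,
                               object_weight=5.0, background_weight=1.0):
    """Pixels covered by the target object (boolean mask) are given
    the highest weight; all other pixels get a lower weight. The
    weighted average of pixel differences represents the difference
    degree."""
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    wmap = np.where(object_mask, object_weight, background_weight)
    return float((diff * wmap).sum() / wmap.sum())
```

With this weighting, motion of the tracked object (the ball or a specific player in the football example) dominates the difference degree, so cuts are detected where the object's appearance changes sharply.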
In some embodiments, step 121, performing video interlaced scanning diagnosis on the sub-video and outputting an interlaced scanning diagnosis result, includes: taking the difference between adjacent odd and even lines of an image in the sub-video and outputting a line difference value; and outputting an interlaced scanning diagnosis result based on the line difference value.
In other words, for a single frame image, the differences between adjacent odd and even lines are taken and averaged to obtain a line difference value. When the line difference value is greater than a target line difference, the interlaced scanning diagnosis result is that interlacing exists; when the line difference value is not greater than the target line difference, the diagnosis result is that interlacing does not exist.
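The line-difference test can be sketched as follows; the target line difference of 20.0 is an assumed example threshold:

```python
import numpy as np

def line_difference(frame):
    """Average absolute difference between each scan line and the
    next one (i.e. between adjacent odd and even lines)."""
    rows = frame.astype(np.int32)
    return float(np.abs(rows[1:] - rows[:-1]).mean())

def diagnose_interlacing(frames, target_line_diff=20.0):
    """Interlacing is reported when the mean line difference over the
    sub-video's frames exceeds the target line difference."""
    mean_diff = float(np.mean([line_difference(f) for f in frames]))
    return mean_diff > target_line_diff
```

The intuition is that interlaced frames show "combing": odd and even fields come from different moments, so adjacent lines differ far more than in a progressive frame.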
In some embodiments, step 122, performing noise and blur degree diagnosis on the sub-video and outputting a noise degree diagnosis result and a blur degree diagnosis result, includes: determining the difference value between each pixel of any frame image of the sub-video and the pixels surrounding that pixel; and determining a noise degree diagnosis result and a blur degree diagnosis result based on the difference values.
In an actual implementation, each pixel of any frame of image is differenced with its surrounding pixels: a pixel at a corner is differenced with its 3 surrounding pixels, a pixel on the boundary but not at a corner with its 5 surrounding pixels, and any other pixel with its 8 surrounding pixels. The obtained differences are averaged to give the difference value between that pixel and its surrounding pixels.
And after the difference value of each pixel is determined, averaging to obtain the difference x of the image.
Three thresholds u1, u2 and u3 are set, where u1 < u2 < u3. If x < u1, the image is determined to have severe blur, and in step 130 a high level of deblurring needs to be performed on the sub-video; if u1 < x < u2, the image is determined to have light blur, and a low level of deblurring needs to be performed; if u2 < x < u3, the image is determined to have light noise, and a low level of denoising needs to be performed; and if x > u3, the image is determined to have heavy noise, and a high level of denoising needs to be performed on the sub-video.
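The statistic x and the threshold classification can be sketched as follows; the concrete values of u1, u2 and u3 are assumed examples, not values from the patent:

```python
import numpy as np

def neighborhood_difference(img):
    """The x statistic: each pixel is differenced with its available
    neighbors (8 in the interior, 5 on an edge, 3 at a corner), the
    differences are averaged per pixel, and the per-pixel values are
    then averaged over the whole image."""
    img = img.astype(np.float64)
    h, w = img.shape
    per_pixel = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            neighbors = patch.size - 1   # patch includes the pixel itself
            per_pixel[y, x] = np.abs(patch - img[y, x]).sum() / neighbors
    return float(per_pixel.mean())

def diagnose_noise_blur(x, u1=2.0, u2=5.0, u3=15.0):
    """Classify by the thresholds u1 < u2 < u3 (values assumed)."""
    if x < u1:
        return "severe blur"     # high-level deblurring in step 130
    if x < u2:
        return "light blur"      # low-level deblurring
    if x < u3:
        return "light noise"     # low-level denoising
    return "heavy noise"         # high-level denoising
```

The rationale: blur smooths neighboring pixels toward each other (small x), while noise makes each pixel deviate from its neighbors (large x).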
In some embodiments, as shown in fig. 4, step 130, correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video to obtain the target video, includes: step 131 and step 132.
Step 131, based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, and the resolution and frame rate of the sub-video, selecting the corresponding options to be enhanced from video de-interlacing, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation to be performed on the sub-video;
and step 132, executing the selected options to be enhanced in the order of video de-interlacing, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation to obtain the target video.
It can be understood that, for the sub-video with one problem diagnosed in steps 121 to 123, only one item of the corresponding to-be-enhanced option is present, for example, a certain sub-video has a problem of blur, and then only the blur needs to be removed.
For the sub-videos with multiple problems diagnosed in steps 121 to 123, there are multiple corresponding options to be enhanced. These options need to be executed in the order of video de-interlacing, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation; options whose corresponding problem was not diagnosed are not executed.
For example, if a sub-video has video interlacing and blurring problems, it needs to be performed in the order of video de-interlacing and de-blurring.
For the video interlacing removal, noise removal, blur removal, video super-resolution reconstruction and video frame insertion, the existing related algorithm can be adopted.
Through extensive research, the inventors of the present application found that by adopting the order of video de-interlacing, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation, problems introduced by an earlier enhancement option can be resolved by the subsequent processing.
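The fixed ordering can be expressed as a simple plan builder; the step names below are illustrative identifiers, not the patent's:

```python
# Fixed enhancement order from the method; names are illustrative.
ENHANCE_ORDER = ["deinterlace", "denoise", "deblur",
                 "super_resolution", "frame_interpolation"]

def plan_enhancements(diagnosed_problems):
    """Return the enhancement steps for one sub-video: only the
    options whose problem was diagnosed, kept in the fixed order;
    options without a corresponding problem are skipped."""
    return [step for step in ENHANCE_ORDER if step in diagnosed_problems]
```

For a sub-video diagnosed with blur and interlacing, the plan is de-interlacing followed by deblurring, matching the example above.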
According to the video processing method, the problems and the degrees of the sub-videos are automatically obtained through video segmentation and then the algorithm of video quality analysis, and then the videos are automatically processed in sequence according to the preset processing sequence, and finally the high-quality videos are obtained.
The following describes the video processing apparatus provided in the present application, and the video processing apparatus described below and the video processing method described above may be referred to in correspondence with each other.
As shown in fig. 5, the present application provides a video processing apparatus, including: a segmentation module 510, a first diagnostic module 521, a second diagnostic module 522, a read module 523, and an enhancement module 530.
A segmentation module 510, configured to segment a video to be processed into a plurality of sub-videos;
the first diagnostic module 521 is configured to perform video interlaced scanning diagnosis on the sub-video and output an interlaced scanning diagnosis result;
the second diagnosis module 522 is configured to perform noise ambiguity diagnosis on the sub-video, and output a noise ambiguity diagnosis result and an ambiguity diagnosis result;
a reading module 523, configured to determine a resolution and a frame rate of the sub-video;
the enhancement module 530 is configured to correspondingly enhance the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution, and the frame rate of the sub-video, so as to obtain the target video.
According to the video processing device provided by the embodiment of the application, high-quality videos can be obtained by firstly carrying out video segmentation, then carrying out problem type identification, and finally repairing and enhancing each sub-video one by one according to the problem type.
In some embodiments, as shown in fig. 6, the segmentation module 510 includes: a judging module 511 and a dividing module 512.
The judging module 511 is configured to determine a difference between two adjacent frames of images of the video to be processed;
a dividing module 512, configured to, when the difference is not greater than the target value, classify two adjacent frames of images into the same sub-video; and under the condition that the difference degree is larger than the target value, classifying the two adjacent frames of images into different sub-videos.
In some embodiments, as shown in fig. 7, the judging module 511 includes: a third processing module 511a and a third determining module 511b.
The third processing module 511a is configured to subtract pixels at the same position between two adjacent frames of images, and output a pixel difference value;
a third determining module 511b, configured to determine the difference degree based on the pixel difference value.
In some embodiments, the third determining module is further configured to average the pixel difference values of all pixels of the two adjacent frames of images, where the average value is used to characterize the difference degree;
or,
the third determining module is further configured to weight different areas in the two adjacent frames of images, and determine a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images based on the weight of the area in which the pixel corresponding to each pixel difference value is located, where the weighted average value is used to characterize the difference degree;
or,
the third determining module is further configured to determine the position of a target object in the two adjacent frames of images, weight the pixel difference values based on the pixels corresponding to the target object, and determine a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images, where the weighted average value is used to characterize the difference degree.
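As an illustrative sketch of the frame-difference segmentation described above (not part of the original disclosure; the grayscale-frame representation, the helper names and the concrete `target_value` are assumptions for illustration), the rule "same sub-video if the difference degree does not exceed the target value, otherwise a new sub-video" might be expressed as:

```python
import numpy as np

def frame_difference(prev, curr, weights=None):
    """Mean (or weighted-mean) absolute pixel difference between two frames."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    if weights is None:
        return float(diff.mean())
    return float((diff * weights).sum() / weights.sum())

def segment_video(frames, target_value):
    """Group frames into sub-videos; start a new sub-video whenever the
    difference degree between adjacent frames exceeds the target value."""
    sub_videos = [[frames[0]]]
    for prev, curr in zip(frames, frames[1:]):
        if frame_difference(prev, curr) > target_value:
            sub_videos.append([curr])      # scene change: open a new sub-video
        else:
            sub_videos[-1].append(curr)    # same sub-video
    return sub_videos
```

The `weights` argument corresponds to the region-weighted variants: a higher weight for, say, the area containing a target object makes changes there dominate the difference degree.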
In some embodiments, as shown in fig. 8, the first diagnostic module 521 includes: a first processing module 521a and a first determining module 521b.
The first processing module 521a is configured to compute the difference between adjacent odd and even lines of an image in the sub-video, and output a line difference value;
a first determining module 521b, configured to output the interlaced scanning diagnosis result based on the line difference value.
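A minimal sketch of the odd/even line diagnosis above (the `threshold` value and the dictionary return shape are illustrative assumptions, not taken from the disclosure): strong line-to-line alternation ("combing") is typical of interlaced content.

```python
import numpy as np

def diagnose_interlacing(frame, threshold=20.0):
    """Difference adjacent odd and even lines; a large mean line
    difference value suggests interlaced ("combed") content."""
    f = frame.astype(np.int32)
    even, odd = f[0::2, :], f[1::2, :]
    n = min(even.shape[0], odd.shape[0])   # pair each even line with the next odd line
    line_diff = float(np.abs(even[:n] - odd[:n]).mean())
    return {"interlaced": line_diff > threshold, "line_diff": line_diff}
```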
In some embodiments, as shown in fig. 9, the second diagnostic module 522 includes: a second processing module 522a and a second determining module 522b.
A second processing module 522a, configured to determine a difference value between each pixel of any frame of image of the sub-video and a pixel around the pixel;
a second determining module 522b, configured to determine the noise degree diagnosis result and the blur degree diagnosis result based on the difference value.
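One plausible reading of the neighbourhood-difference diagnosis above, sketched in Python (the thresholds, and the interpretation that uniformly high local differences suggest noise while uniformly low ones suggest blur, are assumptions of this sketch rather than details of the disclosure):

```python
import numpy as np

def local_difference(img):
    """Mean absolute difference of each pixel to its four neighbours."""
    f = img.astype(np.int32)
    p = np.pad(f, 1, mode="edge")          # replicate borders so every pixel has 4 neighbours
    return (np.abs(f - p[:-2, 1:-1]) + np.abs(f - p[2:, 1:-1]) +
            np.abs(f - p[1:-1, :-2]) + np.abs(f - p[1:-1, 2:])) / 4.0

def diagnose_noise_blur(img, noise_thr=30.0, blur_thr=3.0):
    """Heuristic: very high average local difference -> noisy;
    very low average local difference -> blurred (little detail)."""
    score = float(local_difference(img).mean())
    return {"noisy": score > noise_thr, "blurry": score < blur_thr, "score": score}
```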
In some embodiments, as shown in fig. 10, the enhancement module 530 includes: a screening module 531 and an execution module 532.
The screening module 531 is configured to determine the options to be enhanced from video de-interlacing, de-noising, de-blurring, video super-resolution reconstruction and video frame interpolation, based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video;
the execution module 532 is configured to execute the selected options to be enhanced in the following order: video de-interlacing, de-noising, de-blurring, video super-resolution reconstruction and video frame interpolation.
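The screening and execution steps above might be sketched as follows; the resolution and frame-rate cut-offs used here to trigger super-resolution reconstruction and frame interpolation are illustrative assumptions, not values from the disclosure:

```python
# Fixed execution order prescribed by the description above.
ENHANCE_ORDER = ["de-interlace", "de-noise", "de-blur",
                 "super-resolution", "frame-interpolation"]

def select_enhancements(interlaced, noisy, blurry, resolution, frame_rate,
                        min_resolution=(1280, 720), min_frame_rate=25):
    """Map the diagnosis results of a sub-video to its options to be
    enhanced, always returned in the prescribed execution order."""
    wanted = set()
    if interlaced:
        wanted.add("de-interlace")
    if noisy:
        wanted.add("de-noise")
    if blurry:
        wanted.add("de-blur")
    if resolution[0] < min_resolution[0] or resolution[1] < min_resolution[1]:
        wanted.add("super-resolution")
    if frame_rate < min_frame_rate:
        wanted.add("frame-interpolation")
    # Filtering ENHANCE_ORDER preserves the de-interlace -> de-noise ->
    # de-blur -> super-resolution -> frame-interpolation sequence.
    return [step for step in ENHANCE_ORDER if step in wanted]
```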
The video processing apparatus provided in the embodiment of the present application is configured to execute the video processing method, and a specific implementation manner of the video processing apparatus is consistent with the method implementation manner and can achieve the same beneficial effects, which is not described herein again.
Fig. 11 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 11, the electronic device may include: a processor (processor) 1110, a communication interface (Communications Interface) 1120, a memory (memory) 1130, and a communication bus 1140, wherein the processor 1110, the communication interface 1120, and the memory 1130 communicate with each other via the communication bus 1140. The processor 1110 may invoke logic instructions in the memory 1130 to perform a video processing method comprising: segmenting a video to be processed into a plurality of sub-videos; performing video interlaced scanning diagnosis on the sub-video, and outputting an interlaced scanning diagnosis result; performing noise and blur degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result; determining the resolution and frame rate of the sub-video; and correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video to obtain the target video.
In addition, the logic instructions in the memory 1130 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The processor 1110 in the electronic device provided in this embodiment of the present application may call a logic instruction in the memory 1130 to implement the video processing method, and a specific implementation manner of the method is consistent with that of the method, and the same beneficial effects may be achieved, which is not described herein again.
In another aspect, the present application further provides a computer program product, which is described below, and the computer program product described below and the video processing method described above may be referred to correspondingly.
The computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the video processing method provided by the above methods, the method comprising: segmenting a video to be processed into a plurality of sub-videos; performing video interlaced scanning diagnosis on the sub-video, and outputting an interlaced scanning diagnosis result; performing noise and blur degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result; determining the resolution and frame rate of the sub-video; and correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video to obtain the target video.
When the computer program product provided in the embodiment of the present application is executed, the video processing method is implemented, and the specific implementation manner is consistent with the method implementation manner, and the same beneficial effects can be achieved, which is not described herein again.
In yet another aspect, the present application further provides a non-transitory computer-readable storage medium, which is described below, and the non-transitory computer-readable storage medium described below and the video processing method described above may be referred to in correspondence.
The present application also provides a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the video processing method provided above, the method comprising: segmenting a video to be processed into a plurality of sub-videos; performing video interlaced scanning diagnosis on the sub-video, and outputting an interlaced scanning diagnosis result; performing noise and blur degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result; determining the resolution and frame rate of the sub-video; and correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video to obtain the target video.
When the computer program stored on the non-transitory computer readable storage medium provided in the embodiment of the present application is executed, the video processing method is implemented, and the specific implementation manner is consistent with the method implementation manner and can achieve the same beneficial effects, which is not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement the embodiments without creative effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may be modified or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (16)

1. A video processing method, comprising:
segmenting a video to be processed into a plurality of sub-videos;
performing video interlaced scanning diagnosis on the sub-video, and outputting an interlaced scanning diagnosis result;
performing noise and blur degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result;
determining the resolution and frame rate of the sub-video;
and correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video to obtain the target video.
2. The video processing method according to claim 1, wherein said correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video to obtain the target video comprises:
selecting corresponding options to be enhanced from interlacing removal, noise removal, blur removal, video super-resolution reconstruction and video frame interpolation performed on the sub-video based on an interlaced scanning diagnosis result, a noise degree diagnosis result, a blur degree diagnosis result, a resolution and a frame rate of the sub-video;
and executing the selected options to be enhanced in the order of de-interlacing, de-noising, de-blurring, video super-resolution reconstruction and video frame interpolation, to obtain the target video.
3. The video processing method according to claim 1, wherein said performing video interlaced diagnosis on said sub-video and outputting interlaced diagnosis results comprises:
computing the difference between adjacent odd and even lines of an image in the sub-video, and outputting a line difference value;
and outputting an interlaced scanning diagnosis result based on the line difference value.
4. The video processing method according to claim 1, wherein said performing noise blur degree diagnosis on said sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result comprises:
determining the difference value of each pixel of any frame image of the sub-video and the surrounding pixels of the pixel;
based on the difference, a noise degree diagnosis result and a blur degree diagnosis result are determined.
5. The video processing method according to any of claims 1 to 4, wherein said splitting the video to be processed into a plurality of sub-videos comprises:
determining the difference degree of two adjacent frames of images of the video to be processed;
under the condition that the difference degree is not larger than a target value, classifying the two adjacent frames of images into the same sub-video;
and under the condition that the difference degree is larger than a target value, classifying the two adjacent frames of images into different sub-videos.
6. The video processing method according to claim 5, wherein said determining a difference between two adjacent frames of images of the video to be processed comprises:
subtracting the pixels at the same position between the two adjacent frames of images, and outputting a pixel difference value;
determining the degree of difference based on the pixel difference value.
7. The video processing method of claim 6, wherein said determining said degree of difference based on said pixel difference value comprises:
calculating an average value of the pixel difference values of all pixels of the two adjacent frames of images, wherein the average value is used to characterize the difference degree;
or,
weighting different areas in the two adjacent frames of images, and determining a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images based on the weight of the area in which the pixel corresponding to each pixel difference value is located, wherein the weighted average value is used to characterize the difference degree;
or,
determining the position of a target object in the two adjacent frames of images, weighting the pixel difference values based on the pixels corresponding to the target object, and determining a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images, wherein the weighted average value is used to characterize the difference degree.
8. A video processing apparatus, comprising:
the segmentation module is used for segmenting the video to be processed into a plurality of sub-videos;
the first diagnosis module is used for carrying out video interlaced scanning diagnosis on the sub-video and outputting an interlaced scanning diagnosis result;
the second diagnosis module is used for performing noise and blur degree diagnosis on the sub-video, and outputting a noise degree diagnosis result and a blur degree diagnosis result;
the reading module is used for determining the resolution and the frame rate of the sub-video;
and the enhancement module is used for correspondingly enhancing the sub-video based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video to obtain the target video.
9. The video processing apparatus of claim 8, wherein the enhancement module comprises:
the screening module is used for selecting corresponding options to be enhanced from de-interlacing, de-noising, de-blurring, video super-resolution reconstruction and video frame interpolation performed on the sub-video, based on the interlaced scanning diagnosis result, the noise degree diagnosis result, the blur degree diagnosis result, the resolution and the frame rate of the sub-video;
and the execution module is used for executing the selected options to be enhanced in the order of de-interlacing, de-noising, de-blurring, video super-resolution reconstruction and video frame interpolation, to obtain the target video.
10. The video processing apparatus of claim 8, wherein the first diagnostic module comprises:
the first processing module is used for performing difference on the odd lines and the even lines adjacent to the image in the sub-video and outputting a line difference value;
a first determining module for outputting an interlaced scanning diagnosis result based on the line difference value.
11. The video processing apparatus of claim 8, wherein the second diagnostic module comprises:
the second processing module is used for determining the difference value of each pixel of any frame image of the sub-video and the pixels around the pixel;
and the second determination module is used for determining the noise degree diagnosis result and the blur degree diagnosis result based on the difference value.
12. The video processing apparatus according to any of claims 8 to 11, wherein the segmentation module comprises:
the judging module is used for determining the difference degree of two adjacent frames of images of the video to be processed;
the dividing module is used for classifying the two adjacent frames of images into the same sub-video under the condition that the difference degree is not greater than a target value; and under the condition that the difference degree is larger than a target value, classifying the two adjacent frames of images into different sub-videos.
13. The video processing apparatus according to claim 12, wherein the judging module comprises:
the third processing module is used for subtracting the pixels at the same position between the two adjacent frames of images and outputting a pixel difference value;
a third determining module for determining the difference degree based on the pixel difference value.
14. The video processing apparatus of claim 13, wherein the third determining module
is further configured to calculate an average value of the pixel difference values of all pixels of the two adjacent frames of images, wherein the average value is used to characterize the difference degree;
or,
is further configured to weight different areas in the two adjacent frames of images, and determine a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images based on the weight of the area in which the pixel corresponding to each pixel difference value is located, wherein the weighted average value is used to characterize the difference degree;
or,
is further configured to determine the position of a target object in the two adjacent frames of images, weight the pixel difference values based on the pixels corresponding to the target object, and determine a weighted average value of the pixel difference values of all pixels of the two adjacent frames of images, wherein the weighted average value is used to characterize the difference degree.
15. An electronic device comprising a memory, a processor and a computer program stored on said memory and executable on said processor, characterized in that said processor, when executing the program, carries out the steps of the video processing method according to any of claims 1 to 7.
16. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the video processing method according to any one of claims 1 to 7.
CN202011359736.XA 2020-11-27 2020-11-27 Video processing method, video processing apparatus, electronic device, and storage medium Pending CN112686811A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011359736.XA CN112686811A (en) 2020-11-27 2020-11-27 Video processing method, video processing apparatus, electronic device, and storage medium


Publications (1)

Publication Number Publication Date
CN112686811A true CN112686811A (en) 2021-04-20

Family

ID=75446893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011359736.XA Pending CN112686811A (en) 2020-11-27 2020-11-27 Video processing method, video processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112686811A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961186A (en) * 2018-06-29 2018-12-07 赵岩 A kind of old film reparation recasting method based on deep learning
CN110446062A (en) * 2019-07-18 2019-11-12 平安科技(深圳)有限公司 Receiving handling method, electronic device and the storage medium of large data files transmission
CN110738611A (en) * 2019-09-20 2020-01-31 网宿科技股份有限公司 video image quality enhancement method, system and equipment


Similar Documents

Publication Publication Date Title
CN108805829B (en) Image data processing method, device, equipment and computer readable storage medium
EP2164040A1 (en) System and method for high quality image and video upscaling
KR20110002858A (en) Filtering method and apparatus for anti-aliasing
JP2012208553A (en) Image processing device, image processing method, and program
US9613405B2 (en) Scalable massive parallelization of overlapping patch aggregation
CN111951172A (en) Image optimization method, device, equipment and storage medium
Erkan et al. Improved adaptive weighted mean filter for salt-and-pepper noise removal
JP5105286B2 (en) Image restoration apparatus, image restoration method, and image restoration program
KR20140109801A (en) Method and apparatus for enhancing quality of 3D image
CN111882565A (en) Image binarization method, device, equipment and storage medium
CN111696064B (en) Image processing method, device, electronic equipment and computer readable medium
CN113012061A (en) Noise reduction processing method and device and electronic equipment
CN110136085B (en) Image noise reduction method and device
CN101141655A (en) Video signal picture element point chromatic value regulation means
CN111598794A (en) Image imaging method and device for removing underwater overlapping condition
CN111147804B (en) Video frame reconstruction method
CN112686811A (en) Video processing method, video processing apparatus, electronic device, and storage medium
CN111415317A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111311610A (en) Image segmentation method and terminal equipment
CN111754413A (en) Image processing method, device, equipment and storage medium
CN111415365A (en) Image detection method and device
CN115330637A (en) Image sharpening method and device, computing device and storage medium
CN114994098A (en) Foreign matter detection method and device
CN110111286B (en) Method and device for determining image optimization mode
CN113674144A (en) Image processing method, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220812

Address after: 13th Floor, Jingu Jingu Artificial Intelligence Building, Jingshi Road, Jinan Free Trade Pilot Zone, Jinan City, Shandong Province, 250000

Applicant after: Shenlan Artificial Intelligence Application Research Institute (Shandong) Co.,Ltd.

Address before: 200336 unit 1001, 369 Weining Road, Changning District, Shanghai

Applicant before: DEEPBLUE TECHNOLOGY (SHANGHAI) Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210420