CN114598874B - Video quantization coding and decoding method, device, equipment and storage medium - Google Patents

Video quantization coding and decoding method, device, equipment and storage medium

Info

Publication number
CN114598874B
CN114598874B CN202210068433.5A
Authority
CN
China
Prior art keywords
video
video frame
frame
quantization
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210068433.5A
Other languages
Chinese (zh)
Other versions
CN114598874A (en)
Inventor
王卫宁
刘静
孙铭真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Zidong Taichu (Beijing) Technology Co.,Ltd.
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202210068433.5A priority Critical patent/CN114598874B/en
Publication of CN114598874A publication Critical patent/CN114598874A/en
Application granted granted Critical
Publication of CN114598874B publication Critical patent/CN114598874B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a video quantization coding and decoding method, a device, equipment and a storage medium, wherein the method comprises the following steps: inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization feature codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N; inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features; according to the M first video frame characteristics, reconstructing first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism to obtain M first reference frame characteristics; outputting a reconstructed video based on the M first video frame features and the M first reference frame features.

Description

Video quantization coding and decoding method, device, equipment and storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video quantization encoding and decoding method, apparatus, device, and storage medium.
Background
Almost all video currently seen, including television, internet, cell phones, etc., is encoded and decoded.
Video coding converts a file in an original video format into a file in another video format through compression technology. Video coding is increasingly widely used because encoded files contain less data and are easier to process. Video decoding restores the encoded video file to the original video format.
The video reconstruction task in existing video decoding is only a simple extension of image reconstruction, and it is difficult to synthesize video with a high degree of restoration.
Disclosure of Invention
The invention provides a video quantization coding and decoding method, a device, equipment and a storage medium, which are used for overcoming the defect of low video decoding restoration fidelity in the prior art.
The invention provides a video quantization coding and decoding method, which comprises the following steps: inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization characteristic codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N; inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features; according to the M first video frame characteristics, reconstructing first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism to obtain M first reference frame characteristics; outputting a reconstructed video based on the M first video frame features and the M first reference frame features.
According to a video quantization encoding and decoding method provided by the present invention, the encoding module for inputting N video frames of an original video into a video processing model and outputting quantization characteristic codes of M video frames comprises: inputting N video frames of an original video into a coding module of a video processing model, and coding the N video frames of the original video to obtain first feature codes of M video frames; determining the corresponding feature code of each video frame in the codebook based on the Euclidean distance between the first feature code of each video frame and each feature code in the codebook, wherein the codebook comprises a plurality of discrete hidden layer feature codes; and outputting the quantized feature codes of the M video frames based on the corresponding feature codes of each video frame in the codebook.
According to a video quantization encoding and decoding method provided by the present invention, the outputting a reconstructed video based on the M first video frame characteristics and the M first reference frame characteristics includes: respectively aligning each first video frame feature with the corresponding first reference frame feature to obtain M aligned first video frame features; fusing the M aligned first video frame features and the M first video frame features through a time and space attention mechanism to obtain M fused first video frame features; up-sampling the M fused first video frame characteristics to obtain X reconstructed video frame characteristics; and outputting a reconstructed video based on the X reconstructed video frame characteristics, wherein X is a positive integer.
According to the video quantization encoding and decoding method provided by the present invention, outputting a reconstructed video based on the X reconstructed video frame characteristics includes: taking the X reconstructed video frame characteristics as X second video frame characteristics, and reconstructing a second reference frame characteristic corresponding to each second video frame characteristic through a time axis attention mechanism according to the X second video frame characteristics to obtain X second reference frame characteristics; respectively aligning each second video frame feature with the corresponding second reference frame feature to obtain X aligned second video frame features; fusing the X aligned second video frame features and the X second video frame features through a time and space attention mechanism to obtain X fused second video frame features; up-sampling the X fused second video frame features to obtain Y reconstructed video frame target features; performing up-sampling on the Y reconstructed video frame target characteristics to obtain target video characteristics; and outputting a reconstructed video based on the target video characteristics, wherein Y is a positive integer.
According to a video quantization encoding and decoding method provided by the present invention, before inputting N video frames of an original video into an encoding module of a video processing model, the method further comprises: for any video sample, inputting the video sample into the video processing model, and outputting a prediction reconstruction video corresponding to the video sample; calculating a loss value according to the prediction reconstruction video corresponding to the video sample and the video sample by using a preset loss function; and if the loss value is smaller than a preset threshold value, finishing the training of the video processing model.
According to the video quantization encoding and decoding method provided by the invention, the preset loss function is

L = MSE(X, X_rec) + ||sg[E] - VQ(E)||^2 + β||E - sg[VQ(E)]||^2

wherein X is the original video, X_rec is the reconstructed video, MSE(·, ·) is the mean square error loss, sg is the gradient stopping operation, E is the video feature output by the video coding module, VQ is the feature quantization operation, and β is a hyper-parameter of model training.
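By way of illustration only, the following is a minimal PyTorch-style sketch of a loss of this form, combining the mean square error reconstruction term with the two stop-gradient quantization terms described above. The function name, argument layout and default β value are assumptions made for this sketch and are not prescribed by the embodiment.

```python
import torch
import torch.nn.functional as F

def vq_codec_loss(x, x_rec, e, vq_e, beta=0.25):
    """Sketch of the preset loss: MSE reconstruction term plus codebook and
    commitment terms built with stop-gradient (detach).

    x     : original video tensor X
    x_rec : reconstructed video tensor X_rec
    e     : features E output by the video coding module
    vq_e  : quantized features VQ(E)
    beta  : hyper-parameter of model training
    """
    rec_loss = F.mse_loss(x_rec, x)               # MSE(X, X_rec)
    codebook_loss = F.mse_loss(vq_e, e.detach())  # ||sg[E] - VQ(E)||^2
    commit_loss = F.mse_loss(e, vq_e.detach())    # ||E - sg[VQ(E)]||^2
    return rec_loss + codebook_loss + beta * commit_loss
```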
The present invention also provides a video quantization encoding and decoding device, which comprises:
the first output module is used for inputting N video frames of an original video into the coding module of the video processing model and outputting quantization characteristic codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N;
the mapping module is used for inputting the quantization feature codes of the M video frames into a decoding module of a video processing model and mapping the quantization feature codes of the M video frames into M first video frame features;
the reconstruction module is used for reconstructing the first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism according to the M first video frame characteristics to obtain M first reference frame characteristics;
a second output module, configured to output a reconstructed video based on the M first video frame features and the M first reference frame features.
The present invention further provides an electronic device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the video quantization encoding and decoding method as described in any one of the above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video quantization encoding and decoding method as described in any one of the above.
The present invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the video quantization coding and decoding method according to any of the above.
According to the video quantization coding and decoding method, device, equipment and storage medium provided by the invention, firstly, the original video is coded by utilizing the video processing model obtained by pre-training to obtain the quantization characteristic code, so that the data volume can be reduced, and compared with the mode that a user processes the original video, the processing of the quantization characteristic code is more convenient. Secondly, when the quantized feature codes are decoded by using the video processing model obtained by pre-training, a time axis attention mechanism is adopted to reconstruct the reference frames, so that accurate and effective reference frames can be obtained, and the reconstructed video is output based on the M first video frame features and the M first reference frame features, so that the reconstructed video has richer details, and further the high-quality reconstructed video is obtained.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating a video quantization encoding and decoding method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a video quantization encoding and decoding method according to an embodiment of the present invention;
fig. 3 is a detailed flowchart of a video quantization encoding and decoding method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video quantization encoding and decoding device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application are capable of operation in sequences other than those illustrated or described herein, and that the terms "first," "second," etc. are generally used in a generic sense and do not limit the number of terms, e.g., a first term can be one or more than one. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/", and generally means that the former and latter related objects are in an "or" relationship.
The following describes the video processing method, apparatus, device and storage medium provided by the present application in detail by specific embodiments with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video quantization encoding and decoding method according to an embodiment of the present invention, as shown in fig. 1, including:
step 110, inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization feature codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N;
specifically, the original video described in the embodiment of the present application may be a black and white video, or may be a color video. The color original video can be represented by X, X belongs to R N×H×W×3 Where N denotes that the original video contains N video frames, H and W denote that the spatial resolution of each video frame is H × W, and 3 denotes three color channels R, G, and B.
The video processing model described in the embodiment of the application is a model trained in advance. The video processing model includes a coding module, which can perform quantization coding on the original video and output the quantization feature codes of M video frames. The quantization feature codes of the M video frames are the quantization feature codes corresponding to the original video and can be represented by VQ(E).
Specifically, downsampling processing is performed during quantization encoding of an original video, and the number of video frames output by an encoding module of the video processing model may be less than the number of original video frames, that is, M is less than or equal to N.
Furthermore, after the quantization feature code corresponding to the original video is obtained, the user can directly perform required operation on the quantization feature code without processing the complex original video.
Step 120, inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features;
specifically, in this embodiment of the present application, the quantization feature codes of the M video frames input to the decoding module may be identical quantization feature codes output by the encoding module, or may be quantization feature codes processed by the user.
The video processing model described in the embodiment of the present application further includes a decoding module. After the quantization feature codes of the M video frames are input to the decoding module of the video processing model, the quantization feature codes are first mapped into M first video frame features by a fully connected layer and a residual convolutional network, where the M first video frame features may be represented by z.
Step 130, reconstructing a first reference frame feature corresponding to each first video frame feature through a time axis attention mechanism according to the M first video frame features to obtain M first reference frame features;
specifically, reconstructing the first reference frame feature corresponding to the first video frame feature at the time t is to use the first video frame feature at the time t as a reference frame, and when reconstructing the reference frame, use a time axis attention mechanism to fuse features of the same pixel position in all other frames (video frames at all times).
Similarly, the M first reference frame features can be obtained by reconstructing the first reference frame feature corresponding to each first video frame feature. The dimensions of the M first reference frame features are the same as the dimensions of the M first video frame features.
More specifically, the M first reference frame features may be represented as z_r, z_r = RFC(z).
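For illustration only, the sketch below shows one possible form of the time axis attention used to reconstruct reference frame features: for every pixel position, the features of the same position in all frames are fused with attention weights. The class name, layer choices and tensor layout are assumptions for this sketch; the embodiment does not prescribe this exact implementation.

```python
import torch
import torch.nn as nn

class TimeAxisAttention(nn.Module):
    """Sketch: reconstruct a reference frame feature for each time step by
    attending, per pixel position, over the features of all frames."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, z):
        # z: (M, H, W, D) -- M first video frame features with D channels
        m, h, w, d = z.shape
        x = z.reshape(m, h * w, d)                  # per-pixel sequences over time
        q, k, v = self.q(x), self.k(x), self.v(x)
        # attention over the time axis at each pixel position
        attn = torch.softmax(
            torch.einsum('mpd,npd->pmn', q, k) / d ** 0.5, dim=-1)  # (HW, M, M)
        z_r = torch.einsum('pmn,npd->mpd', attn, v)
        return z_r.reshape(m, h, w, d)              # M reconstructed reference features

# z_r = RFC(z) in the text; here RFC is only sketched as TimeAxisAttention.
```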
Step 140, outputting a reconstructed video based on the M first video frame characteristics and the M first reference frame characteristics.
Specifically, the subsequent processing is continued in a decoding module of the video processing model, and the reconstructed video is output based on the M first video frame features and the M first reference frame features.
In this embodiment, first, a video processing model obtained through pre-training is used to encode an original video to obtain a quantization feature code, so that the data size can be reduced, and compared with the case that a user processes the original video, the processing of the quantization feature code is more convenient. Secondly, when the quantized feature codes are decoded by using the video processing model obtained by pre-training, a time axis attention mechanism is adopted to reconstruct the reference frames, so that accurate and effective reference frames can be obtained, and the reconstructed video is output based on the M first video frame features and the M first reference frame features, so that the reconstructed video has richer details, and the high-quality reconstructed video is obtained.
Optionally, the inputting N video frames of the original video into a coding module of the video processing model and outputting quantized feature codes of M video frames includes:
inputting N video frames of an original video into a coding module of a video processing model, and coding the N video frames of the original video to obtain first feature codes of M video frames;
determining the corresponding feature code of each video frame in the codebook based on the Euclidean distance between the first feature code of each video frame and each feature code in the codebook, wherein the codebook comprises a plurality of discrete hidden layer feature codes;
and outputting the quantized feature codes of the M video frames based on the corresponding feature codes of each video frame in the codebook.
Specifically, the original video, expressed as X ∈ R^(N×H×W×3), is input into the video processing model. First, the original video X is encoded using a 3D convolutional network and a residual network to obtain continuous video feature vectors, namely the first feature codes of the M video frames. The expression for encoding the original video is E = f(X).
The down-sampling is completed during the encoding process, and the down-sampling ratio is (s, f, f), where s corresponds to the video frames and f corresponds to the resolution. The resulting first feature codes of the M video frames can be expressed as E ∈ R^(M×H/f×W/f×D), i.e. E ∈ R^(N/s×H/f×W/f×D), where D is the number of hidden layer nodes.
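A minimal sketch of an encoder with this input/output shape is given below, assuming a 3D convolution stack with residual blocks and a down-sampling ratio of (s, f, f) = (4, 8, 8); the layer counts, kernel sizes and channel widths are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class VideoEncoder(nn.Module):
    """Sketch: encode a video into E with down-sampling ratio (4, 8, 8)."""

    def __init__(self, d=256):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(2, 4, 4), stride=(2, 4, 4)),  # /(2,4,4)
            nn.ReLU(inplace=True),
            nn.Conv3d(64, d, kernel_size=(2, 2, 2), stride=(2, 2, 2)),  # /(2,2,2)
            ResBlock3D(d), ResBlock3D(d))

    def forward(self, x):
        # x: (batch, 3, N, H, W) video -> E: (batch, D, N/4, H/8, W/8)
        return self.down(x)

# e.g. x = torch.randn(1, 3, 32, 256, 256) gives E of shape (1, 256, 8, 32, 32)
```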
The codebook e ∈ R^(T×D) described in the embodiments of this application includes T discrete hidden layer feature codes and is used to determine the quantization feature code of each video frame.
Specifically, the feature code corresponding to the video frame at time t in the codebook is determined by calculating the Euclidean distances between the first feature code E_(t,i,j) at time t and the T hidden layer feature codes in the codebook, and selecting the hidden layer code in the codebook closest to E_(t,i,j) as the quantization feature code of the video frame at time t. The formula is expressed as follows:

VQ(E)_(t,i,j) = e_k, where k = argmin_k || E_(t,i,j) - e_k ||_2
Similarly, the quantization code of each video frame can be determined. Once the quantization code of each video frame is determined, the quantization feature codes of the M video frames are obtained, denoted VQ(E) ∈ R^(N/s×H/f×W/f×D), and the quantization feature codes of the M video frames are output.
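As an illustration of this nearest-neighbour lookup, the sketch below quantizes each feature vector to its closest codebook entry under Euclidean distance; the tensor layout and function name are assumptions.

```python
import torch

def quantize(e, codebook):
    """Sketch of VQ(E): map each D-dim feature to its nearest codebook entry.

    e        : (M, H', W', D) first feature codes of the M video frames
    codebook : (T, D) discrete hidden layer feature codes
    returns  : quantized features of the same shape as e, plus the indices
    """
    m, h, w, d = e.shape
    flat = e.reshape(-1, d)                    # (M*H'*W', D)
    dist = torch.cdist(flat, codebook)         # Euclidean distance to every entry
    idx = dist.argmin(dim=1)                   # nearest codebook entry per position
    vq_e = codebook[idx].reshape(m, h, w, d)   # VQ(E)
    return vq_e, idx.reshape(m, h, w)
```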
In this embodiment, the original video is encoded first, and then the quantization feature codes of M video frames are determined based on the distance between the original video and each hidden layer feature code in the codebook, so that the accurate quantization feature code of the original video can be obtained. In the prior art, the quantization coding of a video is usually processed into the concatenation of the quantization coding of a plurality of image frames, so that the video quantization characteristic coding length is too long, and the processing is inconvenient for users. The original video is compressed from two dimensions of time and space, so that the length of video quantization coding is reduced, and convenience is brought to a user to process.
Optionally, the outputting a reconstructed video based on the M first video frame features and the M first reference frame features includes:
respectively aligning each first video frame feature with the corresponding first reference frame feature to obtain M aligned first video frame features;
fusing the M aligned first video frame features and the M first video frame features through a time and space attention mechanism to obtain M fused first video frame features;
up-sampling the M fused first video frame features to obtain X reconstructed video frame features;
and outputting a reconstructed video based on the X reconstructed video frame characteristics, wherein X is a positive integer.
Specifically, in the embodiment of the present application, the M first video frame features are expressed as z ∈ R^(N/s×H/f×W/f×D), i.e. z ∈ R^(M×H/f×W/f×D).
Fig. 2 is a second flowchart of the video quantization encoding and decoding method according to the embodiment of the present invention. As shown in fig. 2, feature extraction is first performed on the original video to be processed using a 3D convolutional neural network and a residual network. The Euclidean distances between the first feature code of the video frame at each moment and the feature codes in the codebook are calculated, and the feature code in the codebook closest to the first feature code is queried as the quantization feature code of that video frame. The quantization feature codes are then input into a fully connected layer and a residual network to obtain the video frame features. The reference frame features are reconstructed through a time axis attention mechanism according to the video frame features at each moment.
Image frames are aligned at the feature level using pyramid cascading and deformable convolution. Specifically, since the reconstructed first reference frame feature z_r and the first video frame feature z_t may be misaligned, Pyramid Cascading and Deformable convolution (PCD) is adopted as the video frame alignment module, and the image frames are aligned at the feature level, i.e. the first video frame feature at time t is aligned with the first reference frame feature at time t. Similarly, each first video frame feature is aligned with the corresponding first reference frame feature. The expression for obtaining the M aligned first video frame features is:

z_a = PCD(z_r ∣ z′)

z_a ∈ R^(N/s×H/f×W/f×D)

The dimensions of the M aligned first video frame features are the same as the dimensions of the M first video frame features.
Further, a temporal and spatial attention fusion module is adopted to perform feature fusion on the video frame features. Specifically, due to lens jitter, target motion and other reasons, different video frames in the same video are blurred to different degrees, so different video frames contribute differently to the restoration/enhancement of the reference frame. Conventional methods generally treat them as equally important, which is not the case. Therefore, an attention mechanism is introduced to give different weights to different feature maps in the two dimensions of space and time, that is, a Temporal and Spatial Attention module (TSA) is adopted as the fusion module of the video frames to fuse the M first video frame features z with the M aligned first video frame features z_a, obtaining the M fused first video frame features z′. The formula is expressed as follows:

z′ = TSA(z, z_a)

z′ ∈ R^(N/s×H/f×W/f×D)

The dimensions of the M fused first video frame features are the same as the dimensions of the M first video frame features.
The fused video frame features are up-sampled using 3D convolution to obtain the reconstructed video. Specifically, 3D convolution is used to up-sample the video features z′ at the ratio (b, c, c) to obtain the X reconstructed video frame features z_×n. The formula is expressed as:

z_×n = Upsample_(b×c×c)(z′)

z_×n ∈ R^((b×N/s)×(c×H/f)×(c×W/f)×D)
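A minimal sketch of this up-sampling step with a 3D transposed convolution is shown below, assuming (b, c, c) = (2, 2, 2) and a channel width of 256; both values are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Sketch: up-sample the fused features z' by the ratio (b, c, c) = (2, 2, 2)
# over (frames, height, width) with a 3D transposed convolution.
upsample = nn.ConvTranspose3d(in_channels=256, out_channels=256,
                              kernel_size=(2, 2, 2), stride=(2, 2, 2))

z_prime = torch.randn(1, 256, 8, 32, 32)   # (batch, D, N/s, H/f, W/f)
z_up = upsample(z_prime)                    # -> (1, 256, 16, 64, 64)
```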
optionally, the X reconstructed video frame features are used as X second video frame features, and according to the X second video frame features, a second reference frame feature corresponding to each second video frame feature is reconstructed through a time axis attention mechanism to obtain X second reference frame features; respectively aligning each second video frame feature with the corresponding second reference frame feature to obtain X aligned second video frame features; fusing the X aligned second video frame features and the X second video frame features through a time and space attention mechanism to obtain X fused second video frame features; up-sampling the X fused second video frame features to obtain Y reconstructed video frame target features; performing up-sampling on the Y reconstructed video frame target characteristics to obtain target video characteristics; and outputting a reconstructed video based on the target video characteristics, wherein Y is a positive integer.
Specifically, after the X reconstructed video frame features z_×n are obtained, the X reconstructed video frame features are used as X second video frame features, and second reference frame reconstruction, image frame alignment and image frame fusion are performed; the principles are the same as in the foregoing embodiments and are not repeated here.
Furthermore, the dimensions of the X second reference frame features, the X aligned second video frame features, the X fused second video frame features, and the X second video frame features are the same, i.e., the dimensions are (b × N/s) × (c × H/f) × (c × W/f) × D.
3D convolution is used to up-sample the X fused second video frame features at the ratio (b, c, c) to obtain the Y reconstructed video frame target features.
3D convolution is then used to up-sample the Y reconstructed video frame target features to obtain the target video features, that is, to obtain the reconstructed video X_rec.
More specifically, the up-sampling and down-sampling ratios may be set as required; for example, the down-sampling ratio (s, f, f) may take the value (4, 8, 8), and the up-sampling ratio (b, c, c) may take the value (2, 2, 2). As another example, in order to make the reconstructed video closer to the original, the number of video frames and the resolution of the reconstructed video may be set to be the same as those of the original video. The up-sampling and down-sampling ratios are not specifically limited here.
In this embodiment, when the quantization feature codes are decoded, a reconstructed video with a higher degree of restoration can be obtained through reference frame reconstruction, video frame alignment, video frame fusion and 3D up-sampling convolution.
Optionally, before the inputting N video frames of the original video into the encoding module of the video processing model, the method further includes: for any video sample, inputting the video sample into the video processing model, and outputting a prediction reconstruction video corresponding to the video sample; calculating a loss value according to the prediction reconstruction video corresponding to the video sample and the video sample by using a preset loss function; and if the loss value is smaller than a preset threshold value, finishing the training of the video processing model.
Specifically, before the video processing model is used for encoding and decoding, the video processing model also needs to be trained, and the specific training process is as follows:
after obtaining a plurality of video samples, for any one video sample, inputting the video sample into a video processing model, and outputting a prediction reconstructed video corresponding to the video sample. On the basis, a preset loss function is utilized to calculate a loss value according to the video sample and the prediction reconstruction video. The preset loss function can be set according to actual requirements, and the times are not specifically limited. After the loss value is obtained through calculation, the training process is finished, model parameters in the video processing model are updated, and then the next training is carried out. In the training process, if the loss value obtained by calculation for a certain video sample is smaller than a preset threshold value, the training of the video processing model is completed.
Further, in the embodiment of the present invention, the MSE, PSNR and SSIM indexes of the video processing model of the present invention and of a conventional video coding and decoding model are evaluated after 150,000 and 200,000 training iterations respectively. The results are shown in Table 1 below:
table 1 conventional video coding and decoding model and video processing model detection results of the present invention:
as can be seen from the table, the accuracy of the reconstructed video obtained by decoding with the conventional single upsampled convolutional layer is significantly lower than that of the video reconstructed by using the decoding module of the present application. The difference increases with the increase of the total iteration times, after 20 ten thousand iterations, the difference of MSE loss reaches 0.007, the difference of PSNR index reaches 0.75, and the difference of SSIM index reaches 0.02. And the method and the device can accurately reconstruct the video with the resolution as high as 256 multiplied by 256.
In addition, updating the model parameters during training also includes updating the codebook, i.e. updating the discrete hidden layer feature codes in the codebook. Since argmin is not differentiable, the discrete hidden layer feature codes in the codebook are updated using an exponential moving average method with momentum updates.
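The exponential moving average update of the codebook might be sketched as follows, in the style commonly used for vector-quantized models; the decay value and the running buffers are assumptions for this illustration.

```python
import torch

@torch.no_grad()
def ema_codebook_update(codebook, cluster_size, embed_avg, flat_e, idx, decay=0.99):
    """Sketch: update codebook entries with an exponential moving average,
    since argmin is not differentiable.

    codebook     : (T, D) discrete hidden layer feature codes
    cluster_size : (T,)   running count of assignments per entry
    embed_avg    : (T, D) running sum of features assigned to each entry
    flat_e       : (P, D) encoder features for the current batch
    idx          : (P,)   index of the nearest codebook entry per feature
    """
    one_hot = torch.zeros(flat_e.size(0), codebook.size(0), device=flat_e.device)
    one_hot.scatter_(1, idx.unsqueeze(1), 1.0)
    # momentum (EMA) updates of assignment counts and feature sums
    cluster_size.mul_(decay).add_(one_hot.sum(0), alpha=1 - decay)
    embed_avg.mul_(decay).add_(one_hot.t() @ flat_e, alpha=1 - decay)
    codebook.copy_(embed_avg / cluster_size.clamp(min=1e-5).unsqueeze(1))
```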
In this embodiment, training the video processing model helps keep the loss value of the video processing model within a preset range, which in turn helps improve the degree of restoration achieved by the video processing model in video reconstruction.
Optionally, the predetermined loss function is

L = MSE(X, X_rec) + ||sg[E] - VQ(E)||^2 + β||E - sg[VQ(E)]||^2

wherein X is the original video, X_rec is the reconstructed video, MSE(·, ·) is the mean square error loss, sg is the gradient stopping operation, E is the video feature output by the video coding module, VQ is the feature quantization operation, and β is a hyper-parameter of model training.
Specifically, the loss value calculated by the preset loss function described in the embodiment of the present application includes a video reconstruction loss value and a quantization coding loss value.
In the embodiment, the loss value is calculated while considering the video reconstruction loss value and the quantization coding loss value, so that the model can be better trained, and the video reconstruction restoration degree of the video processing model can be improved.
Fig. 3 is a detailed flowchart of a video quantization encoding and decoding method according to an embodiment of the present invention, as shown in fig. 3, including:
the method comprises the steps of inputting an original video into a video processing model, coding the original video by utilizing a 3D convolution downsampling network and a residual error network to obtain a first feature code, calculating Euclidean distance based on the first feature code and a discrete hidden layer feature code in a codebook (codebook), determining a quantization feature code, and outputting the quantization feature code. And the full connection layer and the residual error network map the quantized feature codes into first video frame features, and perform reference frame reconstruction, image frame alignment, fusion and 3D convolution upsampling on the first video frame features by the time and space attention fusion module to obtain reconstructed video frame features, and perform reference frame reconstruction, image frame alignment, fusion and 3D convolution upsampling on the reconstructed video frame features as second video frame features by the time and space attention fusion module again to obtain reconstructed video frame target features. And performing 3D convolution upsampling on the target characteristics of the reconstructed video frame to obtain target video characteristics, and outputting a reconstructed video.
The following describes the video quantization coding and decoding device provided by the present invention, and the video quantization coding and decoding device described below and the video quantization coding and decoding method described above can be referred to correspondingly.
Fig. 4 is a schematic structural diagram of a video quantization encoding and decoding device according to an embodiment of the present invention, as shown in fig. 4, including: a first output module 410, a mapping module 420, a reconstruction module 430, and a second output module 440; the first output module 410 is configured to input N video frames of an original video to a coding module of a video processing model, and output quantization feature codes of M video frames, where M and N are positive integers, and M is less than or equal to N; the mapping module 420 is configured to input the quantization feature codes of the M video frames into a decoding module of a video processing model, and map the quantization feature codes of the M video frames into M first video frame features; the reconstruction module 430 is configured to reconstruct, according to the M first video frame features, a first reference frame feature corresponding to each first video frame feature through a time axis attention mechanism, to obtain M first reference frame features; the second output module 440 is configured to output a reconstructed video based on the M first video frame characteristics and the M first reference frame characteristics.
Optionally, the first output module is specifically configured to input N video frames of an original video to a coding module of a video processing model, and perform coding processing on the N video frames of the original video to obtain first feature codes of M video frames; determining the corresponding feature code of each video frame in the codebook based on the Euclidean distance between the first feature code of each video frame and each feature code in the codebook, wherein the codebook comprises a plurality of discrete hidden layer feature codes; and outputting the quantized feature codes of the M video frames based on the corresponding feature codes of each video frame in the codebook.
Optionally, the second output module is specifically configured to align each first video frame feature with the corresponding first reference frame feature, respectively, to obtain M aligned first video frame features; fusing the M aligned first video frame features and the M first video frame features through a temporal and spatial attention mechanism to obtain M fused first video frame features; up-sampling the M fused first video frame features to obtain X reconstructed video frame features; outputting a reconstructed video based on the X reconstructed video frame characteristics, wherein X is a positive integer.
Optionally, the second output module is specifically configured to use the X reconstructed video frame features as X second video frame features, and reconstruct, according to the X second video frame features, a second reference frame feature corresponding to each second video frame feature through a time axis attention mechanism, so as to obtain X second reference frame features; respectively aligning each second video frame feature with the corresponding second reference frame feature to obtain X aligned second video frame features; fusing the X aligned second video frame features and the X second video frame features through a time and space attention mechanism to obtain X fused second video frame features; up-sampling the X fused second video frame features to obtain Y reconstructed video frame target features; performing up-sampling on the Y reconstructed video frame target characteristics to obtain target video characteristics; and outputting a reconstructed video based on the target video characteristics, wherein Y is a positive integer.
Optionally, the apparatus further comprises:
the training module is used for inputting the video sample to the video processing model for any video sample and outputting a prediction reconstruction video corresponding to the video sample; calculating a loss value according to the prediction reconstruction video corresponding to the video sample and the video sample by using a preset loss function; and if the loss value is smaller than a preset threshold value, finishing the training of the video processing model.
Optionally, the predetermined loss function is

L = MSE(X, X_rec) + ||sg[E] - VQ(E)||^2 + β||E - sg[VQ(E)]||^2

wherein X is the original video, X_rec is the reconstructed video, MSE(·, ·) is the mean square error loss, sg is the gradient stopping operation, E is the video feature output by the video coding module, VQ is the feature quantization operation, and β is a hyper-parameter of model training.
In this embodiment, first, a video processing model obtained through pre-training is used to encode an original video to obtain a quantization feature code, so that the data size can be reduced, and compared with the case that a user processes the original video, the processing of the quantization feature code is more convenient. Secondly, when the quantized feature codes are decoded by using the video processing model obtained by pre-training, a time axis attention mechanism is adopted to reconstruct the reference frames, so that accurate and effective reference frames can be obtained, and the reconstructed video is output based on the M first video frame features and the M first reference frame features, so that the reconstructed video has richer details, and further the high-quality reconstructed video is obtained.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 5, the electronic device may include: a processor (processor) 510, a communication Interface (Communications Interface) 520, a memory (memory) 530, and a communication bus 540, wherein the processor 510, the communication Interface 520, and the memory 530 communicate with each other via the communication bus 540. Processor 510 may invoke logic instructions in memory 530 to perform a video quantization codec method comprising: inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization characteristic codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N; inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features; according to the M first video frame characteristics, reconstructing first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism to obtain M first reference frame characteristics; outputting a reconstructed video based on the M first video frame features and the M first reference frame features.
Furthermore, the logic instructions in the memory 530 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, where the computer program product includes a computer program, the computer program can be stored on a non-transitory computer readable storage medium, and when the computer program is executed by a processor, a computer can execute the video quantization coding and decoding method provided by the above methods, where the method includes: inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization feature codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N; inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features; according to the M first video frame characteristics, reconstructing first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism to obtain M first reference frame characteristics; outputting a reconstructed video based on the M first video frame features and the M first reference frame features.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the video quantization coding and decoding method provided by the above methods, the method including: inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization feature codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N; inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features; according to the M first video frame characteristics, reconstructing first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism to obtain M first reference frame characteristics; outputting a reconstructed video based on the M first video frame features and the M first reference frame features.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A video quantization encoding and decoding method, comprising:
inputting N video frames of an original video into a coding module of a video processing model, and outputting quantization characteristic codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N;
inputting the quantization feature codes of the M video frames into a decoding module of a video processing model, and mapping the quantization feature codes of the M video frames into M first video frame features;
according to the M first video frame characteristics, reconstructing first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism to obtain M first reference frame characteristics;
outputting a reconstructed video based on the M first video frame features and the M first reference frame features;
wherein, the inputting N video frames of the original video into the coding module of the video processing model and outputting quantization feature codes of M video frames includes:
inputting N video frames of an original video into a coding module of a video processing model, and coding the N video frames of the original video to obtain first feature codes of M video frames;
determining the corresponding feature code of each video frame in the codebook based on the Euclidean distance between the first feature code of each video frame in the M video frames and each feature code in the codebook, wherein the codebook comprises a plurality of discrete hidden layer feature codes;
and outputting the quantized feature codes of the M video frames based on the corresponding feature codes of each video frame in the codebook.
2. The method of claim 1, wherein outputting the reconstructed video based on the M first video frame characteristics and the M first reference frame characteristics comprises:
respectively aligning each first video frame feature with the corresponding first reference frame feature to obtain M aligned first video frame features;
fusing the M aligned first video frame features and the M first video frame features through a temporal and spatial attention mechanism to obtain M fused first video frame features;
up-sampling the M fused first video frame features to obtain X reconstructed video frame features;
and outputting a reconstructed video based on the X reconstructed video frame characteristics, wherein X is a positive integer.
3. The method of claim 2, wherein outputting the reconstructed video based on the X reconstructed video frame characteristics comprises:
taking the X reconstructed video frame characteristics as X second video frame characteristics, and reconstructing a second reference frame characteristic corresponding to each second video frame characteristic through a time axis attention mechanism according to the X second video frame characteristics to obtain X second reference frame characteristics;
respectively aligning each second video frame feature with the corresponding second reference frame feature to obtain X aligned second video frame features;
fusing the X aligned second video frame features and the X second video frame features through a time and space attention mechanism to obtain X fused second video frame features;
the X fused second video frame features are subjected to up-sampling to obtain Y reconstructed video frame target features;
performing up-sampling on the Y reconstructed video frame target characteristics to obtain target video characteristics;
and outputting a reconstructed video based on the target video characteristics, wherein Y is a positive integer.
4. The method of claim 1, wherein before inputting the N video frames of the original video into the coding module of the video processing model, the method further comprises:
for any video sample, inputting the video sample into the video processing model, and outputting a prediction reconstruction video corresponding to the video sample;
calculating a loss value according to the prediction reconstruction video corresponding to the video sample and the video sample by using a preset loss function;
and if the loss value is smaller than a preset threshold value, finishing the training of the video processing model.
5. The method of claim 4, wherein the predetermined loss function is

L = MSE(X, X_rec) + ||sg[E] - VQ(E)||^2 + β||E - sg[VQ(E)]||^2

wherein X is the original video, X_rec is the reconstructed video, MSE(·, ·) is the mean square error loss, sg is the gradient stopping operation, E is the video feature output by the video coding module, VQ is the feature quantization operation, and β is a hyper-parameter of model training.
6. An apparatus for video quantization coding and decoding, comprising:
the first output module is used for inputting N video frames of an original video into the coding module of the video processing model and outputting quantization characteristic codes of M video frames, wherein M and N are positive integers, and M is less than or equal to N;
the mapping module is used for inputting the quantization feature codes of the M video frames into a decoding module of a video processing model and mapping the quantization feature codes of the M video frames into M first video frame features;
the reconstruction module is used for reconstructing the first reference frame characteristics corresponding to each first video frame characteristic through a time axis attention mechanism according to the M first video frame characteristics to obtain M first reference frame characteristics;
a second output module, configured to output a reconstructed video based on the M first video frame features and the M first reference frame features;
wherein the apparatus is further configured to:
inputting N video frames of an original video into a coding module of a video processing model, and coding the N video frames of the original video to obtain first feature codes of M video frames;
determining the corresponding feature code of each video frame in the codebook based on the Euclidean distance between the first feature code of each video frame in the M video frames and each feature code in the codebook, wherein the codebook comprises a plurality of discrete hidden layer feature codes;
and outputting the quantized feature codes of the M video frames based on the corresponding feature codes of each video frame in the codebook.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the video quantization coding and decoding method according to any one of claims 1 to 5 when executing the program.
8. A non-transitory computer readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, implements the steps of the video quantization coding-decoding method according to any one of claims 1 to 5.
CN202210068433.5A 2022-01-20 2022-01-20 Video quantization coding and decoding method, device, equipment and storage medium Active CN114598874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210068433.5A CN114598874B (en) 2022-01-20 2022-01-20 Video quantization coding and decoding method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210068433.5A CN114598874B (en) 2022-01-20 2022-01-20 Video quantization coding and decoding method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114598874A CN114598874A (en) 2022-06-07
CN114598874B true CN114598874B (en) 2022-12-06

Family

ID=81804392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210068433.5A Active CN114598874B (en) 2022-01-20 2022-01-20 Video quantization coding and decoding method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114598874B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401273A (en) * 2020-03-19 2020-07-10 支付宝(杭州)信息技术有限公司 User feature extraction system and device for privacy protection
CN111970509A (en) * 2020-08-10 2020-11-20 杭州海康威视数字技术股份有限公司 Video image processing method, device and system
CN113610707A (en) * 2021-07-23 2021-11-05 广东工业大学 Video super-resolution method based on time attention and cyclic feedback network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401273A (en) * 2020-03-19 2020-07-10 支付宝(杭州)信息技术有限公司 User feature extraction system and device for privacy protection
CN111970509A (en) * 2020-08-10 2020-11-20 杭州海康威视数字技术股份有限公司 Video image processing method, device and system
CN113610707A (en) * 2021-07-23 2021-11-05 广东工业大学 Video super-resolution method based on time attention and cyclic feedback network

Also Published As

Publication number Publication date
CN114598874A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
KR20230074137A (en) Instance adaptive image and video compression using machine learning systems
CN113259676B (en) Image compression method and device based on deep learning
CN109949222B (en) Image super-resolution reconstruction method based on semantic graph
US11869221B2 (en) Data compression using integer neural networks
CN109451308A (en) Video compression method and device, electronic equipment and storage medium
CN111263161A (en) Video compression processing method and device, storage medium and electronic equipment
CN107454412A (en) A kind of processing method of video image, apparatus and system
CN116309148A (en) Image restoration model training method, image restoration device and electronic equipment
CN113628116B (en) Training method and device for image processing network, computer equipment and storage medium
CN114792347A (en) Image compression method based on multi-scale space and context information fusion
CN116600119B (en) Video encoding method, video decoding method, video encoding device, video decoding device, computer equipment and storage medium
CN114598874B (en) Video quantization coding and decoding method, device, equipment and storage medium
CN115393452A (en) Point cloud geometric compression method based on asymmetric self-encoder structure
CN113554719B (en) Image encoding method, decoding method, storage medium and terminal equipment
CN113132732B (en) Man-machine cooperative video coding method and video coding system
CN112669240A (en) High-definition image restoration method and device, electronic equipment and storage medium
CN116828184B (en) Video encoding method, video decoding method, video encoding device, video decoding device, computer equipment and storage medium
CN110717948A (en) Image post-processing method, system and terminal equipment
CN113628108B (en) Image super-resolution method and system based on discrete representation learning and terminal
CN114663536B (en) Image compression method and device
CN117915107B (en) Image compression system, image compression method, storage medium and chip
WO2024093627A1 (en) Video compression method, video decoding method, and related apparatuses
KR20240025629A (en) Video compression using optical flow
CN116546224A (en) Training method of filter network, video coding method, device and electronic equipment
Zhou et al. MOC-RVQ: Multilevel Codebook-assisted Digital Generative Semantic Communication

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240419

Address after: Room 524, Automation Building, No. 95 Zhongguancun East Road, Haidian District, Beijing, 100190

Patentee after: BEIJING ZHONGZI SCIENCE AND TECHNOLOGY BUSINESS INCUBATOR CO.,LTD.

Country or region after: China

Address before: 100190 No. 95 East Zhongguancun Road, Beijing, Haidian District

Patentee before: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES

Country or region before: China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240423

Address after: 200-19, 2nd Floor, Building B, Wanghai Building, No.10 West Third Ring Middle Road, Haidian District, Beijing, 100190

Patentee after: Zhongke Zidong Taichu (Beijing) Technology Co.,Ltd.

Country or region after: China

Address before: Room 524, Automation Building, No. 95 Zhongguancun East Road, Haidian District, Beijing, 100190

Patentee before: BEIJING ZHONGZI SCIENCE AND TECHNOLOGY BUSINESS INCUBATOR CO.,LTD.

Country or region before: China

TR01 Transfer of patent right