CN116996697B - HEVC (high efficiency video coding) frame-oriented video recovery method - Google Patents
- Publication number: CN116996697B
- Application number: CN202310910490.8A
- Authority
- CN
- China
- Prior art keywords
- quality
- frame
- video
- quality frame
- hevc
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06N3/0442 — Recurrent networks, e.g. Hopfield networks, characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- H04N19/65 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
Abstract
The invention provides a video recovery method oriented to the HEVC (high efficiency video coding) framework, belonging to the technical field of HEVC-based video processing. It solves the problem of poor video quality caused by severe frame-level quality fluctuation under the HEVC low-delay coding configuration. The technical solution comprises the following steps: S1, design a forward feature coarse extraction module that removes the batch normalization layer and preserves the detail information of the original image; S2, design a time sequence information extraction module that uses a bidirectional long short-term memory network to strengthen the spatio-temporal information expression of the features; S3, design a quality enhancement module that introduces a residual learning mechanism to improve network convergence speed and performance. The beneficial effects of the invention are as follows: viewing quality at the decoding end is improved, and compared with direct decoding and display with HM-16.5, PSNR improves by 0.4419 dB on the standard test sequences.
Description
Technical Field
The invention relates to the technical field of HEVC (high efficiency video coding)-based video processing, and in particular to a video recovery method oriented to the HEVC coding framework.
Background
Because of coding techniques such as coding-unit partitioning and quantization, the HEVC standard video codec framework is essentially lossy coding of pixel blocks. During decoding, the reconstructed video composed of the reconstructed images in the decoder buffer inevitably contains a large amount of compression distortion, such as blocking and ringing artifacts. Although a loop filter has been designed in the HEVC coding framework to mitigate the compression distortion caused by these coding techniques, and researchers at home and abroad have worked continuously on optimizing the loop filter to further improve the quality of the reconstructed video, the image quality improvement achieved by existing loop filtering methods remains very limited due to the constraints of the coding framework.
Existing loop filtering methods essentially perform pixel-level balancing and compensation with intra-frame information; they neither mine the deep relationships within that information nor make full use of the valuable inter-frame information to restore the reconstructed video. As a result, the restoration quality can hardly meet the practical requirements of terminal display, and there is still large room to improve both the subjective and objective quality of the reconstructed video. Under bandwidth-limited transmission in particular, the compression distortion introduced by the existing codec framework and fast encoding degrades the picture quality of the reconstructed video at the terminal. Meanwhile, as the computing power of terminal chips keeps increasing, shifting video coding technology from a one-sided load to a balanced load has become a research focus in the industry. Effectively restoring the image quality of the reconstructed video at the decoding end is therefore a problem that needs to be solved.
Research has shown that convolutional neural networks possess a powerful ability to learn the nonlinear mapping from the reconstructed video to the original video: they can recover the original video from the pixel information that survives in the reconstructed video. Researchers have therefore gradually turned their attention to restoring the image quality of reconstructed video with convolutional neural networks. Such methods can effectively recover the image quality of the reconstructed video by exploiting the strong computing power of the terminal, without increasing bit-rate consumption, and they favor the full use of intra-frame and inter-frame information as well as the balancing of the load between the encoding and decoding ends. According to the number of input frames, image quality restoration methods for reconstructed video can be divided into single-frame-input and multi-frame-input methods. Single-frame methods are essentially image-level quality restoration techniques that ignore the inherently strong temporal correlation of video content, which caps their performance. Multi-frame methods recognize that adjacent frames may contain useful information missing from the current frame, which makes them important to researchers. However, existing methods treat the task as a generic video restoration problem and do not deeply exploit the coding-induced compression distortion in the reconstructed video or the structure of the video coding framework, which also limits the achievable restoration quality.
Therefore, on the basis of analyzing the unique characteristics of the reconstructed video, it is imperative to make maximal use of the input spatio-temporal information and to design a reconstructed-video quality restoration method better suited to the HEVC coding framework, offering a new way to break through the current bottleneck in reconstructed-video quality restoration.
Disclosure of Invention
The invention aims to provide a video recovery method oriented to the HEVC (high efficiency video coding) framework that solves the problem of frame-level quality fluctuation under the HEVC low-delay configuration, ultimately improves the viewing quality at the decoding end, and demonstrates its effectiveness on the standard test sequences.
The invention is characterized as follows. First, a forward feature coarse extraction module without a batch normalization layer is designed to preserve the detail information of the original image. Then, a time sequence information extraction module is designed with a bidirectional long short-term memory network to strengthen the spatio-temporal information expression of the features. Finally, a quality enhancement module with a residual learning mechanism is designed to improve network convergence speed and performance.
To achieve this aim, the invention adopts the following technical scheme. A video recovery method oriented to the HEVC coding framework comprises the following steps:
1.1, designing a forward feature coarse extraction module, removing a batch normalization layer, and reserving detail information of an original image;
1.2, designing a time sequence information extraction module, and strengthening the spatio-temporal information expression of the features by using a bidirectional long short-term memory network;
and 1.3, designing a quality enhancement module, introducing a residual learning mechanism, and improving the network convergence speed and performance.
Further, the step 1.1 specifically includes the following steps:
2.1, constructing three groups of convolution layers with the same channel number and convolution kernel size;
2.2, reading the forward high-quality frame, and performing rough extraction of forward features on the forward high-quality frame through a first group of convolution layers;
2.3, reading the low-quality frame to be enhanced, and performing rough extraction of forward features on the low-quality frame to be enhanced through a second group of convolution layers;
and 2.4, reading the backward high-quality frame, and performing rough extraction of forward features on the backward high-quality frame through a third group of convolution layers.
Further, the step 1.2 specifically includes the following steps:
3.1, sequentially sending the forward high-quality frame, the low-quality frame to be enhanced and the backward high-quality frame into an 'LSTM+' unit in the forward order of the video playing sequence;
3.2, sequentially sending the same three frames into an 'LSTM-' unit in the reverse order of the video playing sequence;
3.3, splicing the output features of 'LSTM+' and 'LSTM-';
and 3.4, sending the spliced output characteristics into a channel attention mechanism formed by global convolution, and strengthening the expression capacity of the characteristics in two dimensions of time and space to provide better characteristics for a subsequent enhancement module.
Further, the step 1.3 specifically includes the following steps:
4.1, constructing a residual unit formed by combining a convolution layer, a ReLU activation function layer and a jump connection structure;
4.2, splicing a plurality of groups of residual units to serve as a basic component of the quality enhancement module;
4.3, performing structural dimension reduction on the output features of the spliced residual units;
and 4.4, sending the dimension-reduced features into one group of convolution layers for further feature extraction, then superimposing the result on the low-quality frame to be enhanced, so as to drive the network to learn the residual between the low-quality frame and the output target frame and strengthen the network's learning capacity and convergence speed.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention proves, from the two perspectives of the HEVC coding structure and statistical experiments, that frame-level quality fluctuation of the video does exist.
2. the invention introduces multi-frame information into the network design, removes the BN layer used in traditional algorithms from the forward feature coarse extraction module, and preserves the detail information of the enhanced video.
3. in the time sequence information extraction module, the invention captures the temporal information of the video from both directions with a bidirectional long short-term memory structure, improving the expressiveness of the features; an improved channel attention mechanism further improves the network's ability to discriminate the bidirectional temporal features and strengthens their expression.
4. in the quality enhancement module, residual learning improves the network's ability to recover video. In the experiments of the invention, PSNR improves by 0.4419 dB on the standard test sequences compared with direct HM-16.5 decoding and display. Compared with other methods, the proposed method is an effective quality recovery algorithm that alleviates the frame-level quality fluctuation of video under the HEVC low-delay configuration and improves viewing quality.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is an overall flowchart of a video restoration method for an HEVC coding framework of the present invention.
Fig. 2 is a network architecture diagram of the video restoration method facing the HEVC coding framework of the present invention.
Fig. 3 is an algorithm flow chart of a video recovery method for an HEVC coding framework provided by the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
Referring to fig. 1 and fig. 3, the present embodiment provides a video recovery method for an HEVC coding framework, including the following steps:
1. designing a forward feature coarse extraction module, removing a batch normalization layer, and reserving detail information of an original image;
2. designing a time sequence information extraction module, and strengthening the spatio-temporal information expression of the features by using a bidirectional long short-term memory network;
3. designing a quality enhancement module, introducing a residual learning mechanism, and improving the network convergence speed and performance.
Specifically, referring to fig. 2, in step 1, a forward feature coarse extraction module is designed, the batch normalization layer is removed, and the detail information of the original image is preserved, specifically including the following steps:
1) Combine three groups of 64-channel 3×3 convolutional layers (Conv) into the forward feature coarse extraction module;
2) Read the forward high-quality frame and coarsely extract its forward features through the first group of 64-channel 3×3 convolution layers (Conv);
3) Read the low-quality frame to be enhanced and coarsely extract its forward features through the second group of 64-channel 3×3 convolution layers (Conv);
4) Read the backward high-quality frame and coarsely extract its forward features through the third group of 64-channel 3×3 convolution layers (Conv).
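As a concrete illustration, the three-branch structure of steps 1)-4) can be sketched in PyTorch as follows. This is a minimal sketch under our own assumptions (single-channel luma input; the class and argument names are hypothetical), not the patented implementation:

```python
import torch
import torch.nn as nn


class CoarseFeatureExtraction(nn.Module):
    """Three parallel 64-channel 3x3 conv branches, one per input frame."""

    def __init__(self, in_channels: int = 1, channels: int = 64):
        super().__init__()
        # One conv group per input: forward high-quality, low-quality, backward high-quality.
        # No BatchNorm layer anywhere: this preserves the fine detail of the original frames.
        self.conv_fwd = nn.Conv2d(in_channels, channels, 3, padding=1)
        self.conv_cur = nn.Conv2d(in_channels, channels, 3, padding=1)
        self.conv_bwd = nn.Conv2d(in_channels, channels, 3, padding=1)

    def forward(self, fwd, cur, bwd):
        # Each branch produces a coarse 64-channel feature map of the same spatial size.
        return self.conv_fwd(fwd), self.conv_cur(cur), self.conv_bwd(bwd)
```

Single-channel input is assumed because the patent evaluates PSNR on YUV video, where enhancement is typically applied to the luma plane; a three-channel variant only changes `in_channels`.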
Specifically, referring to fig. 2, in step 2, a time sequence information extraction module is designed, and the spatio-temporal information expression of the features is strengthened by using a bidirectional long short-term memory network, specifically including the following steps:
1) Sequentially send the forward high-quality frame, the low-quality frame to be enhanced and the backward high-quality frame into an 'LSTM+' unit in the forward order of the video playing sequence;
2) Sequentially send the same three frames into an 'LSTM-' unit in the reverse order of the video playing sequence;
3) Splice the output features of 'LSTM+' and 'LSTM-';
4) And sending the spliced output features into a channel attention mechanism (SEBlock) formed by global convolution, and strengthening the expression capacity of the features in two dimensions of time and space to provide better features for subsequent enhancement modules.
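A hedged sketch of steps 1)-4) is given below. It assumes the 64-channel features from the coarse extraction stage, realizes the 'LSTM+'/'LSTM-' pair with PyTorch's bidirectional `nn.LSTM` applied per spatial position over the three-frame sequence, keeps the time step aligned with the frame to be enhanced, and approximates the channel attention with an SE-style block built from 1×1 convolutions. All names and widths are illustrative assumptions, not the patent's exact layers:

```python
import torch
import torch.nn as nn


class TemporalFusion(nn.Module):
    """Bidirectional LSTM over the 3-frame sequence + SE-style channel attention."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # bidirectional=True covers both the 'LSTM+' (forward) and 'LSTM-' (reverse) passes;
        # their outputs are spliced into 2*channels features.
        self.lstm = nn.LSTM(channels, channels, bidirectional=True, batch_first=True)
        # SE-style channel attention on the spliced features (global pooling + 1x1 convs).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat_fwd, feat_cur, feat_bwd):
        n, c, h, w = feat_cur.shape
        # Stack the three frames as a length-3 sequence; each spatial position
        # becomes an independent sequence of channel vectors.
        seq = torch.stack([feat_fwd, feat_cur, feat_bwd], dim=1)      # (N, 3, C, H, W)
        seq = seq.permute(0, 3, 4, 1, 2).reshape(n * h * w, 3, c)     # (N*H*W, 3, C)
        out, _ = self.lstm(seq)                                       # (N*H*W, 3, 2C)
        # Keep the middle time step, i.e. the one aligned with the frame to enhance
        # (a design choice of this sketch, not stated in the patent).
        mid = out[:, 1].reshape(n, h, w, 2 * c).permute(0, 3, 1, 2)   # (N, 2C, H, W)
        # Reweight channels so the network can discriminate the bidirectional features.
        return mid * self.attn(mid)
```

Running the LSTM per spatial position keeps the sketch short; a convolutional LSTM cell would be the more faithful (and heavier) choice for spatio-temporal features.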
Specifically, referring to fig. 2, in step 3, a quality enhancement module is designed, a residual learning mechanism is introduced, and the convergence speed and performance of the network are improved, specifically including the following steps:
1) Combine 2 groups of convolution layers, 1 ReLU activation function layer and 1 skip-connection structure into 1 residual unit (ResBlock);
2) Splicing 16 ResBlocks to serve as a basic component of the quality enhancement module;
3) Carrying out structural dimension reduction on the output characteristics of the 16 ResBlocks;
4) Send the dimension-reduced (downsampled) features into 1 group of convolution layers (Conv) for further feature extraction, then superimpose the result on the low-quality frame to be enhanced; this drives the network to learn the residual between the low-quality frame and the output target frame and strengthens the network's learning capacity and convergence speed.
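Steps 1)-4) of the enhancement stage might look like the following sketch. The channel widths are assumptions (the spliced bidirectional features are taken to be 128-channel), the structural dimension reduction is realized as a 1×1 convolution, and the class names are ours, not the patent's:

```python
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Two conv layers, one ReLU, and a jump (skip) connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection


class QualityEnhancement(nn.Module):
    """Chained ResBlocks -> channel reduction -> conv -> global residual to the frame."""

    def __init__(self, in_channels: int = 128, mid_channels: int = 64,
                 n_blocks: int = 16, out_channels: int = 1):
        super().__init__()
        # 16 spliced ResBlocks form the basic component of the module.
        self.blocks = nn.Sequential(*[ResBlock(in_channels) for _ in range(n_blocks)])
        # Structural dimension reduction of the ResBlock output (assumed 1x1 conv).
        self.reduce = nn.Conv2d(in_channels, mid_channels, 1)
        self.tail = nn.Conv2d(mid_channels, out_channels, 3, padding=1)

    def forward(self, feat, low_quality_frame):
        x = self.reduce(self.blocks(feat))
        # Global residual: the network only has to learn the difference between the
        # low-quality frame and the target frame, which speeds up convergence.
        return low_quality_frame + self.tail(x)
```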
To examine the performance of the proposed method, this embodiment compares it with the original method. The experimental data set was built with HM-16.5, the reference software of the HEVC standard: the videos to be reconstructed at the decoding end were generated by encoding the original YUV videos with the default LD coding configuration and QP set to 37. The data set consists of 130 videos of various kinds at resolutions from CIF (352×288) to WQXGA (2560×1600), with 20 frames per video. This embodiment also runs verification experiments on the 18 standard sequences of the 5 classes (A, B, C, D and E) defined by JCT-VC at different resolutions.
Table 1 shows a comparison of the video quality of the proposed method with respect to the HM-16.5 direct decoding
As can be seen from Table 1, the method proposed in this embodiment improves PSNR by 0.4419 dB on average over the video quality after HM-16.5 direct decoding, showing that multi-frame input and video timing information are of great value for video quality recovery at the decoding end.
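For reference, the PSNR figures quoted throughout follow the standard definition; a small NumPy sketch (the function name is ours, not part of the patent or of HM):

```python
import numpy as np


def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means rec is closer to ref."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

With 8-bit video, `peak` is 255; the per-sequence averages in Tables 1 and 2 are means of per-frame PSNR values computed this way.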
Example 2
To further examine its performance, the method of this embodiment is also compared with reference [1]: Ren Y, Mai X, Wang Z, et al. Multi-frame Quality Enhancement for Compressed Video [C]. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2018.
Table 2 shows a comparison of the video quality of the proposed method of the present example with respect to reference [1]
As shown in Table 2, the PSNR of the method of this embodiment improves by 0.1999 dB on average compared with reference [1].
The foregoing describes preferred embodiments of the invention and is not intended to limit the invention to the precise forms disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.
Claims (1)
1. The video recovery method for the HEVC coding framework is characterized by comprising the following steps of:
step one, designing a forward feature coarse extraction module, removing a batch normalization layer, and reserving detail information of an original image;
step two, designing a time sequence information extraction module, and strengthening the spatio-temporal information expression of the features by using a bidirectional long short-term memory network;
step three, designing a quality enhancement module, introducing a residual learning mechanism, and improving the convergence speed and performance of the network;
the first step specifically comprises the following steps:
s11, constructing three groups of convolution layers with the same channel number and convolution kernel size;
s12, reading a forward high-quality frame, and performing rough extraction of forward features on the forward high-quality frame through a first group of convolution layers;
s13, reading a low-quality frame to be enhanced, and performing rough extraction of forward features on the low-quality frame to be enhanced through a second group of convolution layers;
s14, reading the backward high-quality frames, and performing rough extraction of forward features on the backward high-quality frames through a third group of convolution layers;
the second step specifically comprises the following steps:
s21, sequentially sending a forward high-quality frame, a low-quality frame to be enhanced and a backward high-quality frame into an LSTM+ unit according to the positive sequence of the video playing sequence;
s22, sequentially sending the forward high-quality frame, the low-quality frame to be enhanced and the backward high-quality frame into an LSTM-unit according to the reverse sequence of the video playing sequence;
s23, splicing the output characteristics of 'LSTM+' and 'LSTM-';
s24, sending the spliced output characteristics into a channel attention mechanism formed by global convolution, and strengthening the expression capacity of the characteristics in two dimensions of time and space;
the third step specifically comprises the following steps:
s31, constructing a residual unit formed by combining a convolution layer, a ReLU activation function layer and a jump connection structure;
s32, splicing a plurality of groups of residual units to serve as a basic component of the quality enhancement module;
s33, performing structural dimension reduction on the output characteristics of the spliced residual error unit;
and S34, sending the feature subjected to dimension reduction into a convolution layer for feature extraction, and then overlapping the feature with a low-quality frame to be enhanced, so that the network is promoted to learn residual information of the low-quality frame and an output target frame, and the learning capacity and convergence speed of the network are enhanced.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310910490.8A CN116996697B (en) | 2023-07-24 | 2023-07-24 | HEVC (high efficiency video coding) frame-oriented video recovery method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116996697A CN116996697A (en) | 2023-11-03 |
CN116996697B true CN116996697B (en) | 2024-02-23 |
Family
ID=88527750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310910490.8A Active CN116996697B (en) | 2023-07-24 | 2023-07-24 | HEVC (high efficiency video coding) frame-oriented video recovery method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116996697B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299401A (en) * | 2018-07-12 | 2019-02-01 | 中国海洋大学 | Metropolitan area space-time stream Predicting Technique based on deep learning model LSTM-ResNet |
CN111031315A (en) * | 2019-11-18 | 2020-04-17 | 复旦大学 | Compressed video quality enhancement method based on attention mechanism and time dependency |
CN111711817A (en) * | 2019-03-18 | 2020-09-25 | 四川大学 | HEVC intra-frame coding compression performance optimization research combined with convolutional neural network |
CN113111865A (en) * | 2021-05-13 | 2021-07-13 | 广东工业大学 | Fall behavior detection method and system based on deep learning |
CN114220061A (en) * | 2021-12-28 | 2022-03-22 | 青岛科技大学 | Multi-target tracking method based on deep learning |
CN114374846A (en) * | 2022-01-10 | 2022-04-19 | 昭通亮风台信息科技有限公司 | Video compression method, device, equipment and storage medium |
CN116012272A (en) * | 2023-01-19 | 2023-04-25 | 电子科技大学 | Compressed video quality enhancement method based on reconstructed flow field |
CN116172517A (en) * | 2023-02-21 | 2023-05-30 | 武汉大学 | Seizure interval epileptiform discharge detection method and device based on double-view feature fusion framework |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10595727B2 (en) * | 2018-01-25 | 2020-03-24 | Siemens Healthcare Gmbh | Machine learning-based segmentation for cardiac medical imaging |
US11568247B2 (en) * | 2019-03-22 | 2023-01-31 | Nec Corporation | Efficient and fine-grained video retrieval |
Non-Patent Citations (2)
Title |
---|
Thangarajah Akilan. A 3D CNN-LSTM-Based Image-to-Image Foreground Segmentation. IEEE Transactions on Intelligent Transportation Systems. 2019. Full text. *
Wang Ting; He Xiaohai; Sun Weiheng; Xiong Shuhua; Karn Pradeep. Improved HEVC intra coding compression algorithm combined with convolutional neural network. Journal of Terahertz Science and Electronic Information. 2020, No. 02. Full text. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||