CN111246213A - Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method - Google Patents


Info

Publication number
CN111246213A
CN111246213A
Authority
CN
China
Prior art keywords
frame
reconstruction
key
key frame
frames
Prior art date
Legal status
Granted
Application number
CN202010069771.1A
Other languages
Chinese (zh)
Other versions
CN111246213B (en)
Inventor
周健
刘浩
魏冬
田伟
廖荣生
魏国林
黄震
应晓清
王凯巡
沈港
时庭庭
Current Assignee
Donghua University
Original Assignee
Donghua University
Priority date
Filing date
Publication date
Application filed by Donghua University
Priority to CN202010069771.1A
Publication of CN111246213A
Application granted
Publication of CN111246213B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/177Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/583Motion compensation with overlapping blocks

Abstract

The invention discloses a sampling-rate-adaptive hierarchical block matching reconstruction method. First, an intra-frame initial reconstruction is performed on the measurements of each frame in the video code stream. The groups of pictures (GOPs) are then processed hierarchically one by one according to the growth rate of the key-frame sampling rate over the non-key-frame sampling rate: when the growth rate is low, each non-key frame in a GOP selects the nearest key frame as its reference frame for reconstruction; when the growth rate is high, the non-key frames in each GOP dynamically pick bidirectional reference frames to complete the reconstruction. Through this more refined reference-frame selection mechanism, the method achieves better video reconstruction quality and a better trade-off between reconstruction quality and complexity.

Description

Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method
Technical Field
The invention relates to a sampling-rate-adaptive hierarchical block matching reconstruction method for video compressed sensing, in particular to a video compressed sensing reconstruction technique for different sampling rates, and belongs to the field of video processing and communication.
Background
Compressed sensing theory aims to sample signals at a rate below the Nyquist rate, reducing the dimensionality of the signal while acquiring it rapidly. The theory shows that a signal with a suitable structure, for example an image signal that is sparse in some transform domain, can be recovered from a small number of random measurements with high probability and without distortion. Acquiring video through compressed sensing reduces sensor complexity, enabling low-cost, low-power signal acquisition and producing a video code stream, so compressed sensing has great application potential in video signal sensing.
A video stream usually consists of several groups of pictures (GOPs), each GOP comprising one key frame followed by several non-key frames. Since a residual is usually sparser than the original signal, the block matching methods commonly used for video compressed sensing reconstruct by sparsifying the residual between the current block and its matching blocks. Multi-hypothesis prediction is the mainstream technique for obtaining this residual: motion estimation and compensation are carried out at the reconstruction end through block matching, i.e., an adjacent frame is used as a reference frame and matching blocks are selected from it as predictions of the current block. Existing block matching reconstruction methods recover the video signal with a sparse residual method based on multi-hypothesis prediction, but during motion estimation the reference frame is selected in a single fixed way regardless of the sampling rate, and the way matching blocks are chosen from the reference frame is also fixed, so these methods adapt poorly to video whose content varies over time. Therefore, for video compressed sensing reconstruction at different sampling rates, a more efficient matching-block reconstruction mechanism is needed.
Disclosure of Invention
The invention aims to solve the technical problem of how to improve the video reconstruction quality for video compressed sensing under different sampling rates.
To solve this technical problem, the invention provides a sampling-rate-adaptive hierarchical block matching reconstruction method, which determines reference frames according to the sampling-rate conditions and performs iterative reconstruction hierarchically. To control computational complexity and memory requirements, the measurement side acquires the video frame by frame with block-based random measurements, key frames being acquired at a relatively high sampling rate and non-key frames at a lower sampling rate. At the reconstruction end, the video reconstruction steps are as follows:
the method comprises the following steps: the video code stream is composed of a plurality of group of pictures (GOP), the total number of GOP frames is M, the first frame of each GOP is a high-sampling-rate key frame, and the rest M-1 frames are non-key frames. Firstly, each frame of measured value of the video code stream is subjected to independent block matching recovery. And selecting the multi-hypothesis matching blocks from each frame, performing intra-frame initial reconstruction on each frame by using a sparse residual method, completing reconstruction of all key frames, and sequentially performing reconstruction of non-key frames one by one GOP.
Step two: obtain the key-frame sampling rate keySubrate and the non-key-frame sampling rate subrate of the current GOP, and compute the growth rate of the key-frame sampling rate over the non-key-frame sampling rate,
GR = (keySubrate - subrate) / subrate,
and compare it with a sampling-rate growth threshold τ: if GR < τ, the non-key frames are reconstructed a second time using mode I of step three; otherwise, they are reconstructed in two stages using mode II of step three.
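A minimal sketch of the step-two decision, assuming the growth-rate formula reconstructed above, GR = (keySubrate - subrate) / subrate; the function name select_mode and the Python typing are illustrative only.

def select_mode(key_subrate: float, subrate: float, tau: float = 1.0) -> str:
    # Growth rate of the key-frame sampling rate over the non-key-frame sampling rate.
    gr = (key_subrate - subrate) / subrate
    # Mode I: single nearest key-frame reference; mode II: two-stage bidirectional references.
    return "I" if gr < tau else "II"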
Step three: this step reconstructs only the non-key frames again. Mode I or mode II is applied to the non-key frames of the current GOP according to the selection made in step two.
Mode I: based on Euclidean distance, the key frame closest to the current non-key frame is selected as its reference frame, and inter-frame residual reconstruction based on this single reference frame is then performed with the sparse residual method.
Mode II: to fully exploit the reconstruction results of the high-sampling-rate key frames, this mode is divided into a 1st stage and a 2nd stage. The result obtained each time a frame is reconstructed is numbered in order of reconstruction and denoted Rec_i, where i is the reconstruction sequence number within mode II. Because some frames are reconstructed multiple times, these results also include intermediate images produced between the stages.
Stage 1: a reference frame pair is first selected for frame M of the current GOP, i.e., the last non-key frame: the key frame of the current GOP and the key frame of the next GOP serve as its two reference frames, and the sparse residual method is solved with them to obtain the reconstruction result Rec_1 of sequence number 1. A reference frame pair is then selected for frame 2 of the GOP, i.e., the first non-key frame: again the key frame of the current GOP and the key frame of the next GOP, and the sparse residual method yields the reconstruction result Rec_2 of sequence number 2. For frame 3, i.e., the 2nd non-key frame, the reference frames are selected as Rec_1 and Rec_2, and the sparse residual method yields the reconstruction result Rec_3 of sequence number 3. For frame M-1, i.e., the 2nd-to-last non-key frame, the reference frames are selected as Rec_3 and Rec_2, and reconstruction yields the result Rec_4 of sequence number 4. This continues in the same manner until the first M/4 and the last M/4 non-key frames of the GOP have been reconstructed.
In the 2nd stage, on the basis of the stage-1 reconstruction results, the first non-key frame of the GOP is reconstructed again; its reference frames are selected as the reconstruction result of the key frame of the next GOP and the stage-1 reconstruction result of the adjacent non-key frame, namely the stage-1 result of the 2nd non-key frame of the current GOP, and the sparse residual method is executed with these two reference frames to obtain its stage-2 reconstruction result. Then the 1st non-key frame from the end of the current GOP is reconstructed, with reference frames selected as the key frame of the current GOP and the stage-1 reconstruction result of the 2nd non-key frame from the end of the current GOP, giving its stage-2 reconstruction result. Next, the 2nd non-key frame of the current GOP is reconstructed, with reference frames selected as the two stage-2 reconstruction results just obtained, giving its stage-2 reconstruction result. The remaining non-key frames are then reconstructed alternately from the two ends of the current GOP toward the center, the reference frames of each non-key frame being the results of the two immediately preceding reconstructions in the 2nd stage, until the reconstruction of the current group of pictures is complete.
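The following sketch only reproduces the visiting order implied above for mode II, in which non-key frames are taken alternately from the two ends of the GOP toward the center (frame 1 being the key frame); the helper name and return convention are illustrative, and the pairing of reference frames follows the text above, not this function.

def mode_ii_visit_order(m: int = 8):
    # Non-key frames are frames 2..m; visit them ends-first, alternating toward the center.
    lo, hi = 2, m
    order = []
    while lo <= hi:
        order.append(lo)
        if hi != lo:
            order.append(hi)
        lo, hi = lo + 1, hi - 1
    return order  # for m = 8 this gives [2, 8, 3, 7, 4, 6, 5]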
Compared with the prior art, the invention has the following advantages and positive effects: the method fully exploits the correlation between frames and uses correlation information from adjacent GOPs, remedying the shortcoming of prior methods that reference only within the same GOP. In the reconstruction process, the flexible selection of reference frames and the hierarchical reconstruction further improve the reconstruction quality: when the difference between the key-frame and non-key-frame sampling rates is small, the key frame provides little extra reference information, so a single reference frame is adopted; otherwise bidirectional multiple reference frames are adopted. The method thus avoids interference introduced by inter-frame prediction at various sampling rates and consistently maintains better reconstruction quality.
Drawings
FIG. 1 is a diagram of the relationship between GOPs and frames in a video stream;
FIG. 2 is a flow chart of a reconstruction method for dynamically selecting multi-hypothesis matching blocks under a multi-sampling rate condition;
FIG. 3 is a diagram illustrating determination of reference frames for mode I;
FIG. 4 is a diagram illustrating determination of reference frames for mode II.
Detailed Description
In order to make the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Examples
For a given video code stream, several GOPs are selected for testing and the video frames to be reconstructed are numbered sequentially from 1. As shown in fig. 1, the total number of frames per GOP is M = 8; the first frame of each GOP is a key frame, so frames 1, 9, … are key frames and the remaining frames are non-key frames. Fig. 2 is a flow chart of the video compressed sensing sampling-rate-adaptive hierarchical block matching reconstruction method. To verify the reconstruction performance under various sampling-rate conditions, the video stream is reconstructed under three sampling-rate settings, (0.3, 0.2), (0.5, 0.2) and (0.7, 0.2), where the first value in each pair is the key-frame sampling rate and the second is the non-key-frame sampling rate. The necessary reconstruction parameters are set as follows: a random Gaussian observation matrix is adopted, the block size s is 16 × 16, the search window for multi-hypothesis matching blocks in a reference frame has size L = 30 × 30, and the total number of matching blocks taken from a reference frame is C = 10. In general, each GOP contains one key frame with a higher sampling rate while the other frames are non-key frames with a low sampling rate; the key frame can serve as a reference frame for the remaining frames of the GOP, and the sampling-rate growth threshold is chosen as τ = 1. The reconstruction process of the proposed method is as follows:
the method comprises the following steps: and performing intra-frame initial reconstruction on the measured value of each frame of the current GOP. In each frame, the current reconstruction block needs to pick C ═ 10 multi-hypothesis matching blocks from the frame, and the selection of the matching blocks is based on the following mean square error criterion:
Figure BDA0002377006230000041
where s is the block size, ref represents the reference frame of the current frame, prev represents the previous reconstruction result in the iterative process, k represents the order of the current reconstructed block, and j represents the order of the currently chosen matching block. Then, a residual sparse model is formed by using the selected C matched blocks
Figure BDA0002377006230000042
Wherein
Figure BDA0002377006230000043
For the k block x of the current framekThe (j) th matching block of (a),
Figure BDA0002377006230000044
the weight corresponding to the matching block. Then, the residual model is weighted: f (x)k)=Wk·R(xk) In the formula WkIs the weight corresponding to the residual block. Thereby obtaining a weighted norm
Figure BDA0002377006230000045
In the formula, D is the number of the overlapped blocks which are formed by splitting the current frame, and the primary reconstruction in the frame of the current frame is realized by solving the above formula.
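How the hypothesis weights ω_{k,j} are obtained is not spelled out above; the sketch below therefore assumes the Tikhonov-regularized least-squares weighting in the measurement domain that is common in multi-hypothesis compressed-sensing reconstruction. The function name multihypothesis_residual, the regularization parameter lam and the argument layout are assumptions for illustration, not the patent's exact procedure.

import numpy as np

def multihypothesis_residual(y_k, phi, hypotheses, lam=1e-2):
    # y_k: (m,) measurements of the current block; phi: (m, n) observation matrix;
    # hypotheses: (C, n) candidate matching blocks gathered from the search window.
    H = phi @ hypotheses.T                      # project hypotheses into the measurement domain, shape (m, C)
    G = H.T @ H + lam * np.eye(H.shape[1])      # regularized Gram matrix
    w = np.linalg.solve(G, H.T @ y_k)           # hypothesis weights omega_{k,j}
    prediction = hypotheses.T @ w               # weighted multi-hypothesis prediction of x_k
    residual_y = y_k - phi @ prediction         # measurement-domain residual, to be recovered sparsely
    return prediction, residual_y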
Step two: the growth rate GR of the key-frame sampling rate over the non-key-frame sampling rate is analyzed. The key-frame sampling rate keySubrate and the non-key-frame sampling rate subrate are obtained from the current GOP, and the growth rate is computed as GR = (keySubrate - subrate) / subrate. If the growth rate GR of the current GOP is smaller than τ, mode I is used for processing; otherwise mode II is used. In this example, mode I is executed for the sampling-rate pair (0.3, 0.2) and mode II for the pairs (0.5, 0.2) and (0.7, 0.2).
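A quick check of the mode decision for the three test settings, assuming the growth-rate formula reconstructed in step two and τ = 1; the printed values are implied by that formula rather than quoted from the original text.

for key_rate, rate in [(0.3, 0.2), (0.5, 0.2), (0.7, 0.2)]:
    gr = (key_rate - rate) / rate                            # growth rate GR
    print(key_rate, rate, round(gr, 2), "mode I" if gr < 1.0 else "mode II")
# prints GR = 0.5 (mode I), 1.5 (mode II), 2.5 (mode II), matching the choices stated above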
Step three: the corresponding mode is executed according to the mode selection of step two:
for mode I:
based on the initial reconstruction result in the first frame in step, inter-frame multi-hypothesis reconstruction is performed, and the selection of the multi-hypothesis reference frame is shown in fig. 3. In order to reduce the introduction of interference blocks, each non-key frame selects a frame from two adjacent key frames before and after the non-key frame as a reconstructed reference frame based on Euclidean distance, namely if the current non-key frame to be reconstructed is closer to the forward key frame, the forward key frame is selected as the reference frame, and if not, the backward key frame is selected as the reference frame. And performing the operation on each non-key frame in the current GOP to determine a reference frame of the non-key frame, and then reconstructing by using a sparse residual error method, wherein the obtained recovery image is the final reconstruction result of the frame.
For mode II:
based on the initial reconstruction result in the first frame in the step one, the reconstruction is carried out in two stages. Fig. 4 shows the reference frame selection and reconstruction sequence number of each frame for non-key frames in the first GOP. ReciIndicating the result of the reconstruction of a frame, the index i indicating the number of reconstruction of the frame in mode II, e.g. the result of the reconstruction of a frame is denoted Rec3It indicates that the reconstruction number of the frame is 3, and 2 times of sparse residual methods have been performed before it.
Stage 1 is as follows: first, frame 2 of the GOP is reconstructed; its reference frames are the step-one initial reconstruction results of the current GOP's key frame and of the next adjacent key frame, the sparse residual method is run on these two reference frames for sequence number 1, and the result is denoted Rec_1. Frame 8 of the GOP is reconstructed next, with its reference frames selected in the same way as for frame 2; the sparse residual method is run for sequence number 2 and the result is denoted Rec_2. Frame 3 of the GOP is reconstructed with reference frames Rec_1 and Rec_2 for sequence number 3, giving Rec_3. Frame 7 of the GOP is reconstructed with reference frames Rec_2 and Rec_3 for sequence number 4, giving Rec_4.
Stage 2 is as follows: on the basis of stage 1, frame 2 of the GOP is first reconstructed again; its reference frames are selected as Rec_3 from stage 1 and the key frame of the current GOP, the reconstruction has sequence number 5, and the result is denoted Rec_5. Frame 8 of the GOP is reconstructed with reference frames Rec_4 from stage 1 and the key frame of the current GOP, with sequence number 6, giving Rec_6. Frame 3 of the GOP is reconstructed with reference frames Rec_5 and Rec_6, with sequence number 7, giving Rec_7. Frame 7 of the GOP is reconstructed with reference frames Rec_6 and Rec_7, with sequence number 8, giving Rec_8. Frame 4 of the GOP is reconstructed with reference frames Rec_7 and Rec_8, with sequence number 9, giving Rec_9. Frame 6 of the GOP is reconstructed with reference frames Rec_8 and Rec_9, with sequence number 10, giving Rec_10. Frame 5 of the GOP is reconstructed with reference frames Rec_9 and Rec_10, with sequence number 11, giving Rec_11. This completes the reconstruction of the current GOP, and the remaining GOPs are reconstructed in the same way. Through this finer-grained adaptive block matching mechanism, the method effectively improves the video reconstruction quality at the cost of a certain increase in complexity.
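For reference, the mode II schedule spelled out above for M = 8 can be written out as a table; Rec_i, K_cur (key frame of the current GOP, frame 1) and K_next (key frame of the next GOP, frame 9) are shorthand introduced here, and the key-frame choices in stage 2 follow the wording of the example rather than any additional rule.

# (sequence number, frame to reconstruct, reference frames)
MODE_II_SCHEDULE_M8 = [
    (1, 2, ("K_cur", "K_next")),   # stage 1
    (2, 8, ("K_cur", "K_next")),
    (3, 3, ("Rec_1", "Rec_2")),
    (4, 7, ("Rec_2", "Rec_3")),
    (5, 2, ("Rec_3", "K_cur")),    # stage 2
    (6, 8, ("Rec_4", "K_cur")),
    (7, 3, ("Rec_5", "Rec_6")),
    (8, 7, ("Rec_6", "Rec_7")),
    (9, 4, ("Rec_7", "Rec_8")),
    (10, 6, ("Rec_8", "Rec_9")),
    (11, 5, ("Rec_9", "Rec_10")),
]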

Claims (1)

1. A sampling rate self-adaptive hierarchical block matching reconstruction method is characterized by comprising the following steps:
Step one: the video code stream is composed of a plurality of image groups; the total number of frames per image group is M, the first frame of each image group is a high-sampling-rate key frame, and the remaining M-1 frames are non-key frames; the measurements of each frame of the video code stream are recovered independently by block matching, multi-hypothesis matching blocks are selected within each frame, and an intra-frame initial reconstruction of each frame is performed with the sparse residual method; after the reconstruction of all key frames is completed, the non-key frames are reconstructed one by one in order;
Step two: obtain the key-frame sampling rate keySubrate and the non-key-frame sampling rate subrate of the current image group, compute the growth rate of the key-frame sampling rate over the non-key-frame sampling rate, GR = (keySubrate - subrate) / subrate, and compare it with a sampling-rate growth threshold τ: if GR < τ, the non-key frames are reconstructed a second time using mode I of step three; otherwise, the non-key frames are reconstructed in two stages using mode II of step three;
Step three: in this step only the non-key frames are reconstructed again; according to the selection made in step two, mode I or mode II is executed on the non-key frames of the current image group:
Mode I: based on Euclidean distance, the key frame closest to the current non-key frame is selected as the reference frame, and inter-frame residual reconstruction based on this single reference frame is then performed with the sparse residual method;
Mode II: to fully exploit the reconstruction results of the high-sampling-rate key frames, this mode is divided into a 1st stage and a 2nd stage; the result obtained each time a frame is reconstructed is numbered in order of reconstruction and denoted Rec_i, where i is the reconstruction sequence number within mode II; because some frames are reconstructed multiple times, these results also include intermediate images produced between the stages;
Stage 1: a reference frame pair is selected for frame M of the current image group: the key frame of the current image group and the key frame of the next image group serve as the two reference frames, and the sparse residual method is solved with them to obtain the reconstruction result Rec_1 of sequence number 1; a reference frame pair is selected for frame 2 of the image group: again the key frame of the current image group and the key frame of the next image group, and the sparse residual method yields the reconstruction result Rec_2 of sequence number 2; for frame 3 of the image group, the reference frames are selected as Rec_1 and Rec_2, and the sparse residual method yields the reconstruction result Rec_3 of sequence number 3; for frame M-1 of the image group, the reference frames are selected as Rec_3 and Rec_2, and reconstruction yields the result Rec_4 of sequence number 4; this continues in the same manner until the first M/4 and the last M/4 non-key frames of the image group have been reconstructed;
Stage 2: on the basis of the stage-1 reconstruction results, the first non-key frame of the image group is reconstructed again; its reference frames are selected as the reconstruction result of the key frame of the next image group and the stage-1 reconstruction result of the adjacent non-key frame, namely the stage-1 result of the 2nd non-key frame of the current image group, and the sparse residual method is executed with these two reference frames to obtain its stage-2 reconstruction result; then the 1st non-key frame from the end of the current image group is reconstructed, with reference frames selected as the key frame of the current image group and the stage-1 reconstruction result of the 2nd non-key frame from the end of the current image group, giving its stage-2 reconstruction result; next, the 2nd non-key frame of the current image group is reconstructed, with reference frames selected as the two stage-2 reconstruction results just obtained, giving its stage-2 reconstruction result; the remaining non-key frames are then reconstructed alternately from the two ends of the current image group toward the center, the reference frames of each non-key frame being the results of the two immediately preceding reconstructions in the 2nd stage, until the reconstruction of the current image group is completed.
CN202010069771.1A 2020-01-21 2020-01-21 Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method Active CN111246213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010069771.1A CN111246213B (en) 2020-01-21 2020-01-21 Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010069771.1A CN111246213B (en) 2020-01-21 2020-01-21 Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method

Publications (2)

Publication Number Publication Date
CN111246213A true CN111246213A (en) 2020-06-05
CN111246213B CN111246213B (en) 2022-05-13

Family

ID=70878097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010069771.1A Active CN111246213B (en) 2020-01-21 2020-01-21 Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method

Country Status (1)

Country Link
CN (1) CN111246213B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100166070A1 (en) * 2008-12-31 2010-07-01 Nxp B.V. Low-resolution video coding content extraction
CN102427527A (en) * 2011-09-27 2012-04-25 西安电子科技大学 Method for reconstructing non key frame on basis of distributed video compression sensing system
CN104822063A (en) * 2015-04-16 2015-08-05 长沙理工大学 Compressed sensing video reconstruction method based on dictionary learning residual-error reconstruction
CN106385584A (en) * 2016-09-28 2017-02-08 江苏亿通高科技股份有限公司 Spatial correlation-based distributed video compressive sensing adaptive sampling and coding method
US20180213225A1 (en) * 2017-01-21 2018-07-26 OrbViu Inc. Method and system for layer based view optimization encoding of 360-degree video
CN107360426A (en) * 2017-07-13 2017-11-17 福州大学 A kind of video sequence reconstructing method based on compressed sensing
CN110290389A (en) * 2019-07-11 2019-09-27 东华大学 It is selected based on shot and long term reference frame and assumes match block video compress sensing reconstructing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN, C.: "Perceptual hash algorithm-based adaptive GOP selection algorithm for distributed compressive video sensing", IET Image Processing *
周健, 刘浩: "Three-step video compressed sensing reconstruction based on dynamic multi-mode matching", Journal of Harbin Institute of Technology *

Also Published As

Publication number Publication date
CN111246213B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN109064507B (en) Multi-motion-stream deep convolution network model method for video prediction
JP5844394B2 (en) Motion estimation using adaptive search range
CN107360426B (en) Video sequence reconstruction method based on compressed sensing
CN112884851B (en) Construction method of deep compressed sensing network based on expansion iteration optimization algorithm
CN108289224B (en) A kind of video frame prediction technique, device and neural network is compensated automatically
JP2005229600A (en) Motion compensation interpolating method by motion estimation of superposed block base and frame rate conversion device applying the same
CN110290389B (en) Video compression sensing reconstruction method based on long-term and short-term reference frame selection hypothesis matching block
CN111369466A (en) Image distortion correction enhancement method of convolutional neural network based on deformable convolution
CN114170286A (en) Monocular depth estimation method based on unsupervised depth learning
CN110505479B (en) Video compressed sensing reconstruction method with same measurement rate frame by frame under time delay constraint
CN107730468A (en) The restoration methods of picture rich in detail under a kind of UAV Fuzzy noise image
CN111246213B (en) Video compressed sensing sampling rate self-adaptive hierarchical block matching reconstruction method
CN109934882B (en) Video compressed sensing reconstruction method based on dynamically selecting multi-hypothesis matching blocks
CN117376575A (en) Compressed domain video anomaly detection method based on conditional diffusion model
CN110728728B (en) Compressed sensing network image reconstruction method based on non-local regularization
CN113225568B (en) Iterative progressive hypothesis prediction method for video compressed sensing low-delay interframe reconstruction
CN101389032A (en) Intra-frame predictive encoding method based on image value interposing
CN108510464B (en) Compressed sensing network based on block observation and full-image reconstruction method
CN106937125B (en) Multi-hypothesis prediction method for dynamically changing size of search window
CN114926335A (en) Video super-resolution method and system based on deep learning and computer equipment
CN110958417B (en) Method for removing compression noise of video call video based on voice clue
CN113902647A (en) Image deblurring method based on double closed-loop network
CN113556546A (en) Two-stage multi-hypothesis prediction video compressed sensing reconstruction method
CN113643399B (en) Color image self-adaptive reconstruction method based on tensor chain rank
Chen et al. Revisiting Event-Based Video Frame Interpolation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant