CN109302614B - Video compression method based on third-order tensor self-coding network - Google Patents

Video compression method based on third-order tensor self-coding network

Info

Publication number
CN109302614B
CN109302614B (application CN201811168316.6A)
Authority
CN
China
Prior art keywords
network
tensor
video
integer
equal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811168316.6A
Other languages
Chinese (zh)
Other versions
CN109302614A (en)
Inventor
刘光灿
李阳
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Xinshiyun Science And Technology Co ltd
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN201811168316.6A
Publication of CN109302614A
Application granted
Publication of CN109302614B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a video compression method based on a third-order tensor self-coding network. Self-coding (autoencoder) networks are widely used in image compression, but they require storing a large number of parameters and a large number of training images. The method therefore replaces the fully connected parameters between layers of the self-coding network with third-order tensors, iteratively solves the network parameters to convergence using the self-coding mechanism and back propagation, encodes the converged result, and finally obtains the compressed video.

Description

Video compression method based on third-order tensor self-coding network
Technical Field
The invention relates to a video compression method based on a third-order tensor self-coding network, and belongs to the technical field of video compression.
Background
In recent years, video technology has seen widespread use and rapid growth; it has been predicted that by 2020, 80% of internet traffic will be video traffic. Uncompressed video, however, occupies a large amount of storage space. The main existing approaches are the H.264 and H.265 video stream coding methods and neural-network-based methods. The H.264 and H.265 stream coding methods achieve a high compression rate but a low decompression speed; neural-network-based methods have relatively high compression efficiency but require a large number of images for training, have many network parameters, and occupy more memory on terminal devices.
Disclosure of Invention
The invention provides a video compression method based on a third-order tensor self-coding network that addresses the problems of the prior art and offers a high compression rate and a fast decompression speed.
To achieve this purpose, the technical solution provided by the invention is as follows: a video compression method based on a third-order tensor self-coding network, comprising the following steps:
step one, preprocess the target video;
step two, set the fully connected parameters between layers of the target-video self-coding network as third-order tensors and set the iteration termination conditions;
step three, perform forward propagation of the video network;
step four, judge whether the iteration has terminated; if so, jump to step six and output the core tensor and the decoding network parameters; otherwise, continue with step five;
step five, perform back propagation of the video network;
step six, code and compress the core tensor and the decoding network;
step seven, output the compressed video.
The technical solution is further designed as follows: in step two, the network parameters and the iteration termination conditions are set according to the required compression ratio and peak signal-to-noise ratio (PSNR).
The specific steps of step three are as follows: between the first-layer network (the input video) and the next-layer network, perform mode-1, mode-2 and mode-3 multiplication with the three factor matrices and map the result with a sigmoid function to obtain a new third-order tensor (the next-layer network); matrix and tensor multiplications are then carried out five more times in sequence to obtain the final third-order tensor (the output video).
The sigmoid function is:
sigmoid(x) = 1 / (1 + e^(-x))
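To illustrate this forward pass, the following Python sketch (an illustration only, not the patented implementation) applies the mode-1, mode-2 and mode-3 products followed by the sigmoid mapping layer by layer; the factor-matrix orientation (each matrix maps its mode from the old size to the new size) and the helper names are assumptions.

```python
import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x)), as given in the text
    return 1.0 / (1.0 + np.exp(-x))

def mode_n_products(X, A, B, C):
    # Mode-1, mode-2 and mode-3 products of the third-order tensor X with
    # the factor matrices A, B, C. Assumed shapes: X is (m, n, p), A is
    # (m_new, m), B is (n_new, n), C is (p_new, p).
    Y = np.einsum('ix,xyz->iyz', A, X)   # mode-1 product
    Y = np.einsum('jy,xyz->xjz', B, Y)   # mode-2 product
    Y = np.einsum('kz,xyz->xyk', C, Y)   # mode-3 product
    return Y

def forward(X, factors):
    # factors: list of (A, B, C) triples, one per layer transition.
    # Each transition computes sigmoid(X x1 A x2 B x3 C), as described above.
    for A, B, C in factors:
        X = sigmoid(mode_n_products(X, A, B, C))
    return X
```

With encoder factors of decreasing size and decoder factors of increasing size, six such transitions reproduce the input / five-hidden-layer / output structure described later in the detailed embodiment.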
The iteration termination condition is that either the error or the iteration count reaches its limit: iteration ends when the error value falls below the set error threshold, or when the number of iterations exceeds the set maximum.
The back propagation in step five proceeds as follows:
Step 5.1, solve the gradient from the output layer to the hidden layer.
According to the chain rule:
∂E_total/∂A(x,i) = Σ_{y,z} [∂E_total/∂Y_out(x,y,z)] · [∂Y_out(x,y,z)/∂A(x,i)]
where E_total is the loss function of the tensor self-coding network, Y_real is the input video and Y_out is the output video; 1 ≤ x ≤ m, 1 ≤ y ≤ n, 1 ≤ z ≤ p, 1 ≤ i ≤ r, 1 ≤ j ≤ s, 1 ≤ k ≤ t; A, B and C are the three factor matrices; m, r denote the size of matrix A, n, s the size of matrix B, and p, t the size of matrix C; x, y, z, i, j, k are integers in the stated ranges; m, n, p, r, s, t are positive integers; ℝ denotes the real numbers and ∂ the partial-derivative symbol.
The intermediate derivative expressions are given as formula images in the original publication; the gradients with respect to B and C are obtained in the same way.
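To make the chain-rule step concrete, the sketch below computes the output-layer gradients numerically. It assumes a squared-error loss E_total = ½·‖Y_real − Y_out‖² and a last-layer map Y_out = sigmoid(H ×1 A ×2 B ×3 C) with A of size m×r, B of size n×s and C of size p×t; since the patent's exact formulas appear only as images, this is an illustrative reconstruction under those assumptions, not the original derivation.

```python
import numpy as np

def output_layer_gradients(H, A, B, C, Y_real):
    # Forward map of the last layer: Y_out = sigmoid(H x1 A x2 B x3 C),
    # with H of shape (r, s, t), A (m, r), B (n, s), C (p, t).
    W = np.einsum('ns,rst->rnt', B, H)        # H x2 B       -> (r, n, t)
    W = np.einsum('pt,rnt->rnp', C, W)        # ... x3 C     -> (r, n, p)
    Z = np.einsum('mr,rnp->mnp', A, W)        # ... x1 A     -> (m, n, p)
    Y_out = 1.0 / (1.0 + np.exp(-Z))

    # Chain rule: dE/dZ = (Y_out - Y_real) * sigmoid'(Z)
    delta = (Y_out - Y_real) * Y_out * (1.0 - Y_out)

    # dE/dA[x, i] = sum_{y,z} delta[x, y, z] * W[i, y, z]
    grad_A = np.einsum('mnp,rnp->mr', delta, W)

    # Analogous expressions give dE/dB and dE/dC.
    V = np.einsum('mr,rst->mst', A, H)        # H x1 A       -> (m, s, t)
    V = np.einsum('pt,mst->msp', C, V)        # ... x3 C     -> (m, s, p)
    grad_B = np.einsum('mnp,msp->ns', delta, V)

    U = np.einsum('mr,rst->mst', A, H)        # H x1 A       -> (m, s, t)
    U = np.einsum('ns,mst->mnt', B, U)        # ... x2 B     -> (m, n, t)
    grad_C = np.einsum('mnp,mnt->pt', delta, U)
    return grad_A, grad_B, grad_C
```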
Step 5.2, solve the gradient from hidden layer to hidden layer.
According to the chain rule, the gradient with respect to a hidden-layer tensor W(a,b,c) is expressed through the gradient of the layer above, where 1 ≤ a ≤ u, 1 ≤ b ≤ v, 1 ≤ c ≤ w; u, v, w denote the size of the hidden-layer tensor W and are positive integers; a, b and c are integers in the stated ranges.
Solving the hidden-layer-to-hidden-layer gradient is similar to solving the output-layer-to-hidden-layer gradient, except that the term ∂E_total/∂W(a,b,c) is computed by a formula that appears as an image in the original publication; the other solving steps are the same as in step 5.1, and the gradients with respect to the remaining factor matrices are obtained in the same way.
Step 5.3, the gradients between all the remaining hidden layers are obtained using the method of step 5.2.
Step 5.4, the tensor self-encoding network parameters are updated using the ADAM (adaptive moment estimation) method and the gradients found in step 5.1, step 5.2 and step 5.3.
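Step 5.4 can be sketched with the standard ADAM update rule; the learning rate and decay constants below are the usual defaults and are not specified in the patent.

```python
import numpy as np

def adam_update(param, grad, m, v, step, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Standard ADAM (adaptive moment estimation) update for one parameter
    # tensor; m and v are the running first and second moment estimates.
    m = b1 * m + (1.0 - b1) * grad
    v = b2 * v + (1.0 - b2) * grad ** 2
    m_hat = m / (1.0 - b1 ** step)          # bias correction
    v_hat = v / (1.0 - b2 ** step)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```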
The specific steps for coding and compressing the core tensor and the decoding network in step six are as follows:
Step 6.1, extract and separate the integer and fractional parts of the obtained core tensor and decoding network parameters;
Step 6.2, code and compress the integer part using Huffman coding;
Step 6.3, multiply the separated fractional part by a scalar α and round the result;
Step 6.4, quantize and compress the integers obtained in step 6.3 using a β-bit binary representation;
Step 6.5, store the compression results of the integer and fractional parts.
Here α is 2043 and β is 11.
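A minimal sketch of steps 6.1 to 6.4, assuming the parameters are real-valued arrays: the integer part is separated for Huffman coding (the Huffman coder itself is not shown), and the fractional part is scaled by α = 2043, rounded, and checked against the β = 11-bit range (11 bits cover 0…2047, so the scaled values fit). The function name is hypothetical.

```python
import numpy as np

ALPHA = 2043   # scalar alpha from the text
BETA = 11      # bit width beta from the text

def split_and_quantize(params):
    # Step 6.1: separate integer and fractional parts.
    # np.floor keeps the fractional part in [0, 1) for any sign, so
    # int_part + frac_part reproduces the original value exactly.
    int_part = np.floor(params)
    frac_part = params - int_part
    # Step 6.3: scale the fractional part by alpha and round.
    scaled = np.rint(frac_part * ALPHA).astype(np.uint16)
    # Step 6.4: the scaled values must fit in a beta-bit representation.
    assert scaled.max() < 2 ** BETA
    # Step 6.2 (not shown): the integer part would be Huffman coded.
    return int_part.astype(np.int64), scaled
```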
The decompression steps corresponding to step seven are as follows:
Step 7.1, input the compression result obtained in step six;
Step 7.2, inverse-code the core tensor and the decoding network:
perform the inverse Huffman transform on the integer part to obtain the integer parts of the core tensor and the network;
take out the binary compressed representation of the fractional part and divide it by α to obtain the fractional parts of the core tensor and the network;
combine the fractional and integer parts to obtain the core tensor and the network parameters;
Step 7.3, decompress using tensor mode multiplication;
Step 7.4, output the decompressed video.
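A corresponding sketch of steps 7.2 and 7.3, under the same assumptions as the earlier compression and forward-pass sketches: the stored 11-bit values are divided back by α and recombined with the Huffman-decoded integer part, and the video is reconstructed by applying the recovered decoding-network factors to the core tensor with tensor mode multiplication. The function names are hypothetical.

```python
import numpy as np

ALPHA = 2043

def recombine(int_part, scaled):
    # Step 7.2: divide the stored 11-bit values by alpha and add back the
    # Huffman-decoded integer part to recover the parameter values.
    return int_part.astype(np.float64) + scaled.astype(np.float64) / ALPHA

def decompress(core, decoder_factors):
    # Step 7.3: apply the recovered factor matrices of the decoding network
    # to the core tensor with mode-1/2/3 products followed by the sigmoid
    # mapping, mirroring the forward pass described above.
    X = core
    for A, B, C in decoder_factors:
        X = np.einsum('ix,xyz->iyz', A, X)   # mode-1 product
        X = np.einsum('jy,xyz->xjz', B, X)   # mode-2 product
        X = np.einsum('kz,xyz->xyk', C, X)   # mode-3 product
        X = 1.0 / (1.0 + np.exp(-X))
    return X
```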
The invention has the beneficial effects that:
the invention carries out video compression through a tensor self-coding network, and the tensor has the capacity of expressing multi-dimensional complex data, so that the invention has higher compression rate and higher decompression speed than the prior method under the same condition, and can be an algorithm for monitoring video compression.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a back propagation flow chart.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
As shown in fig. 1, the video compression method based on the third-order tensor self-coding network of the present invention specifically includes the following steps:
Step one, input the target (three-dimensional) video and perform normalization and graying;
Step two, replace the fully connected parameters of the self-coding network with the three factor matrices per layer and set the iteration conditions;
The network parameters and the iteration termination conditions are set according to the required compression ratio and peak signal-to-noise ratio. The upper limit on the number of iterations is set to 10^5 and the per-pixel error threshold to 0.0012. The sizes of the five middle hidden layers are chosen according to the desired compression rate and PSNR: fewer hidden-layer nodes yield a larger compression rate, while more hidden-layer nodes yield a higher PSNR value. The third hidden layer has the fewest nodes; the first hidden layer has fewer nodes than the input layer and more than the second hidden layer; the first and fifth hidden layers have the same number of nodes; the second and fourth hidden layers have the same number of nodes; and the input and output layers have the same number of nodes.
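The size constraints above can be written down directly. The concrete numbers in the sketch below are hypothetical examples only (they do not come from the patent); they merely satisfy the stated relations: the third hidden layer is smallest, the first and fifth (and second and fourth) hidden layers match, and the output layer matches the input layer.

```python
# Hypothetical per-mode sizes (height, width, frames) for a seven-layer
# tensor autoencoder: input, five hidden layers, output.
layer_sizes = [
    (144, 176, 150),   # input layer
    (96, 120, 100),    # hidden 1
    (64, 80, 60),      # hidden 2
    (32, 40, 30),      # hidden 3 (smallest: the core tensor)
    (64, 80, 60),      # hidden 4 (= hidden 2)
    (96, 120, 100),    # hidden 5 (= hidden 1)
    (144, 176, 150),   # output layer (= input layer)
]

# The constraints described in the text:
assert layer_sizes[0] == layer_sizes[-1]
assert layer_sizes[1] == layer_sizes[5] and layer_sizes[2] == layer_sizes[4]
assert all(layer_sizes[3][d] <= s[d] for s in layer_sizes for d in range(3))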
Step three, forward propagation is carried out;
Between the first-layer network (the input video) and the next-layer network, perform mode-1, mode-2 and mode-3 multiplication with the three factor matrices and map the result with a sigmoid function to obtain a new third-order tensor (the next-layer network); matrix and tensor multiplications are then carried out five more times in sequence to obtain the final third-order tensor (the output video). The sigmoid function is:
sigmoid(x) = 1 / (1 + e^(-x))
Step four, judge whether the iteration termination condition has been reached: iteration ends when the error value falls below the set error threshold or when the number of iterations exceeds the set maximum. If the condition is met, jump to step six to obtain the core tensor and the decoding network; otherwise continue with step five.
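Putting steps three to five together, the training loop can be sketched as follows, using the thresholds of this embodiment (at most 10^5 iterations and a per-pixel error below 0.0012); forward, backward and update are placeholders for the forward-pass, back-propagation and ADAM routines sketched elsewhere in this description.

```python
MAX_ITERS = 10 ** 5        # iteration upper limit from the embodiment
PIXEL_ERR = 0.0012         # per-pixel error threshold from the embodiment

def train(Y_real, factors, forward, backward, update):
    for it in range(1, MAX_ITERS + 1):
        Y_out = forward(Y_real, factors)              # step three
        err = abs(Y_real - Y_out).mean()              # mean per-pixel error
        if err < PIXEL_ERR:                           # step four
            break
        grads = backward(Y_real, Y_out, factors)      # step five
        factors = update(factors, grads, it)
    return factors
```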
Step five, perform back propagation and then return to step three; the back-propagation procedure is shown in Fig. 2.
Step 5.1, solve the gradient from the output layer to the hidden layer: the output layer is the final third-order tensor, and apart from the input-layer and output-layer tensors, all other third-order tensors are hidden layers.
According to the chain rule we obtain:
Figure GDA0003394228220000051
wherein:
Figure GDA0003394228220000052
Etotalis a tensor self-encoding network loss function, YrealFor input of video, YoutIn order to output the video, the video is output,
Figure GDA0003394228220000053
x is more than or equal to 1 and less than or equal to m, y is more than or equal to 1 and less than or equal to n, z is more than or equal to 1 and less than or equal to p, i is more than or equal to 1 and less than or equal to r, j is more than or equal to 1 and less than or equal to s, k is more than or equal to 1 and less than or equal to t, A, B and C are three factor matrixes, m and r represent the size of the matrix A, n and s represent the size of the matrix B, p and t represent the size of the matrix C, x represents an integer between 1 and m, y represents an integer between 1 and n, and z represents 1 to pI represents an integer from 1 to r, j represents an integer from 1 to s, k represents an integer from 1 to t, and m, n, p, r, s, t are positive integers.
Figure GDA00033942282200000511
Represents a real number and a real number,
Figure GDA00033942282200000512
represents a derivation symbol;
Figure GDA0003394228220000054
Figure GDA0003394228220000055
Figure GDA0003394228220000056
the same can be obtained:
Figure GDA0003394228220000057
Figure GDA0003394228220000058
Step 5.2, solve the gradient from hidden layer to hidden layer:
According to the chain rule, the gradient with respect to a hidden-layer tensor W(a,b,c) is expressed through the gradient of the layer above, where 1 ≤ a ≤ u, 1 ≤ b ≤ v, 1 ≤ c ≤ w; u, v, w denote the size of the hidden-layer tensor W and are positive integers; a, b and c are integers in the stated ranges.
Solving the hidden-layer-to-hidden-layer gradient is similar to solving the output-layer-to-hidden-layer gradient, except that the term ∂E_total/∂W(a,b,c) is computed by a formula that appears as an image in the original publication; the other solving steps are the same as in step 5.1, and the gradients with respect to the remaining factor matrices are obtained in the same way.
Step 5.3, the gradients between all the remaining hidden layers are obtained using the method of step 5.2.
Step 5.4, updating network parameters:
the tensor self-encoding network parameters are updated using the ADAM method and the gradients found in steps 5.1 to 5.3.
Step six, code and compress the core tensor and the decoding network:
Step 6.1, extract and separate the integer and fractional parts of the obtained core tensor and decoding network parameters;
Step 6.2, code and compress the integer part using Huffman coding;
Step 6.3, multiply the separated fractional part by the scalar α and round the result, where α is 2043;
Step 6.4, quantize and compress the integers obtained in step 6.3 using a β-bit binary representation, where β is 11;
Step 6.5, store the compression results of the integer and fractional parts.
Step seven, outputting a compression result;
The decompression steps are as follows:
Step 7.1, input the compression result obtained in step six;
Step 7.2, inverse-code the core tensor and the decoding network:
perform the inverse Huffman transform on the integer part of the result to obtain the integer parts of the core tensor and the network;
take out the binary compressed representation of the fractional part and divide it by α to obtain the fractional parts of the core tensor and the network;
combine the fractional and integer parts to obtain the core tensor and the network parameters;
Step 7.3, decompress using tensor mode multiplication;
Step 7.4, output the decompressed video.
Table 1 lists evaluation metrics for the compression results of an embodiment of the method: peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and compression ratio. PSNR is measured in decibels and SSIM ranges from 0 to 1; larger PSNR and SSIM values indicate better image quality, and a larger compression ratio indicates stronger compression.
Video name          miss-america   bridge-close   bridge-far   claire    grandma
PSNR (dB)           33.05          37.68          45.48        33.39     33.47
SSIM                0.93           0.97           0.98         0.93      0.90
Compression ratio   112.35         79.35          88.73        183.64    134.59
Table 1. Quantitative analysis of the video compression results.
The technical solutions of the present invention are not limited to the above embodiments; all technical solutions obtained by equivalent substitution fall within the protection scope of the present invention.

Claims (6)

1. A video compression method based on a third-order tensor self-coding network, characterized by comprising the following steps:
step one, preprocessing the target three-dimensional video by graying and normalization;
step two, setting the fully connected parameters between layers of the target-video self-coding network as third-order tensors and setting the iteration termination conditions;
step three, performing forward propagation of the video network:
performing mode-1, mode-2 and mode-3 multiplication with three factor matrices between the first-layer network and the next-layer network, and mapping the result with a sigmoid function to obtain a new third-order tensor; carrying out matrix and tensor multiplications five more times in sequence to obtain the final third-order tensor; the input video is the first-layer network, the new third-order tensor obtained is the next-layer network, and the final third-order tensor is the output video;
step four, judging whether the iteration has terminated; if so, jumping to step six and outputting the core tensor and the decoding network parameters; otherwise, continuing with step five;
step five, performing back propagation of the video network;
step six, coding and compressing the core tensor and the decoding network:
step 6.1, extracting and separating the integer and fractional parts of the obtained core tensor and decoding network parameters;
step 6.2, coding and compressing the integer part using Huffman coding;
step 6.3, multiplying the separated fractional part by a scalar α and rounding the result, wherein α is 2043;
step 6.4, quantizing and compressing the integers obtained in step 6.3 using a β-bit binary representation, wherein β is 11;
step 6.5, storing the compression results of the integer and fractional parts;
step seven, outputting the compressed video.
2. The video compression method based on the third-order tensor self-coding network as recited in claim 1, wherein: in step two, the network parameters and the iteration termination conditions are set according to the required compression ratio and peak signal-to-noise ratio.
3. The video compression method based on the third-order tensor self-coding network as recited in claim 1, wherein the sigmoid function is:
sigmoid(x) = 1 / (1 + e^(-x))
4. The video compression method based on the third-order tensor self-coding network as recited in claim 2, wherein the iteration termination condition is: iteration ends when the error value falls below the set error threshold, or when the number of iterations exceeds the set maximum.
5. The video compression method based on the third order tensor self-coding network as recited in claim 1, wherein:
the back propagation in step five comprises the following steps:
step 5.1, solving the gradient from the output layer to the hidden layer;
according to the chain rule we obtain:
∂E_total/∂A(x,i) = Σ_{y,z} [∂E_total/∂Y_out(x,y,z)] · [∂Y_out(x,y,z)/∂A(x,i)]
wherein E_total is the loss function of the tensor self-coding network, Y_real is the input video and Y_out is the output video; 1 ≤ x ≤ m, 1 ≤ y ≤ n, 1 ≤ z ≤ p, 1 ≤ i ≤ r, 1 ≤ j ≤ s, 1 ≤ k ≤ t; A, B and C are the three factor matrices; m, r denote the size of matrix A, n, s the size of matrix B, and p, t the size of matrix C; x, y, z, i, j, k are integers in the stated ranges; m, n, p, r, s, t are positive integers; ℝ denotes the real numbers and ∂ the partial-derivative symbol;
the intermediate derivative expressions are given as formula images in the original publication, and the gradients with respect to B and C are obtained in the same way;
step 5.2, solving the gradient from the hidden layer to the hidden layer:
according to the chain rule we obtain:
the gradient with respect to a hidden-layer tensor W(a,b,c) is expressed through the gradient of the layer above, wherein 1 ≤ a ≤ u, 1 ≤ b ≤ v, 1 ≤ c ≤ w; u, v, w denote the size of the tensor W and are positive integers; a, b and c are integers in the stated ranges;
the remaining derivative expressions are given as formula images in the original publication, and the gradients with respect to the remaining factor matrices are obtained in the same way;
step 5.3, the gradients between all the remaining hidden layers are obtained using the method of step 5.2;
Step 5.4, the tensor self-encoding network parameters are updated using the ADAM (adaptive moment estimation) method and the gradients found in step 5.1, step 5.2 and step 5.3.
6. The video compression method based on the third-order tensor self-coding network as recited in claim 1, wherein the decompression steps corresponding to step seven are as follows:
step 7.1, inputting the compression result obtained in step six;
step 7.2, inverse-coding the core tensor and the decoding network:
performing the inverse Huffman transform on the integer part to obtain the integer parts of the core tensor and the network;
taking out the binary compressed representation of the fractional part and dividing it by α to obtain the fractional parts of the core tensor and the network;
combining the fractional and integer parts to obtain the core tensor and the network parameters;
step 7.3, decompressing using tensor mode multiplication;
step 7.4, outputting the decompressed video.
CN201811168316.6A 2018-10-08 2018-10-08 Video compression method based on third-order tensor self-coding network Active CN109302614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811168316.6A CN109302614B (en) 2018-10-08 2018-10-08 Video compression method based on third-order tensor self-coding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811168316.6A CN109302614B (en) 2018-10-08 2018-10-08 Video compression method based on third-order tensor self-coding network

Publications (2)

Publication Number Publication Date
CN109302614A CN109302614A (en) 2019-02-01
CN109302614B true CN109302614B (en) 2022-01-18

Family

ID=65161883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811168316.6A Active CN109302614B (en) 2018-10-08 2018-10-08 Video compression method based on third-order tensor self-coding network

Country Status (1)

Country Link
CN (1) CN109302614B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934458A (en) * 2015-12-31 2017-07-07 中国科学院深圳先进技术研究院 Multilayer automatic coding and system based on deep learning
CN107516129A (en) * 2017-08-01 2017-12-26 北京大学 The depth Web compression method decomposed based on the adaptive Tucker of dimension
CN107944556A (en) * 2017-12-12 2018-04-20 电子科技大学 Deep neural network compression method based on block item tensor resolution
CN107967516A (en) * 2017-10-12 2018-04-27 中科视拓(北京)科技有限公司 A kind of acceleration of neutral net based on trace norm constraint and compression method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10242313B2 (en) * 2014-07-18 2019-03-26 James LaRue Joint proximity association template for neural networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934458A (en) * 2015-12-31 2017-07-07 中国科学院深圳先进技术研究院 Multilayer automatic coding and system based on deep learning
CN107516129A (en) * 2017-08-01 2017-12-26 北京大学 The depth Web compression method decomposed based on the adaptive Tucker of dimension
CN107967516A (en) * 2017-10-12 2018-04-27 中科视拓(北京)科技有限公司 A kind of acceleration of neutral net based on trace norm constraint and compression method
CN107944556A (en) * 2017-12-12 2018-04-20 电子科技大学 Deep neural network compression method based on block item tensor resolution

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Improvement of the BP Algorithm and Research on Its Applications"; 黄庆斌 (Huang Qingbin); China Master's Theses Full-text Database, Information Science and Technology Series; 2010-10-15; pp. 12-16 of the main text *

Also Published As

Publication number Publication date
CN109302614A (en) 2019-02-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221123

Address after: Building B1, Software Valley Science and Technology City, No. 34, Dazhou Road, Yuhuatai District, Nanjing City, Jiangsu Province, 210012

Patentee after: Jiangsu Xinshiyun Science and Technology Co.,Ltd.

Address before: 210044 No. 219 Ning six road, Jiangbei new district, Nanjing, Jiangsu

Patentee before: Nanjing University of Information Science and Technology
