CN107170020A - Dictionary learning still image compression method based on minimum quantization error criterion

Dictionary learning still image compression method based on minimum quantization error criterion

Info

Publication number
CN107170020A
CN107170020A
Authority
CN
China
Prior art keywords
dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710417963.5A
Other languages
Chinese (zh)
Other versions
CN107170020B (en)
Inventor
夏勇
王昊
张艳宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710417963.5A priority Critical patent/CN107170020B/en
Publication of CN107170020A publication Critical patent/CN107170020A/en
Application granted granted Critical
Publication of CN107170020B publication Critical patent/CN107170020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 - Image coding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering
    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00 - Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/30 - Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M 7/3059 - Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00 - Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/30 - Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M 7/60 - General implementation details not specific to a particular type of compression
    • H03M 7/6041 - Compression optimized for errors
    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00 - Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/30 - Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M 7/60 - General implementation details not specific to a particular type of compression
    • H03M 7/6064 - Selection of Compressor
    • H03M 7/6082 - Selection strategies
    • H03M 7/6088 - Selection strategies according to the data type

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a dictionary learning still image compression method based on a minimum quantization error criterion, addressing the technical problem that existing still image compression methods suffer from large quantization error. In the technical scheme, the information entropy of the indices corresponding to the sparse coefficients is added to the objective function of sparse coding as a regularization term; when dictionary atoms are selected with the orthogonal matching pursuit algorithm, minimizing this entropy limits the dispersion of the selected atoms and reduces the coding cost of the coefficient indices. At the same time, during dictionary learning, the sparse coefficients are sorted and divided into the k ascending subsequences that minimize their total sum of squared deviations; each subsequence serves as one quantization group, with different quantization steps used across quantization groups and an identical quantization step used within a quantization group, so that the final quantization error is minimized.

Description

Dictionary learning still image compression method based on minimum quantization error criterion
Technical field
The present invention relates to a still image compression method, and more particularly to a dictionary learning still image compression method based on a minimum quantization error criterion.
Background art
Document " Compressibility constrained sparse representation with learnt Dictionary for low bit-rate image compression, IEEE Transactions on Circuits And Systems for Video Technology, 2014, Vol24 (10), p1743-1757 " discloses a kind of based on convex pine Relax and the sparse coding method of compression constraint is used for the lossy compression method of image.This method uses the sparse coding generation based on convex relaxation Traditional tracking matching algorithm has been replaced, the openness and stability of image representation coefficients is enhanced.Meanwhile, compression constraint is added Into the solution procedure of sparse coding, sparse coding problem is converted intoNorm optimization problem, this is approached by loop iteration The optimal solution of problem, so as to obtain sparse coefficient of the image on given super complete dictionary.Finally, by entering to sparse coefficient Row quantifies and entropy code obtains the compression image code stream of low bit- rate.The dictionary that document methods described is chosen on super complete dictionary is former Son is dispersed in whole dictionary space so that the Global Information entropy of dictionary atom is higher, it is difficult to carry out efficient coding compression;Separately Outside, in document methods described, quantization table is learnt using K-means algorithms, and the process is independently of dictionary learning process, therefore Quantization table can not the super complete dictionary that arrives of adaptive learning, cause quantization error to become big.
Summary of the invention
In order to overcome the shortcoming that existing still image compression methods have large quantization error, the present invention provides a dictionary learning still image compression method based on a minimum quantization error criterion. The method adds the information entropy of the indices corresponding to the sparse coefficients to the objective function of sparse coding as a regularization term; when dictionary atoms are selected with the orthogonal matching pursuit algorithm, minimizing this entropy limits the dispersion of the selected atoms and reduces the coding cost of the coefficient indices. Meanwhile, during dictionary learning, the sparse coefficients are sorted and divided into the k ascending subsequences that minimize their total sum of squared deviations; each subsequence serves as one quantization group, with different quantization steps used across quantization groups and an identical quantization step used within a quantization group, so that the final quantization error is minimized.
The technical solution adopted by the present invention to solve the technical problem is a dictionary learning still image compression method based on a minimum quantization error criterion, characterized by comprising the following steps:
Step 1: Block and standardize the training images. All training images are divided into 16 × 16 image blocks, and each image block is standardized according to formula (1):

$$v_{ij} = \frac{v_{ij} - \mu}{\sqrt{\frac{1}{m \times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(v_{ij} - \mu\right)^{2}}}, \qquad \mu = \frac{1}{m \times n}\sum_{i=1}^{m}\sum_{j=1}^{n} v_{ij} \tag{1}$$

where v_ij denotes the gray value of the pixel at coordinate (i, j) of the image block, and m and n denote the length and width of the image block, respectively. Each image block is stretched into a vector; these vectors constitute the input signal of dictionary learning.
Step 2: First perform preliminary clustering on part of the image blocks with a self-organizing feature map, then cluster all image blocks with the K-means algorithm. During clustering, the distance between any two image blocks is measured by the Euclidean distance

$$d(s_i, s_j) = \|s_i - s_j\|_2^2 \tag{2}$$

where s_i and s_j denote any two different image block vectors and d(s_i, s_j) denotes their Euclidean distance.
Step 3: Train an overcomplete dictionary and a quantization table for each cluster with a dictionary learning algorithm; each atom in the dictionary is a structural pattern shared by the image blocks of that cluster. The quantization error of the sparse coefficients and the information entropy of their corresponding indices are added to the objective function of dictionary learning as regularization terms, and the quantization table and the dictionary are learnt simultaneously by iterating formulas (3) and (5), reducing the final coding cost:

$$\hat{A} = \arg\min_{A}\left\{\|S - DA\|_F^2 - \lambda \sum_{i=1}^{M} p_i \log p_i\right\} \quad \text{s.t.}\ \|a_{\cdot j}\|_0 \le k_{\max},\ 1 \le j \le N \tag{3}$$

$$\hat{D} = \arg\min_{D} \|S - D \cdot Q(A)\|_F^2 \quad \text{s.t.}\ \|a_{\cdot j}\|_0 \le k_{\max},\ 1 \le j \le N \tag{5}$$

where S is the original input signal matrix, whose columns are the input signals s_i obtained by stretching the image blocks; D is the dictionary; A is the sparse coefficient matrix; a_{·j} is the j-th column of A, representing the coefficients of the decomposition of signal s_j on dictionary D; k_max is the sparsity limit; p_i is the use probability of dictionary atom d_i; and M is the number of atoms in the dictionary. Then the dictionary and quantization table of each cluster are spliced into a global dictionary and a global quantization table, respectively, which are stored at the coding end and the decoding end.
Step 4: During image coding, the image is divided into a DC component and an AC component. The DC component is DPCM coded. The AC component is decomposed on the global dictionary by sparse coding to obtain the corresponding sparse coefficient matrix, which is quantized with the global quantization table; Huffman coding is then applied to the nonzero elements and their indices in the quantized sparse coefficient matrix of the AC component, forming the final code stream.
Step 5: Decoding is the inverse of coding. The dictionary is multiplied by the sparse coefficient matrix to reconstruct the signal matrix S; the corresponding DC component is then added to each column of the signal matrix and the columns are rearranged, so as to recover the image.
The beneficial effects of the invention are as follows. The method adds the information entropy of the indices corresponding to the sparse coefficients to the objective function of sparse coding as a regularization term; when dictionary atoms are selected with the orthogonal matching pursuit algorithm, minimizing this entropy limits the dispersion of the selected atoms and reduces the coding cost of the coefficient indices. Meanwhile, during dictionary learning, the sparse coefficients are sorted and divided into the k ascending subsequences that minimize their total sum of squared deviations; each subsequence serves as one quantization group, with different quantization steps used across quantization groups and an identical quantization step used within a quantization group, so that the final quantization error is minimized.
The present invention is elaborated below with reference to an embodiment.
Embodiment
The dictionary learning still image compression method based on the minimum quantization error criterion of the present invention comprises the following concrete steps:
1. Image blocking and standardization.
All training images are divided, in raster scan order with a translation step of 2, into 16 × 16 image blocks b_i; each image block b_i is then standardized according to formula (1). Finally, each image block is stretched into a vector; these vectors constitute the input signal of dictionary learning.
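As an illustration of this step, a minimal Python sketch; the function names, the eps guard, and the NumPy array representation are assumptions, not part of the patent:

```python
import numpy as np

def extract_blocks(image, block=16, stride=2):
    """Divide a grayscale image into overlapping block x block patches
    in raster scan order, with translation step `stride`."""
    h, w = image.shape
    return [image[i:i + block, j:j + block].astype(np.float64)
            for i in range(0, h - block + 1, stride)
            for j in range(0, w - block + 1, stride)]

def standardize(patch, eps=1e-8):
    """Formula (1): subtract the mean gray value and divide by the
    standard deviation computed over the m x n patch."""
    mu = patch.mean()
    sigma = np.sqrt(((patch - mu) ** 2).mean())
    return (patch - mu) / (sigma + eps)

# Each standardized patch is stretched into one column of the signal matrix S:
# S = np.stack([standardize(p).reshape(-1) for p in extract_blocks(img)], axis=1)
```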
2. Image block clustering.
To guarantee the purity of the learnt dictionary, the image blocks must first be clustered so that every class contains only similar image blocks. Because the training images are divided into overlapping blocks, the number of blocks obtained is large; therefore 10% of the blocks are first randomly sampled from all image blocks and preliminarily clustered with the self-organizing feature map algorithm, yielding the preliminary number of clusters k and the preliminary cluster centres. Then, with the preliminary cluster centres as initial values, all image blocks are clustered with the K-means algorithm. During clustering, the distance between any two image blocks is measured by the Euclidean distance of formula (2).
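The two-stage clustering of this step might look as follows; the tiny one-dimensional SOM is a simplified stand-in for a full self-organizing feature map, and the cluster count of 64 is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def som_centers(sample, k, iters=5000, lr0=0.5, seed=0):
    """Simplified 1-D self-organizing feature map: returns k centres
    used only to seed the subsequent K-means run."""
    rng = np.random.default_rng(seed)
    W = sample[rng.choice(len(sample), k, replace=False)].copy()
    sigma0 = k / 2.0
    for t in range(iters):
        x = sample[rng.integers(len(sample))]
        lr = lr0 * (1.0 - t / iters)
        sigma = max(sigma0 * (1.0 - t / iters), 0.5)
        win = np.argmin(((W - x) ** 2).sum(axis=1))       # best-matching unit
        h = np.exp(-((np.arange(k) - win) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                    # pull neighbours toward x
    return W

# blocks: an N x (16*16) matrix of stretched image blocks
# sample = blocks[np.random.choice(len(blocks), len(blocks) // 10, replace=False)]
# centers = som_centers(sample, k=64)
# labels = KMeans(n_clusters=64, init=centers, n_init=1).fit_predict(blocks)
```

K-means itself minimizes exactly the squared Euclidean distance of formula (2), so no custom metric is needed.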
3. Dictionary learning and quantization table learning.
In traditional sparse coding algorithms, because a signal is decomposed on an overcomplete dictionary, several different linear combinations of dictionary atoms can reconstruct the same signal s: signal s_1 can be reconstructed by the linear combination of atoms d_i and d_j, s_1 = α_1 d_i + β_1 d_j, while signal s_2 can be reconstructed either as s_2 = α_2 d_p + β_2 d_q or as s_2 = α_2′ d_i + β_2′ d_j. In the latter reconstruction, s_1 and s_2 use the same dictionary atoms, so the selection of atoms is sparser and shorter code words suffice when coding the corresponding index values, which benefits the compression ratio. In the present invention, the information entropy of the dictionary atom index values is added to the objective function of sparse coding as a regularization term, so that the final coding cost is minimal. The modified objective function is formula (3):

$$\hat{A} = \arg\min_{A}\left\{\|S - DA\|_F^2 - \lambda \sum_{i=1}^{M} p_i \log p_i\right\} \quad \text{s.t.}\ \|a_{\cdot j}\|_0 \le k_{\max},\ 1 \le j \le N \tag{3}$$
The p_i in the formula is calculated with formula (4):

$$p_i = \frac{\|a_{i\cdot}\|_0}{\sum_{m=1}^{M}\|a_{m\cdot}\|_0 + \delta} \tag{4}$$

where a_{i·} denotes the i-th row of the sparse coefficient matrix A, and δ is a very small real number that prevents the denominator from being 0.
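One hedged reading of how the entropy term of formula (3) enters atom selection in orthogonal matching pursuit is to bias the usual correlation score toward atoms with high use probability p_i, which concentrates the selected atoms and lowers the index entropy; the additive scoring rule, λ, and usage counter below are illustrative assumptions rather than the patent's exact update:

```python
import numpy as np

def entropy_regularized_omp(D, s, k_max, usage, lam=0.1, delta=1e-8):
    """OMP favouring frequently used atoms. D: dim x M dictionary with
    unit-norm columns; usage: count of past selections per atom."""
    p = (usage + delta) / (usage.sum() + delta * len(usage))  # cf. formula (4)
    residual, support = s.copy(), []
    for _ in range(k_max):
        score = np.abs(D.T @ residual) + lam * np.log(p)  # entropy bias
        score[support] = -np.inf                          # no reselection
        support.append(int(np.argmax(score)))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, s, rcond=None)     # least squares on support
        residual = s - Ds @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a  # one column of the sparse coefficient matrix A
```

After coding each signal, the usage counts of the selected atoms would be incremented so that p adapts over the training set.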
The sparse coefficient matrix A of the original signal matrix is obtained after sparse coding. Because the elements of A are all floating-point numbers, they are difficult to compress and store, so A must be quantized as Q(A), and the dictionary D must then be updated again to minimize the reconstruction error. The process of updating the dictionary is called dictionary learning, and its objective function is formula (5):

$$\hat{D} = \arg\min_{D} \|S - D \cdot Q(A)\|_F^2 \quad \text{s.t.}\ \|a_{\cdot j}\|_0 \le k_{\max},\ 1 \le j \le N \tag{5}$$
where Q(·) is the quantization function. During quantization, in order to reduce the overall quantization error, non-uniform quantization is adopted: a large quantization step is used for larger sparse coefficients and a small quantization step for smaller ones. To realize this, each nonzero element of the sparse coefficient matrix A is expressed in the form (a_ij, idx_ij), where a_ij denotes the nonzero element and idx_ij its corresponding index. First, the a_ij are sorted in ascending order to obtain the ascending sequence L. Then L is divided into k ascending subsequences {L_1, L_2, ..., L_k} such that the overall sum of squared deviations after division is minimal, i.e. satisfying formula (6):

$$\min_{\{L_1,\ldots,L_k\}} \sum_{i=1}^{k} \sum_{l_j \in L_i} \left(l_j - \bar{L}_i\right)^2 \tag{6}$$
where \bar{L}_i denotes the mean of subsequence L_i and l_j denotes the j-th element of L_i. The k subsequences then correspond to k different quantization groups; different quantization steps are used across quantization groups, and an identical quantization step is used within a quantization group. During quantization, an eight-bit binary code e_0 e_1 e_2 e_3 e_4 e_5 e_6 e_7 represents each sparse coefficient: the sparse coefficients are divided into eight quantization groups, where e_0 represents the sign of the coefficient, e_1 e_2 e_3 represent the quantization group to which the coefficient belongs, and e_4 e_5 e_6 e_7 represent the dynamic range of the amplitude of the quantized coefficient. The quantization step of the n-th quantization group is

$$\Delta_n = \frac{L_n^{\max} - L_n^{\min}}{2^4} \tag{7}$$
where L_n^min and L_n^max denote the minimum and maximum values represented by quantization group L_n, respectively. The quantized value of coefficient a_ij is then

$$\hat{a}_{ij} = \mathrm{round}\!\left(\frac{a_{ij} - L_n^{\min}}{\Delta_n}\right) \tag{8}$$
All the Δ_n together constitute the quantization table.
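Because L is sorted, the optimal division of formula (6) is a one-dimensional partition into k contiguous groups and can be found exactly by dynamic programming over split points; a sketch under that assumption (prefix sums make each group's sum of squared deviations O(1)):

```python
import numpy as np

def optimal_partition(values, k):
    """Split the ascending sequence L into k contiguous subsequences
    minimizing the total within-group sum of squared deviations (formula (6))."""
    L = np.sort(np.asarray(values, dtype=np.float64))
    n = len(L)
    pref = np.concatenate(([0.0], np.cumsum(L)))
    pref2 = np.concatenate(([0.0], np.cumsum(L ** 2)))

    def ssd(a, b):  # sum of squared deviations of L[a:b] around its mean
        s = pref[b] - pref[a]
        return (pref2[b] - pref2[a]) - s * s / (b - a)

    cost = np.full((k + 1, n + 1), np.inf)
    split = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for g in range(1, k + 1):
        for b in range(g, n + 1):
            for a in range(g - 1, b):
                c = cost[g - 1, a] + ssd(a, b)
                if c < cost[g, b]:
                    cost[g, b], split[g, b] = c, a
    bounds, b = [], n                       # backtrack group boundaries
    for g in range(k, 0, -1):
        a = split[g, b]
        bounds.append((a, b))
        b = a
    return [L[a:b] for a, b in reversed(bounds)]  # groups L_1 ... L_k

# Each returned group becomes one quantization group with its own step size.
```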
Throughout dictionary learning, formulas (3) and (5) are optimized by alternating iteration, yielding the optimal dictionary D and the sparse coefficient matrix A of the original signal matrix. The learnt dictionary and quantization table are stored at the coding end and the decoding end, respectively, for use during image coding and decoding.
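The alternating optimization, sketched at driver level; the iteration budget, the pseudo-inverse dictionary update, and the assumed helper `quantize_dequantize` (which applies formulas (6) to (8)) are illustrative choices:

```python
import numpy as np

def learn_dictionary(S, D0, k_max, n_iter=20, lam=0.1):
    """Alternate sparse coding (formula (3)) and dictionary update (formula (5))."""
    D = D0 / np.linalg.norm(D0, axis=0)          # unit-norm atoms
    usage = np.ones(D.shape[1])
    for _ in range(n_iter):
        A = np.stack([entropy_regularized_omp(D, S[:, j], k_max, usage, lam)
                      for j in range(S.shape[1])], axis=1)
        usage += (A != 0).sum(axis=1)            # update atom use counts
        Aq = quantize_dequantize(A)              # Q(A) via the learnt quantization table
        D = S @ np.linalg.pinv(Aq)               # least-squares solution of formula (5)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, A
```

Here `entropy_regularized_omp` is the sketch above; the pseudo-inverse step is the closed-form minimizer of the Frobenius objective over D.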
4. Image coding.
During image coding, the image is first divided, in raster scan order, into non-overlapping 16 × 16 image blocks. Each image block is then split into a DC component and an AC component. The DC component is coded with DPCM. The AC component is sparse coded on the learnt dictionary to obtain the sparse coefficient matrix, which is quantized with the learnt quantization table. Finally, Huffman coding is applied to the nonzero elements and their indices in the quantized sparse coefficient matrix, forming the code stream.
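A hedged sketch of this coding step; `sparse_code` stands for a coder such as the OMP sketch of section 3, `quantize` applies the learnt quantization table, and the final Huffman stage (any standard Huffman coder) is omitted:

```python
import numpy as np

def encode_blocks(blocks, D, sparse_code, quantize):
    """blocks: non-overlapping 16x16 blocks in raster order;
    D: learnt global dictionary. Returns the DPCM-coded DC components
    and the quantized sparse AC coefficient matrix."""
    dc = np.array([b.mean() for b in blocks])
    dpcm = np.diff(dc, prepend=0.0)              # DPCM: code successive DC differences
    cols = []
    for b in blocks:
        ac = (b - b.mean()).reshape(-1)          # AC component = block minus its DC
        cols.append(quantize(sparse_code(D, ac)))
    # the nonzero elements of the matrix and their indices are then Huffman coded
    return dpcm, np.stack(cols, axis=1)
```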
5. Image decoding.
During image decoding, the DC components and the sparse coefficient matrix are first recovered from the code stream. Then the dictionary is multiplied by the sparse coefficient matrix to reconstruct the signal matrix. Finally, the corresponding DC component is added to each column of the signal matrix and the columns are rearranged, recovering the image blocks; the original image is recovered by splicing the image blocks.
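Decoding, as a matching sketch; `A_hat` is assumed to be the dequantized sparse coefficient matrix recovered from the code stream:

```python
import numpy as np

def decode_blocks(dpcm, A_hat, D, block=16):
    """Inverse of the coding step: rebuild each 16x16 block from its
    sparse coefficients and DPCM-coded DC component."""
    dc = np.cumsum(dpcm)        # invert DPCM to recover the DC components
    S = D @ A_hat               # reconstruct the AC signal matrix
    # splicing the blocks in raster order recovers the original image
    return [S[:, i].reshape(block, block) + dc[i] for i in range(S.shape[1])]
```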

Claims (1)

1. A dictionary learning still image compression method based on a minimum quantization error criterion, characterized by comprising the following steps:
Step 1: Block and standardize the training images. All training images are divided into 16 × 16 image blocks, and each image block is standardized according to formula (1):

$$v_{ij} = \frac{v_{ij} - \mu}{\sqrt{\frac{1}{m \times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(v_{ij} - \mu\right)^{2}}}, \qquad \mu = \frac{1}{m \times n}\sum_{i=1}^{m}\sum_{j=1}^{n} v_{ij} \tag{1}$$

where v_ij denotes the gray value of the pixel at coordinate (i, j) of the image block, and m and n denote the length and width of the image block, respectively; each image block is stretched into a vector, and these vectors constitute the input signal of dictionary learning;
Step 2: First perform preliminary clustering on part of the image blocks with a self-organizing feature map, then cluster all image blocks with the K-means algorithm; during clustering, the distance between any two image blocks is measured by the Euclidean distance

$$d(s_i, s_j) = \|s_i - s_j\|_2^2 \tag{2}$$

where s_i and s_j denote any two different image block vectors and d(s_i, s_j) denotes their Euclidean distance;
Step 3: Train an overcomplete dictionary and a quantization table for each cluster with a dictionary learning algorithm, each atom in the dictionary being a structural pattern shared by the image blocks of that cluster; add the quantization error of the sparse coefficients and the information entropy of their corresponding indices to the objective function of dictionary learning as regularization terms, and learn the quantization table and the dictionary simultaneously by iterating formulas (3) and (5), reducing the final coding cost:

$$\hat{A} = \arg\min_{A}\left\{\|S - DA\|_F^2 - \lambda \sum_{i=1}^{M} p_i \log p_i\right\} \quad \text{s.t.}\ \|a_{\cdot j}\|_0 \le k_{\max},\ 1 \le j \le N \tag{3}$$

$$\hat{D} = \arg\min_{D} \|S - D \cdot Q(A)\|_F^2 \quad \text{s.t.}\ \|a_{\cdot j}\|_0 \le k_{\max},\ 1 \le j \le N \tag{5}$$

where S is the original input signal matrix, whose columns are the input signals s_i obtained by stretching the image blocks; D is the dictionary; A is the sparse coefficient matrix; a_{·j} is the j-th column of A, representing the coefficients of the decomposition of signal s_j on dictionary D; k_max is the sparsity limit; p_i is the use probability of dictionary atom d_i; and M is the number of atoms in the dictionary; then splice the dictionary and quantization table of each cluster into a global dictionary and a global quantization table, respectively, stored at the coding end and the decoding end;
Step 4: During image coding, divide the image into a DC component and an AC component; DPCM code the DC component; decompose the AC component on the global dictionary by sparse coding to obtain the corresponding sparse coefficient matrix, quantize this sparse coefficient matrix with the global quantization table, and then apply Huffman coding to the nonzero elements and their indices in the quantized sparse coefficient matrix of the AC component, forming the final code stream;
Step 5: Decoding is the inverse of coding; multiply the dictionary by the sparse coefficient matrix to reconstruct the signal matrix S, then add the corresponding DC component to each column of the signal matrix and rearrange the columns, so as to recover the image.
CN201710417963.5A 2017-06-06 2017-06-06 Dictionary learning still image compression method based on minimum quantization error criterion Active CN107170020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710417963.5A CN107170020B (en) 2017-06-06 2017-06-06 Dictionary learning still image compression method based on minimum quantization error criterion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710417963.5A CN107170020B (en) 2017-06-06 2017-06-06 Dictionary learning still image compression method based on minimum quantization error criterion

Publications (2)

Publication Number Publication Date
CN107170020A 2017-09-15
CN107170020B (en) 2019-06-04

Family

ID=59825586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710417963.5A Active CN107170020B (en) 2017-06-06 2017-06-06 Dictionary learning still image compression method based on minimum quantization error criterion

Country Status (1)

Country Link
CN (1) CN107170020B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100014758A1 (en) * 2008-07-15 2010-01-21 Canon Kabushiki Kaisha Method for detecting particular object from image and apparatus thereof
CN102142139A (en) * 2011-03-25 2011-08-03 西安电子科技大学 Compressed learning perception based SAR (Synthetic Aperture Radar) high-resolution image reconstruction method
CN103489203A (en) * 2013-01-31 2014-01-01 清华大学 Image coding method and system based on dictionary learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JULIEN MAIRAL et al.: "Online Dictionary Learning for Sparse Coding", Proceedings of the 26th International Conference *
MAI XU et al.: "Compressibility Constrained Sparse Representation With Learnt Dictionary for Low Bit-Rate Image Compression", IEEE Transactions on Circuits and Systems for Video Technology *
SUJIT KUMAR SAHOO et al.: "Signal Recovery from Random Measurements via Extended Orthogonal Matching Pursuit", IEEE Transactions on Signal Processing *
ZHANG XUN et al.: "Color reconstruction algorithm for grayscale images based on dictionary learning and sparse representation", Journal of Computer-Aided Design & Computer Graphics *
ZHENG XINGMING et al.: "Image denoising based on dictionary learning regularization", Computer Engineering *
YOU XIA et al.: "Medical image compression algorithm based on improved K-SVD dictionary learning", Journal of Southwest University of Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113454975A (en) * 2018-12-13 2021-09-28 马特瑞勒耶斯公司 Method, computer program product and system for representing visual information
CN111274349A (en) * 2020-01-21 2020-06-12 北方工业大学 Public security data hierarchical indexing method and device based on information entropy
CN113922823A (en) * 2021-10-29 2022-01-11 电子科技大学 Social media information propagation graph data compression method based on constraint sparse representation
CN113922823B (en) * 2021-10-29 2023-04-21 电子科技大学 Social media information propagation graph data compression method based on constraint sparse representation
WO2024032775A1 (en) * 2022-08-12 2024-02-15 华为技术有限公司 Quantization method and apparatus

Also Published As

Publication number Publication date
CN107170020B (en) 2019-06-04

Similar Documents

Publication Publication Date Title
CN107170020B (en) Dictionary learning still image compression method based on minimum quantization error criterion
CN107516129B (en) Dimension self-adaptive Tucker decomposition-based deep network compression method
CN106157339A (en) The animated Mesh sequence compaction algorithm extracted based on low-rank vertex trajectories subspace
US11436228B2 (en) Method for encoding based on mixture of vector quantization and nearest neighbor search using thereof
CN104867165B (en) A kind of method for compressing image based on transform domain down-sampling technology
CN108984642A (en) A kind of PRINTED FABRIC image search method based on Hash coding
CN110278444B (en) Sparse representation three-dimensional point cloud compression method adopting geometric guidance
Wu et al. Learning product codebooks using vector-quantized autoencoders for image retrieval
CN107992611A (en) The high dimensional data search method and system of hash method are distributed based on Cauchy
CN108846873A (en) A kind of Medical Image Lossless Compression method based on gray probability
Barbalho et al. Hierarchical SOM applied to image compression
CN116939226A (en) Low-code-rate image compression-oriented generated residual error repairing method and device
CN109712205A (en) A kind of compression of images perception method for reconstructing based on non local self similarity model
CN105260736A (en) Fast image feature representing method based on normalized nonnegative sparse encoder
CN114612716A (en) Target detection method and device based on adaptive decoder
CN112702600B (en) Image coding and decoding neural network layered fixed-point method
WO2022057091A1 (en) Encoding method, decoding method, encoding device, and decoding device for point cloud attribute
CN116600119B (en) Video encoding method, video decoding method, video encoding device, video decoding device, computer equipment and storage medium
CN106331719A (en) K-L transformation error space dividing based image data compression method
CN105718858B (en) A kind of pedestrian recognition method based on positive and negative broad sense maximum pond
CN116843830A (en) Mask image modeling algorithm based on self-supervision learning
CN110349228B (en) Triangular mesh compression method for data-driven least square prediction
Kekre et al. Vector quantized codebook optimization using modified genetic algorithm
Zhu et al. Learning low-rank representations for model compression
CN116740414B (en) Image recognition method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant