CN110956669A - Image compression coding method and system - Google Patents


Info

Publication number
CN110956669A
CN110956669A
Authority
CN
China
Prior art keywords
image
neural network
training
network
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911093067.3A
Other languages
Chinese (zh)
Inventor
蔡毫丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201911093067.3A
Publication of CN110956669A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an image compression coding method comprising the following steps: extracting image pixels from a sample picture and performing image blocking to obtain the column vectors of the straightened grayscale matrices of the sub-image blocks; establishing a BP neural network; inputting each blocked column vector into the BP neural network and carrying out simulation training to obtain a group of output vectors; and performing image reconstruction on the group of output vectors, restoring them to a target image. The invention also provides an image compression coding system. By combining the BP neural network with image compression into a novel image compression coding method and system, the invention improves the quality of image reconstruction through the nonlinear mapping capability, good fault tolerance and associative memory function of the BP neural network, and addresses the long compression time and low compression ratio of traditional image compression through its high parallelism and distributed information storage.

Description

Image compression coding method and system
Technical Field
The present invention relates to an image processing method and system, and in particular, to an image compression encoding method and system.
Background
With the rapid development of microelectronics, computers, communication and other technologies, images have become one of the most important carriers of information exchange in human life. The storage, transmission and exchange of image information place ever greater demands: large volumes of images require not only higher-capacity storage media but also wider transmission bands and longer transmission times, which is the most fundamental problem in digital image processing. To use the valuable resources of modern communication services and information processing effectively, images with large data volumes must be compressed. Most traditional image compression methods encode, stack, compress and decode multiple images through a complex sequence of steps, so image compression is time-consuming, the compression ratio is low, and the compression error is large.
Disclosure of Invention
In view of this, the present invention provides an image compression encoding method, including:
s1, extracting image pixels from the sample picture, and performing image blocking processing to obtain column vectors of straightened sub-image block gray matrix;
s2, establishing a BP neural network;
s3, inputting each column vector after being blocked into a BP neural network respectively, and carrying out simulation training to obtain a group of output vectors;
and S4, carrying out image reconstruction on the group of output vectors of the BP neural network, and restoring the group of output vectors into a target image.
As a further improvement of the present invention, in S1, in the image blocking process, the sample image of Y × Y pixels is first divided into P × P sub-image blocks of n × n pixels, and the grayscale matrix of each n × n sub-image block is then straightened into an n² × 1 column vector, giving P × P column vectors of size n² × 1 in total.
As a further improvement of the present invention, S2 specifically includes:
s21, establishing a network structure of the BP neural network, wherein the network structure comprises the number of neurons of an input layer, a hidden layer and an output layer;
s22, determining the network layer number of the BP neural network;
s23, selecting a transfer function and a training function;
and S24, confirming initial parameters of the BP neural network, including weight vectors of each layer, counter values, network training precision values and maximum training times values.
As a further improvement of the present invention, the image compression ratio equals the number of input layer neurons divided by the number of hidden layer neurons, the number of input layer neurons equals the pixel count of a sub image block, and the number of output layer neurons equals the number of input layer neurons.
As a further improvement of the invention, the transfer function is a non-linear transfer function.
As a further improvement of the invention, the nonlinear transfer function is chosen to be a bipolar sigmoid function.
As a further improvement of the invention, the training function uses the steepest descent method or Newton's method.
As a further improvement of the present invention, S3 specifically includes:
s31, setting initial parameters of the BP neural network, including weight vectors of each layer, counter values, network training precision values and maximum training times values;
s32, inputting each column vector to be trained and an expected output vector in the BP neural network;
s33, solving the output vector values of the units of the hidden layer and the output layer;
s34, calculating the deviation between the expected output vector and the actual output vector;
s35, judging whether the error meets the network training precision, if so, finishing the training, and if not, executing S36;
s36, judging whether the maximum training times is reached, if so, ending the training, and if not, executing S37;
s37, calculating error signals of each layer;
s38, adjusting the weight vector of each layer according to the error signal, and then returning to execute S33.
As a further improvement of the present invention, S4 specifically includes: transforming each output vector from an n² × 1 column vector back into an n × n grayscale matrix, assembling the P × P sub-image blocks of n × n pixels into a P × P block array, and finally restoring the result to an image of Y × Y pixels.
The present invention also provides an image compression encoding system, comprising:
the image preprocessing module is used for extracting image pixels from the sample picture, carrying out image blocking processing and obtaining a column vector for straightening the gray matrix of the sub-image block;
a network establishing module for establishing a BP neural network;
the network training module is used for respectively inputting each column vector after being partitioned into a BP neural network, and carrying out simulation training to obtain a group of output vectors;
and the image reconstruction module is used for reconstructing the group of output vectors of the BP neural network into a target image.
The invention has the beneficial effects that:
the invention applies the BP neural network, based on error backpropagation, to image compression, combining the two into a novel image compression coding method and system. It improves the quality of image reconstruction through the BP neural network's nonlinear mapping capability, good fault tolerance and associative memory function, and addresses the long compression time and low compression ratio of traditional image compression through the network's high parallelism and distributed information storage, thereby raising the compression ratio of image compression coding and shortening the compression time.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
FIG. 1 is a flowchart illustrating a method for image compression encoding according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of image segmentation in accordance with the present invention;
FIG. 3 is a schematic diagram of BP neural network image compression coding;
FIG. 4 is a diagram of a typical three-layer BP neural network architecture;
FIG. 5 is a schematic flow chart of the BP neural network for simulation training according to the present invention;
fig. 6 is a system block diagram of an image compression encoding system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings.
While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art so that they can be readily implemented by those skilled in the art. As can be readily understood by those skilled in the art to which the present invention pertains, the embodiments to be described later may be modified into various forms without departing from the concept and scope of the present invention. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" include plural forms as well, unless the contrary is expressly stated. The term "comprising" as used in the specification embodies particular features, regions, constants, steps, actions, elements and/or components and does not exclude the presence or addition of other particular features, regions, constants, steps, actions, elements, components and/or groups.
All terms including technical and scientific terms used hereinafter have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms defined in dictionaries are to be interpreted as meanings complied with in the relevant technical documents and the present disclosure, and cannot be interpreted as having a very formal meaning without definition.
The invention combines the BP neural network with image compression to form a novel image compression coding method and system. It improves the quality of image reconstruction by exploiting the nonlinear mapping capability, good fault tolerance and associative memory function of the BP neural network, addresses the long compression time and low compression ratio of traditional image compression by exploiting the network's high parallelism and distributed information storage, and opens the possibility of real-time compression.
Embodiment 1, as shown in fig. 1, an image compression encoding method according to an embodiment of the present invention includes:
and S1, extracting image pixels from the sample picture, and performing image blocking processing to obtain the column vector of the straightened gray matrix of the sub-image block.
The compression coding method targets digital images. A digital image is composed of many small pixel blocks, each with a grayscale value in the range 0 to 255. The sample image, i.e. the image to be compressed, is loaded into MATLAB; the grayscale values of the image appear in the software's workspace as a matrix, which constitutes the extraction of the image pixels. Each picture must be extracted separately.
If many samples are fed into the network at once, the corresponding network scale becomes large, which directly slows network training and ultimately lengthens the image compression time. The scale of the input original image therefore needs to be controlled by partitioning it into blocks. This can be done with the block_divide() function in MATLAB software.
As shown in fig. 2, in the image blocking process, the sample image of Y × Y pixels is first divided into P × P sub-image blocks of n × n pixels, and the grayscale matrix of each n × n sub-image block is then straightened into an n² × 1 column vector, giving P × P column vectors of size n² × 1 in total.
Partitioning the image serves two purposes: first, it makes it easy to determine the number of input layer neurons in the BP neural network; second, it reduces the number of samples fed into the BP neural network at one time. Taking a 256 × 256 image as an example (the original image is a 256 × 256 matrix), the image is partitioned into 64 × 64 small blocks (small matrices) of 4 × 4 pixels. Each 4 × 4 small matrix is pulled into a 16 × 1 column vector (this can be done with MATLAB's reshape()), giving 64 × 64 column vectors of size 16 × 1 in total. These are recombined into a new matrix M, each 16 × 1 column vector forming one column of M, for 64 × 64 columns in total. One column of the matrix at a time is then sent to the BP neural network for training, reducing the number of input samples.
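The blocking step above can be sketched in NumPy. The patent's own workflow uses MATLAB's block_divide() and reshape(); this Python block_divide is a hypothetical re-implementation for illustration only:

```python
import numpy as np

def block_divide(img, n):
    """Split a Y-by-Y image into P-by-P blocks of n-by-n pixels and
    straighten each block into one n^2-by-1 column of the matrix M."""
    Y = img.shape[0]
    P = Y // n
    cols = []
    for i in range(P):
        for j in range(P):
            block = img[i * n:(i + 1) * n, j * n:(j + 1) * n]
            cols.append(block.reshape(n * n, 1))  # straighten to n^2 x 1
    return np.hstack(cols)  # shape (n*n, P*P): one sub-block per column

img = np.arange(256 * 256, dtype=np.float64).reshape(256, 256)
M = block_divide(img, 4)
print(M.shape)  # (16, 4096): 64 * 64 columns of 16 x 1, as in the text
```

Each column of M would then be presented to the network one at a time, exactly as the paragraph above describes.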
And S2, establishing the BP neural network. S2 specifically includes:
s21, establishing the network structure of the BP neural network, including the number of neurons of the input layer, the hidden layer and the output layer.
The image compression ratio equals the number of input layer neurons divided by the number of hidden layer neurons; the number of input layer neurons equals the pixel count of a sub image block (i.e. n × n); and the number of output layer neurons equals the number of input layer neurons. The compression ratio and the number of hidden layer neurons therefore need to be fixed manually to determine the number of input layer neurons, and hence the value of n; the value of P then follows from Y = P × n, with Y determined by the original image. For example, with Y = 256 and a compression ratio of 8:1, if the hidden layer has 8 neurons, the input layer has 64 neurons, so n = 8; since 256 = 8 × 32, P = 32.
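This sizing arithmetic can be checked with a short sketch (the function and variable names here are illustrative, not from the patent):

```python
def bp_layer_sizes(Y, ratio, hidden):
    """Derive layer sizes and block geometry from the compression ratio
    and hidden-layer width: input = ratio * hidden, n = sqrt(input),
    P = Y / n, output = input."""
    inp = ratio * hidden            # compression ratio = input / hidden
    n = int(round(inp ** 0.5))      # sub-block is n x n, so n^2 = input
    assert n * n == inp, "input-layer size must be a perfect square"
    P = Y // n                      # Y = P * n
    assert P * n == Y, "image side must be divisible by n"
    return inp, hidden, inp, n, P   # (input, hidden, output, n, P)

print(bp_layer_sizes(256, 8, 8))   # (64, 8, 64, 8, 32) -- the text's example
```

The second example in the text (4:1 ratio, 4 hidden neurons, 16 inputs) follows from the same relations.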
S22, determining the number of network layers of the BP neural network.
The more hidden layers there are, the slower training becomes and the longer compression takes, so a reasonable, practical number of layers is needed. This parameter must be chosen according to the actual usage scenario and requirements. If the requirement on compression quality is high and the requirement on compression time is low, compression speed can be sacrificed to meet the quality target; if the requirement on compression time is high and that on quality is low, quality can be sacrificed for speed; if both requirements are high, several trials are needed to find the best parameters. Taking 64 × 64 sub-image blocks of 4 × 4 pixels as an example, a three-layer neural network is used with a compression ratio of 4:1: the hidden layer has 4 neurons, and the input and output layers each have 16 neurons.
S23, a transfer function and a training function are selected.
The use of different transfer functions, i.e. different mathematical models, gives neurons different information-processing characteristics; the neuron transfer function reflects the relationship between a neuron's input and output. Common transfer functions in artificial neural networks are: 1) threshold transfer functions, such as the bipolar threshold function; 2) nonlinear transfer functions, such as the bipolar sigmoid function; 3) piecewise transfer functions; 4) probabilistic transfer functions. This embodiment preferably uses the bipolar sigmoid function, which performs the nonlinear transformation and its inverse.
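A minimal sketch of the bipolar sigmoid transfer function and its derivative, which the backpropagation step later needs; the formulas are standard, and the function names are ours, not the patent's:

```python
import numpy as np

def bipolar_sigmoid(x):
    """Bipolar sigmoid: maps any real input into (-1, 1).
    In this parameterization it coincides with tanh(x)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def bipolar_sigmoid_deriv(x):
    """Derivative expressed through the output: f'(x) = 1 - f(x)^2."""
    y = bipolar_sigmoid(x)
    return 1.0 - y * y

print(bipolar_sigmoid(0.0))  # 0.0 -- symmetric about the origin
```

The output range (-1, 1), rather than (0, 1), is what makes the function "bipolar".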
Common training functions are the steepest descent method (trainlm), Newton's method (trainbfg), and the adaptive learning rate with momentum method (traingdx). At the same compression ratio, images compressed after training with the first two methods come out better than with the third: the reconstructed images are relatively clear, the compression effect is better, the reconstruction is close to the original with less distortion, and convergence is faster. This embodiment preferably adopts the steepest descent method.
And S24, confirming initial parameters of the BP neural network, including weight vectors of each layer, counter values, network training precision values and maximum training times values.
The values of the initial parameters are tied to the required image compression quality. The network training precision and the maximum number of training epochs both affect compression quality; these two parameters are the criteria the network uses to decide whether to terminate training. The weight vectors of each layer are initialized to random numbers; the training precision is set to a very small positive number (0.001 in this embodiment, chosen according to the required compression quality); the maximum number of training epochs, the other parameter that terminates training, is likewise determined by the quality requirement and should be somewhat large (500 in this embodiment).
And S3, respectively inputting each column vector after being partitioned into BP neural networks, and carrying out simulation training to obtain a group of output vectors.
As shown in fig. 3, the basic principle of BP-network image compression encoding is as follows: input samples are mapped from the input layer, through a smaller number of hidden layer neurons, to a set of output data. The hidden layer expresses the input vector effectively with fewer neurons and passes the result on. The input and output layers have the same number of neurons, so the whole network structure is symmetric. A typical three-layer BP neural network structure is shown in fig. 4. The linear or nonlinear transformation of data from the input layer to the hidden layer is called compression encoding; the inverse transformation from the hidden layer to the output layer is called image decompression.
As shown in fig. 5, S3 specifically includes:
and S31, setting initial parameters of the BP neural network, including weight vectors of each layer, counter values, network training precision values and maximum training times values.
And S32, inputting each column vector to be trained and an expected output vector in the BP neural network.
And S33, calculating the output vector value of each unit of the hidden layer and the output layer.
S34, a deviation between the desired output vector and the actual output vector is obtained.
And S35, judging whether the error meets the network training precision, if so, finishing the training, and if not, executing S36.
And S36, judging whether the maximum training frequency is reached, if so, ending the training, and if not, executing S37.
And S37, calculating error signals of each layer.
S38, adjusting the weight vector of each layer according to the error signal, and then returning to execute S33.
The desired output vector is related to the compression error of the image. The input to the network is matrix information, the final actual output is a matrix, and the desired output is also a matrix: the information of the original lossless, uncompressed picture. However, no network algorithm can solve the problem with 100% accuracy, so an error tolerance must be set to bound the final output.
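The training loop S31 through S38 can be sketched for a three-layer network as follows. This is a toy sketch with random data and plain gradient descent, not the patent's MATLAB implementation; all names and hyperparameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def tansig(x):                               # bipolar sigmoid transfer function
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

n_in, n_hid = 16, 4                          # 4:1 compression, as in the text
W1 = 0.1 * rng.standard_normal((n_hid, n_in))   # S31: random initial weights
W2 = 0.1 * rng.standard_normal((n_in, n_hid))
goal, max_epochs, lr = 1e-3, 500, 0.01       # S31: precision and max epochs

X = rng.uniform(-0.5, 0.5, (n_in, 64))       # S32: column vectors in;
m = X.shape[1]                               #      desired output = input
first_mse = None
for epoch in range(max_epochs):              # S36: stop at max epoch count
    H = tansig(W1 @ X)                       # S33: hidden-layer outputs
    O = tansig(W2 @ H)                       # S33: output-layer outputs
    E = X - O                                # S34: desired minus actual
    mse = float(np.mean(E ** 2))
    if first_mse is None:
        first_mse = mse
    if mse < goal:                           # S35: training-goal test
        break
    dO = E * (1.0 - O ** 2)                  # S37: output-layer error signal
    dH = (W2.T @ dO) * (1.0 - H ** 2)        # S37: hidden-layer error signal
    W2 += lr * (dO @ H.T) / m                # S38: adjust weights,
    W1 += lr * (dH @ X.T) / m                #      then loop back to S33

print(first_mse, mse)
```

After training, H holds the compressed representation (4 values per 16-pixel block) and O the reconstruction, matching the encode/decode split described around fig. 3.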
And S4, carrying out image reconstruction on the group of output vectors of the BP neural network, and restoring the group of output vectors into a target image.
This last step is the inverse of the image-blocking process; its purpose is to restore the output to the target image, i.e. the reconstructed image that is ultimately wanted. After all column vectors of the image have been simulated, a set of output vectors is obtained: the pixel data of the compressed and reconstructed image. Restoring this data to a matrix of the original image's size yields the image reconstructed from the compressed data; the output vectors of the hidden layer are the compressed image data.
S4 specifically includes: transforming each output vector from an n² × 1 column vector back into an n × n grayscale matrix, assembling the P × P sub-image blocks of n × n pixels into a P × P block array, and finally restoring the result to an image of Y × Y pixels. For example, each 16 × 1 column vector is reshaped into a 4 × 4 matrix, the 4096 blocks of 4 × 4 pixels are arranged into a 64 × 64 block array, and the image filled from this cell matrix is restored to a full 256 × 256 image.
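The inverse reshaping in S4 can be sketched as follows (a NumPy sketch; block_merge is our illustrative name, and the round trip assumes the same row-major block order as the blocking step):

```python
import numpy as np

def block_merge(M, Y, n):
    """Inverse of the blocking step: reshape each n^2-by-1 column of M
    back into an n-by-n block and tile the P-by-P grid of blocks into
    a Y-by-Y image."""
    P = Y // n
    img = np.empty((Y, Y), dtype=M.dtype)
    for k in range(P * P):
        i, j = divmod(k, P)                  # row-major block order
        img[i * n:(i + 1) * n, j * n:(j + 1) * n] = M[:, k].reshape(n, n)
    return img

# Round trip: block a 256 x 256 image into 16 x 1 columns, then merge back.
img = np.arange(256 * 256, dtype=np.float64).reshape(256, 256)
cols = np.hstack([img[i * 4:(i + 1) * 4, j * 4:(j + 1) * 4].reshape(16, 1)
                  for i in range(64) for j in range(64)])
restored = block_merge(cols, 256, 4)
print(np.array_equal(restored, img))  # True: blocking is fully invertible
```

With a trained network, the columns fed to block_merge would be the network's output vectors rather than the original columns, so the restored image approximates rather than equals the original.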
Embodiment 2, an image compression encoding system according to an embodiment of the present invention is shown in fig. 6, and includes: the system comprises an image preprocessing module, a network establishing module, a network training module and an image reconstruction module.
The image preprocessing module is used for extracting image pixels from the sample image, carrying out image blocking processing and obtaining the column vector of the straightened gray matrix of the sub-image block.
When extracting image pixels, the image preprocessing module loads the sample image, i.e. the image to be compressed, into MATLAB; the grayscale values of the image appear in the workspace as a matrix, completing the extraction of the image pixels. Each picture must be extracted separately. When the image preprocessing module blocks the image, the sample image of Y × Y pixels is first divided into P × P sub-image blocks of n × n pixels, and the grayscale matrix of each n × n sub-image block is then straightened into an n² × 1 column vector, giving P × P column vectors of size n² × 1 in total.
The network establishing module is used for establishing the BP neural network. Specifically, the network establishment module is configured to perform the following process:
establishing a network structure of the BP neural network, wherein the network structure comprises the number of neurons of an input layer, a hidden layer and an output layer; determining the number of network layers of the BP neural network; selecting a transfer function and a training function; and confirming initial parameters of the BP neural network, including weight vectors of all layers, counter values, network training precision values and maximum training times values.
The image compression ratio equals the number of input layer neurons divided by the number of hidden layer neurons; the number of input layer neurons equals the pixel count of a sub image block (i.e. n × n); and the number of output layer neurons equals the number of input layer neurons.
The network training module is used for inputting each blocked column vector into the BP neural network and carrying out simulation training to obtain a group of output vectors. Specifically, the network training module is configured to perform the following process:
setting initial parameters of the BP neural network, wherein the initial parameters comprise weight vectors of all layers, counter values, network training precision values and maximum training times values;
inputting each column vector to be trained and an expected output vector in a BP neural network;
calculating output vector values of all units of the hidden layer and the output layer;
calculating the deviation between the expected output vector and the actual output vector;
judging whether the error meets the network training precision, if so, finishing the training, and if not, executing the next step;
judging whether the maximum training times is reached, if so, finishing the training, and if not, executing the next step;
calculating error signals of each layer;
and adjusting the weight vector of each layer according to the error signal, and returning to execute the calculation of the output vector value of each unit of the hidden layer and the output layer.
The image reconstruction module is used for reconstructing a group of output vectors of the BP neural network into a target image.
In particular, the image reconstruction module is configured to transform each output vector from an n² × 1 column vector back into an n × n grayscale matrix, assemble the P × P sub-image blocks of n × n pixels into a P × P block array, and finally restore the result to an image of Y × Y pixels.
Thus, it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been illustrated and described in detail herein, many other variations or modifications consistent with the principles of the invention may be directly determined or derived from the disclosure of the present invention without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.

Claims (10)

1. An image compression encoding method, comprising:
s1, extracting image pixels from the sample picture, and performing image blocking processing to obtain column vectors of straightened sub-image block gray matrix;
s2, establishing a BP neural network;
s3, inputting each column vector after being blocked into a BP neural network respectively, and carrying out simulation training to obtain a group of output vectors;
and S4, carrying out image reconstruction on the group of output vectors of the BP neural network, and restoring the group of output vectors into a target image.
2. The image compression encoding method of claim 1, wherein the image blocking in S1 first divides the sample image of Y×Y pixels into P×P sub-image blocks of n×n pixels, and then straightens the gray matrix of each n×n sub-image block into an n²×1 column vector, yielding a total of P×P column vectors of size n²×1.
3. The image compression encoding method of claim 1, wherein S2 specifically includes:
S21, establishing the network structure of the BP neural network, including the numbers of neurons in the input layer, hidden layer, and output layer;
S22, determining the number of network layers of the BP neural network;
S23, selecting a transfer function and a training function;
and S24, confirming the initial parameters of the BP neural network, including the weight vectors of each layer, the counter value, the network training precision value, and the maximum number of training iterations.
4. The image compression encoding method as claimed in claim 3, wherein the image compression ratio is the number of input-layer neurons divided by the number of hidden-layer neurons, the number of input-layer neurons equals the number of pixels in a sub-image block, and the number of output-layer neurons equals the number of input-layer neurons.
5. A method of image compression encoding as claimed in claim 3, wherein the transfer function employs a non-linear transfer function.
6. An image compression encoding method as claimed in claim 5, wherein the non-linear transfer function selects a bipolar sigmoid function.
7. A method as claimed in claim 3, wherein the training function employs the steepest descent method or Newton's method.
8. The image compression encoding method of claim 1, wherein S3 specifically includes:
S31, setting the initial parameters of the BP neural network, including the weight vectors of each layer, the counter value, the network training precision value, and the maximum number of training iterations;
S32, inputting into the BP neural network each column vector to be trained together with its expected output vector;
S33, computing the output vector values of the hidden-layer and output-layer units;
S34, calculating the deviation between the expected output vector and the actual output vector;
S35, judging whether the error meets the network training precision; if so, ending the training, and if not, executing S36;
S36, judging whether the maximum number of training iterations has been reached; if so, ending the training, and if not, executing S37;
S37, calculating the error signals of each layer;
and S38, adjusting the weight vectors of each layer according to the error signals, and then returning to S33.
9. The image compression encoding method of claim 1, wherein S4 specifically includes: transforming each output vector from an n²×1 column vector back into an n×n gray matrix, assembling the P×P sub-image blocks of n×n pixels into a single image, and finally restoring that image into the Y×Y-pixel target image.
10. An image compression encoding system, comprising:
the image preprocessing module is used for extracting image pixels from a sample picture and performing image blocking to obtain column vectors formed by straightening the gray matrix of each sub-image block;
a network establishing module for establishing a BP neural network;
the network training module is used for inputting each blocked column vector into the BP neural network and performing simulation training to obtain a group of output vectors;
and the image reconstruction module is used for reconstructing the group of output vectors of the BP neural network into a target image.
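The blocking and straightening transform of claim 2 and its inverse in claim 9 can be sketched as a simple round trip. This is a minimal illustration in Python with NumPy; the function names and the row-major straightening order are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def block_to_vectors(img, n):
    """Divide a Y×Y image into P×P sub-blocks of n×n pixels and straighten
    each block's gray matrix into an n²×1 column vector (claim 2)."""
    Y = img.shape[0]
    P = Y // n
    vectors = []
    for i in range(P):
        for j in range(P):
            block = img[i * n:(i + 1) * n, j * n:(j + 1) * n]
            vectors.append(block.reshape(n * n, 1))  # straighten to n²×1
    return vectors  # P*P column vectors of shape (n², 1)

def vectors_to_image(vectors, n, Y):
    """Inverse transform (claim 9): reshape each n²×1 vector back into an
    n×n gray matrix and tile the P×P blocks into a Y×Y image."""
    P = Y // n
    img = np.zeros((Y, Y))
    for idx, v in enumerate(vectors):
        i, j = divmod(idx, P)  # same block ordering as block_to_vectors
        img[i * n:(i + 1) * n, j * n:(j + 1) * n] = v.reshape(n, n)
    return img
```

Applied back to back (with no quantization of the network outputs), the two functions reconstruct the original image exactly, which is why claim 9 can describe reconstruction as the step-by-step inverse of claim 2.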
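The training loop of claims 3-8 can likewise be sketched as a single-hidden-layer BP network trained by steepest descent. Since claim 4 makes the output layer the same size as the input layer and S32 supplies an expected output per input vector, the sketch below assumes an autoencoder-style setup (expected output = input); tanh stands in for the bipolar sigmoid of claim 6, and the learning rate, weight initialization, and normalization of inputs into tanh's (-1, 1) range are illustrative assumptions not fixed by the patent.

```python
import numpy as np

def train_bp_compressor(vectors, hidden, lr=0.05, precision=1e-3,
                        max_epochs=5000, seed=0):
    """Sketch of S31-S38: input and output layers have n² neurons each,
    so the compression ratio of claim 4 is n²/hidden."""
    rng = np.random.default_rng(seed)
    X = np.hstack(vectors)                 # (n², P*P): one column per training vector
    d, m = X.shape
    W1 = rng.normal(0.0, 0.1, (hidden, d))  # input -> hidden weights (S31)
    W2 = rng.normal(0.0, 0.1, (d, hidden))  # hidden -> output weights
    mse = np.inf
    for epoch in range(max_epochs):        # maximum-training-iterations check (S36)
        H = np.tanh(W1 @ X)                # hidden-layer outputs (S33)
        Yh = np.tanh(W2 @ H)               # output-layer outputs
        E = X - Yh                         # deviation from expected output (S34)
        mse = np.mean(E ** 2)
        if mse < precision:                # network-training-precision check (S35)
            break
        delta2 = E * (1.0 - Yh ** 2)       # output-layer error signal (S37)
        delta1 = (W2.T @ delta2) * (1.0 - H ** 2)  # hidden-layer error signal
        W2 += lr * delta2 @ H.T / m        # weight-vector adjustment (S38)
        W1 += lr * delta1 @ X.T / m        # ...then loop back to S33
    return W1, W2, mse
```

After training, `np.tanh(W1 @ x)` gives the compressed (hidden-layer) representation of a column vector `x`, and applying `W2` with tanh recovers the approximation that S4 reassembles into the target image.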
CN201911093067.3A 2019-11-11 2019-11-11 Image compression coding method and system Pending CN110956669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911093067.3A CN110956669A (en) 2019-11-11 2019-11-11 Image compression coding method and system


Publications (1)

Publication Number Publication Date
CN110956669A (en) 2020-04-03

Family

ID=69977108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911093067.3A Pending CN110956669A (en) 2019-11-11 2019-11-11 Image compression coding method and system

Country Status (1)

Country Link
CN (1) CN110956669A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001885A (en) * 2020-07-14 2020-11-27 武汉理工大学 PCB fault detection method, storage medium and system based on machine vision
CN114782565A (en) * 2022-06-22 2022-07-22 武汉搜优数字科技有限公司 Digital archive image compression, storage and recovery method based on neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106443453A (en) * 2016-07-04 2017-02-22 陈逸涵 Lithium battery SOC estimation method based on BP neural network
US20200051287A1 (en) * 2018-08-09 2020-02-13 Electronic Arts Inc. Texture Compression Using a Neural Network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Na, "Research on Digital Image Compression Methods Based on Artificial Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination