CN108062780A - Image compression method and apparatus - Google Patents

Image compression method and apparatus

Info

Publication number
CN108062780A
CN108062780A
Authority
CN
China
Prior art keywords
image
pending
model
parameter
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711477239.8A
Other languages
Chinese (zh)
Other versions
CN108062780B (en)
Inventor
翁仁亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201711477239.8A (granted as CN108062780B)
Publication of CN108062780A
Priority to US16/130,722 (published as US10896522B2)
Application granted
Publication of CN108062780B
Legal status: Active
Anticipated expiration: not yet determined

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals

Abstract

Embodiments of the present application disclose an image compression method and apparatus. One specific embodiment of the method includes: obtaining an image to be processed; and performing feature extraction on the image using the convolutional neural network corresponding to a trained image compression model, obtaining several feature maps, where the feature maps can be reconstructed into a reconstructed image by the deconvolution neural network corresponding to a preset image reconstruction model, and the difference between the reconstructed image and the image to be processed satisfies a preset condition. The embodiment compresses the image data volume while ensuring the restoration quality of the image.

Description

Image compression method and apparatus
Technical field
Embodiments of the present application relate to the field of computer technology, in particular to image processing, and more particularly to an image compression method and apparatus.
Background
Image compression refers to techniques for representing the pixel matrix of an original image with fewer bytes. Images are usually compressed when stored to save storage space, and decompressed back to the original image when used or displayed.
In surveillance scenarios, a large number of images containing facial feature information need to be captured and saved. When these images are compressed for storage, they must still provide enough information for identification and target tracking, so they occupy considerable storage space and place high demands on disks.
Summary
Embodiments of the present application propose an image compression method and apparatus.
In a first aspect, an embodiment of the present application provides an image compression method, including: obtaining an image to be processed; and performing feature extraction on the image using the convolutional neural network corresponding to a trained image compression model, obtaining several feature maps, where the feature maps can be reconstructed into a reconstructed image by the deconvolution neural network corresponding to a trained image reconstruction model, and the difference between the reconstructed image and the image to be processed satisfies a preset condition.
In some embodiments, the image compression model and the image reconstruction model are trained as follows: obtain sample images and perform a comparison step. The comparison step includes: inputting a sample image into the image compression model to output several sample feature maps; inputting the sample feature maps into the deconvolution neural network corresponding to the image reconstruction model for image reconstruction, obtaining a reconstructed image of the sample image; building a loss function based on the difference between the sample image and its reconstructed image; and judging whether the value of the loss function satisfies a preset convergence condition. If the judgment of the comparison step is no, the parameters of the convolutional neural network corresponding to the image compression model and/or the parameters of the deconvolution neural network corresponding to the image reconstruction model are updated by gradient descent based on the loss function, and the comparison step is performed again with the updated parameters. If the judgment of the comparison step is yes, the parameters of both networks are output.
In some embodiments, the image compression method further includes: storing the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model.
In some embodiments, the image compression method further includes: storing the several feature maps as the compression result of the image to be processed.
In some embodiments, each feature map is composed of at least one pixel whose gray value is a floating-point number; and the image compression method further includes: calculating the maximum and minimum gray values of the pixels in each feature map of the image to be processed, and converting the gray values of the pixels in the feature map into character-type (byte) data based on the maximum and minimum.
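The min-max conversion just described can be sketched as follows. This is a hypothetical implementation: the patent specifies only that the conversion uses the per-map maximum and minimum, not the exact byte encoding, so the 0..255 mapping below is an assumption.

```python
import numpy as np

def quantize_feature_map(fmap):
    # Convert floating-point gray values to character-type (byte) data,
    # keeping the max and min so the floats can be approximately restored.
    lo, hi = float(fmap.min()), float(fmap.max())
    scale = (hi - lo) or 1.0  # avoid division by zero for a constant map
    q = np.round((fmap - lo) / scale * 255.0).astype(np.uint8)
    return q, lo, hi

def dequantize_feature_map(q, lo, hi):
    # Inverse mapping, applied when the stored image is read back.
    return q.astype(np.float32) / 255.0 * (hi - lo) + lo
```

Each float shrinks from 4 bytes to 1 byte; as the text notes, the per-map maximum and minimum must be stored alongside the byte data so the restoration is possible.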
In some embodiments, the image compression method further includes: storing the character-type data corresponding to the gray values of the pixels in the feature maps of the image to be processed as the image compression result, together with the maximum and minimum gray values of the pixels in each feature map.
In a second aspect, an embodiment of the present application provides an image compression apparatus, including: an acquisition unit for obtaining an image to be processed; and a compression unit for performing feature extraction on the image using the convolutional neural network corresponding to a trained image compression model, obtaining several feature maps, where the feature maps can be reconstructed into a reconstructed image by the deconvolution neural network corresponding to a trained image reconstruction model, and the difference between the reconstructed image and the image to be processed satisfies a preset condition.
In some embodiments, the image compression model and the image reconstruction model are trained as follows: obtain sample images and perform a comparison step. The comparison step includes: inputting a sample image into the image compression model to output several sample feature maps; inputting the sample feature maps into the deconvolution neural network corresponding to the image reconstruction model for image reconstruction, obtaining a reconstructed image of the sample image; building a loss function based on the difference between the sample image and its reconstructed image; and judging whether the value of the loss function satisfies a preset convergence condition. If the judgment of the comparison step is no, the parameters of the convolutional neural network corresponding to the image compression model and/or the parameters of the deconvolution neural network corresponding to the image reconstruction model are updated by gradient descent based on the loss function, and the comparison step is performed again with the updated parameters. If the judgment of the comparison step is yes, the parameters of both networks are output.
In some embodiments, the image compression apparatus further includes: a first storage unit for storing the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model.
In some embodiments, the image compression apparatus further includes: a second storage unit for storing the several feature maps as the compression result of the image to be processed.
In some embodiments, each feature map is composed of at least one pixel whose gray value is a floating-point number; and the image compression apparatus further includes a conversion unit for: calculating the maximum and minimum gray values of the pixels in each feature map of the image to be processed, and converting the gray values of the pixels in the feature map into character-type (byte) data based on the maximum and minimum.
In some embodiments, the image compression apparatus further includes: a third storage unit for storing the character-type data corresponding to the gray values of the pixels in the feature maps of the image to be processed as the image compression result, and storing the maximum and minimum gray values of the pixels in each feature map.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image compression method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image compression method provided in the first aspect.
The image compression method and apparatus of the above embodiments obtain an image to be processed and perform feature extraction on it using the convolutional neural network corresponding to a trained image compression model, obtaining several feature maps, where the feature maps can be reconstructed into a reconstructed image by the deconvolution neural network corresponding to a preset image reconstruction model, and the difference between the reconstructed image and the image to be processed satisfies a preset condition. This achieves substantial compression of the image data volume while ensuring the restoration quality of the image.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the image compression method according to the present application;
Fig. 3 is a schematic diagram of the principle of the image compression method according to the present application;
Fig. 4 is a flowchart of one embodiment of the training method of the image compression model and the image reconstruction model;
Fig. 5 is a flowchart of another embodiment of the image compression method according to the present application;
Fig. 6 is a structural diagram of an image compression apparatus according to an embodiment of the present application;
Fig. 7 is a structural diagram of a computer system suitable for implementing a server of an embodiment of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the image compression method or image compression apparatus of the present application may be applied.
As shown in Fig. 1, the system architecture 100 can include image capture devices 101 and 102, a terminal device 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the image capture devices 101, 102 and the server 105, and between the terminal device 103 and the server 105. The network 104 can include various connection types, such as wired or wireless communication links or fiber-optic cables.
The image capture devices 101 and 102 can interact with the server 105 through the network to receive or send data. They can be devices for capturing facial images, such as a camera in a surveillance scene, or a mobile electronic device with a camera function such as a mobile phone or tablet computer. The image capture devices 101 and 102 can have network interfaces through which they can receive image acquisition requests sent by the server 105 and upload data to the server 105 for storage or processing.
A user 110 can interact with the server 105 through the terminal device 103 to receive or send messages. The terminal device 103 can be any of various electronic devices with a user interface, including but not limited to a smartphone, a tablet computer or a personal computer.
The server 105 can be a server providing various services, for example storing and analyzing the images uploaded by the image capture devices 101 and 102, and responding to the image acquisition requests sent by the terminal device 103. After the image capture devices 101 and 102 upload images, the server 105 can compress the uploaded images and store the encoded images in a corresponding storage medium. The user 110 can send a request to the server 105 through the terminal device 103 to read an image or obtain an image processing result; the server 105 can parse the request, look up the corresponding image or process it accordingly, and send the search or processing result to the terminal device 103.
It should be noted that the image compression method provided by the embodiments of the present application is generally performed by the server 105; accordingly, the image compression apparatus is generally disposed in the server 105.
It should be understood that the numbers of image capture devices, terminal devices, networks and servers in Fig. 1 are merely illustrative. There can be any number of image capture devices, terminal devices, networks and servers according to implementation needs. For example, the server can be a server cluster comprising multiple servers deployed with different processes.
With continued reference to Fig. 2, it illustrates a flow 200 of one embodiment of the image compression method according to the present application. The image compression method comprises the following steps:
Step 201: obtain an image to be processed.
In this embodiment, the electronic device on which the image compression method runs (for example, the server shown in Fig. 1) can obtain the image to be processed. The image to be processed can be an image captured by an image capture device and awaiting compressed storage.
The image to be processed may, for example, be an image from a video sequence captured in a surveillance scene. Generally, a surveillance camera needs to continuously capture images of the monitored scene. These images can include image information related to pedestrians and places, for example facial images. The number of images collected in a surveillance scene is large, and the images contain rich image information; storing them directly would occupy considerable storage space.
In this embodiment, the electronic device can be connected to the image capture device to obtain the captured images. The electronic device can receive the image to be processed uploaded by the image capture device in response to an image compression/storage request sent by the capture device, or, in response to a user's operation instruction, send a request to the image capture device to obtain the image to be processed and receive the image sent back.
Step 202: perform feature extraction on the image to be processed using the convolutional neural network corresponding to the trained image compression model, obtaining several feature maps.
In this embodiment, the image to be processed can be input into the trained image compression model for processing. The trained image compression model is built on a convolutional neural network, which can include at least one convolutional layer and, optionally, at least one downsampling layer. A convolutional layer contains convolution kernels; after the convolution operation with the kernels, redundant image information is removed from the input image and an image containing feature information is output. If the kernel size is larger than 1 × 1, the convolutional layer can output several feature maps smaller than the input image. The output of a convolutional layer can be further reduced in size by downsampling in a downsampling layer (also called a "pooling layer"). After processing by multiple convolutional layers, the size of the image input to the convolutional neural network has been shrunk in several stages, yielding several feature maps smaller than the input image.
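As a rough single-channel illustration of how a strided convolution shrinks an image into a smaller feature map (a toy sketch, not the patent's actual network; the kernel and stride values are assumptions):

```python
import numpy as np

def conv2d(image, kernel, stride=2):
    # Valid convolution with a stride > 1: each output pixel summarizes a
    # kernel-sized patch, so the output feature map is smaller than the input.
    kh, kw = kernel.shape
    h, w = image.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.empty((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = float(np.sum(patch * kernel))
    return out
```

Stacking several such layers (with learned kernels, plus optional pooling) shrinks an M × N input in stages, as the paragraph above describes.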
Feature extraction can be performed on the image to be processed using the convolutional neural network corresponding to the image compression model; after processing by the multiple convolutional layers of the network, several feature maps of the image are generated. These feature maps can serve as the compression result of the image to be processed. Here, the feature maps can be reconstructed into a reconstructed image by the deconvolution neural network corresponding to the trained image reconstruction model, and the difference between the reconstructed image and the image to be processed satisfies a preset condition.
The several feature maps obtained after inputting the image into the image compression model can be reconstructed using the deconvolution neural network corresponding to the image reconstruction model. The deconvolution neural network can include at least one deconvolution layer, each also containing convolution kernels with which a deconvolution operation is performed on the feature maps, enlarging the size of the input to the deconvolution layer. Specifically, the image can first be expanded with blank pixels (value 0 in the image matrix), and the expanded image is then convolved with the kernels of the deconvolution layer to obtain the pixel gray information of the blank regions. After processing by the at least one deconvolution layer of the image reconstruction model, the several feature maps are reconstructed into one reconstructed image. The reconstructed image has the same size as the image to be processed. Moreover, the image compression model and the image reconstruction model are trained by machine learning so that the difference between the reconstructed image and the image to be processed satisfies a preset condition; for example, the difference can be required to be smaller than a preset threshold.
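The "expand with blank pixels, then convolve" procedure above can be sketched as follows. Again this is a toy single-channel version; the zero-insertion stride, full padding and kernel values are illustrative assumptions, while the real network uses learned kernels:

```python
import numpy as np

def deconv2d(fmap, kernel, stride=2):
    # Step 1: expand the feature map with blank (zero) pixels.
    h, w = fmap.shape
    up = np.zeros((h * stride, w * stride), dtype=np.float32)
    up[::stride, ::stride] = fmap
    # Step 2: pad and convolve so the kernel fills in the blank regions.
    pad = kernel.shape[0] - 1
    up = np.pad(up, pad)
    kh, kw = kernel.shape
    out = np.empty((up.shape[0] - kh + 1, up.shape[1] - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = float(np.sum(up[i:i + kh, j:j + kw] * kernel))
    return out
```

Each such layer enlarges its input; stacking enough of them restores the original M × N size.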
In this way, after the image has been compressed into several feature maps, when the image needs to be read, the feature maps can be reconstructed with the trained image reconstruction model; the resulting reconstructed image is the restored image to be processed.
The image compression model and the image reconstruction model can also be trained on labeled samples. Specifically, sample images and the corresponding sample feature map sets can be labeled; a sample image is taken as the input of the image compression model, the predicted compression result is compared with the corresponding sample feature map set, and the parameters of the image compression model are adjusted accordingly. Likewise, a sample feature map can be taken as the input of the image reconstruction model, the predicted reconstruction is compared with the corresponding sample image, and the parameters of the image reconstruction model are adjusted.
Referring to Fig. 3, it illustrates a schematic diagram of the principle of the image compression method according to the present application. As shown in Fig. 3, after the image to be processed I is input into the image compression model, several feature maps f1, f2, ..., fk are obtained after multiple layers of convolution, where k is a positive integer. After the feature maps f1, f2, ..., fk pass through multiple layers of deconvolution in the image reconstruction model, the reconstructed image J is restored. The size of each feature map f1, f2, ... or fk is much smaller than the size of the image to be processed I; assuming the size of I is M × N and the size of each feature map is m × n, then k × m × n is much smaller than M × N.
For example, in an actual scene, the image to be processed can be compressed with the image compression model to obtain 256 feature maps of size 1 × 1. If each feature value in the feature maps is represented as a single-precision floating-point number, the compressed data volume is 256 × 4 bytes = 1024 bytes, i.e. 1024 bytes of space are needed to store the feature maps after compression.
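The byte count in this example can be checked directly. The original image size below is an assumption (the text does not state it); a 512 × 512 single-channel image at one byte per pixel is used purely to show the resulting ratio:

```python
num_maps, bytes_per_float = 256, 4
compressed_bytes = num_maps * bytes_per_float  # 256 x 4 = 1024 bytes, as in the text
original_bytes = 512 * 512 * 1                 # assumed original image size
ratio = original_bytes / compressed_bytes      # 256x under these assumptions
print(compressed_bytes, ratio)
```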
The image compression method provided by the above embodiments obtains an image to be processed and performs feature extraction on it using the convolutional neural network corresponding to a trained image compression model, obtaining several feature maps, where the feature maps can be reconstructed into a reconstructed image by the deconvolution neural network corresponding to a preset image reconstruction model, and the difference between the reconstructed image and the image to be processed satisfies a preset condition. This compresses the image data volume while guaranteeing the restoration quality of the image.
In some optional implementations of this embodiment, the flow 200 of the image compression method can further include:
Step 203: store the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model.
In this embodiment, the parameters of the convolutional neural network corresponding to the image compression model can be stored so that, after subsequent images to be processed are obtained, they can be compressed with the image compression model.
The parameters of the deconvolution neural network corresponding to the image reconstruction model can also be stored, so that a stored image can be restored with the image reconstruction model when it is retrieved.
Step 204: store the several feature maps as the compression result of the image to be processed.
In this embodiment, the several feature maps obtained in step 202 can be stored in association with the identifier of the corresponding image to be processed, i.e. stored as its compression result. Specifically, the several feature maps obtained after inputting the image into the image compression model can form a feature map set; each image to be processed corresponds to one feature map set, and the identifier of the feature map set is configured according to the identifier of the corresponding image. The identifier can be set according to the capture scene, capture device, capture time of the image, or a combination of these. In this way, when the image to be processed is extracted, the corresponding feature map set can be extracted according to the identifier.
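A minimal sketch of such an identifier-keyed store follows. The key format is an assumption; the text says only that the identifier can combine capture scene, device and time:

```python
feature_store = {}

def store_features(scene, device_id, timestamp, feature_maps):
    # Build the feature-set identifier from the image's identifier fields.
    key = f"{scene}/{device_id}/{timestamp}"
    feature_store[key] = feature_maps
    return key

def load_features(scene, device_id, timestamp):
    # Extract the corresponding feature map set by identifier.
    return feature_store[f"{scene}/{device_id}/{timestamp}"]
```

In practice the store would be backed by disk or a database rather than an in-memory dict.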
Through steps 203 and 204 above, the storage of the compressed images, the image compression model and the image reconstruction model is realized. In this way, batches of images to be processed can be compressed with the same image compression model, and the batch of compressed images can be restored with the same image reconstruction model, which greatly reduces the data volume that needs to be stored in large-batch image storage scenarios such as surveillance video backup, thereby saving storage space.
In some optional implementations of this embodiment, the image compression model and the image reconstruction model can be trained independently of each other, as in the training process described above. In other optional implementations, the two models can be trained jointly through a series of associated steps.
Referring to Fig. 4 below, it illustrates a flowchart of one embodiment of the training method of the image compression model and the image reconstruction model, specifically a flow in which the image compression model and the image reconstruction model are trained interactively. As shown in Fig. 4, the flow 400 of the training method can include:
Step 401: obtain sample images.
In this embodiment, sample images for training can be obtained. A sample image can be a network image randomly selected from a network image database, or a real image captured in various scenes. Optionally, sample images can be determined according to the format and target scene type of the images to be processed; for example, if the images to be processed are surveillance images, the sample images can be extracted from existing surveillance scene images.
Step 402: perform the comparison step.
Specifically, the comparison step includes steps 4021, 4022, 4023 and 4024.
First, step 4021 is performed: the sample image is input into the convolutional neural network corresponding to the image compression model, which outputs a plurality of feature sample maps.
When the comparison step is performed for the first time, initial parameters of the convolutional neural network corresponding to the image compression model may be set so as to initialize the image compression model. When the comparison step is performed subsequently, the parameters of the convolutional neural network corresponding to the image compression model are the parameters as updated after the previous execution of the comparison step. Specifically, the parameters of the convolutional neural network corresponding to the image compression model may include the convolution kernel of each convolutional layer.
Inputting the sample image into the image compression model performs feature extraction on the sample image using the convolutional neural network, yielding a plurality of feature sample maps corresponding to the sample image, which serve as the image compression result of the sample image in this comparison. Here, a feature sample map is a feature map of the sample image.
Then, step 4022 is performed: the plurality of feature sample maps are input into the deconvolution neural network corresponding to the image reconstruction model for image reconstruction, yielding a reconstructed image of the sample image.
The plurality of feature sample maps (i.e. the feature maps of the sample image) may be input into the deconvolution neural network corresponding to the image reconstruction model, which performs deconvolution operations on them to obtain a single reconstructed image whose size is identical to that of the sample image.
In this embodiment, the deconvolution neural network corresponding to the image reconstruction model includes at least one deconvolution layer, each deconvolution layer including a convolution kernel. The parameters of the deconvolution neural network are the convolution kernels of the deconvolution layers, and they may be determined in the same way as the parameters of the image compression model: when the comparison step is performed for the first time, initial parameters of the deconvolution neural network corresponding to the image reconstruction model may be set so as to initialize the image reconstruction model; when the comparison step is performed subsequently, the parameters of the deconvolution neural network corresponding to the image reconstruction model may be determined to be the parameters as updated after the previous execution of the comparison step.
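The compress-then-reconstruct pair of steps 4021 and 4022 can be sketched numerically. The following is a minimal illustration, not the patented implementation: a single strided convolution stands in for the compression model's convolutional neural network, and a matching transposed (de)convolution stands in for the reconstruction model's deconvolution neural network. The 4 x 4 input, 2 x 2 kernels and stride of 2 are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel, stride):
    """Strided valid convolution: the compression direction."""
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def deconv2d(fmap, kernel, stride, out_shape):
    """Transposed convolution: scatters each feature value back
    through the kernel, recovering the original spatial size."""
    out = np.zeros(out_shape)
    kh, kw = kernel.shape
    for i in range(fmap.shape[0]):
        for j in range(fmap.shape[1]):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += fmap[i, j] * kernel
    return out

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "sample image"
k = np.full((2, 2), 0.25)                        # 2x2 averaging kernel
feat = conv2d(img, k, stride=2)                  # 2x2 feature map: 4x smaller
recon = deconv2d(feat, np.ones((2, 2)), stride=2, out_shape=img.shape)
print(feat.shape, recon.shape)                   # (2, 2) (4, 4)
```

The feature map is a quarter the size of the input, which is the sense in which the convolutional network compresses; the transposed convolution restores the original size, approximating (here, block-averaging) the original content.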
Then, step 4023 is performed: a loss function is constructed based on the difference between the sample image and the reconstructed image of the sample image.
A loss function may be used to indicate the difference between the prediction result of a neural network and the actual value, that is, to characterize the accuracy of the prediction result of the neural network. In this embodiment, the loss function may be constructed from, for each sample image, the difference between the sample image and the reconstructed image obtained after the sample image passes through the image compression model and the image reconstruction model. Specifically, the loss function L can be constructed as:

L = Σ_i ||I_i − J_i||_2  (1)

where I_i is the matrix representation of the i-th sample image, J_i is the matrix representation of the reconstructed image corresponding to the i-th sample image, and ||I_i − J_i||_2 is the two-norm of I_i − J_i.

As can be seen from formula (1), the loss function may be an accumulation of the differences between each sample image and its corresponding reconstructed image. Here, J_i is obtained by passing I_i successively through the convolutional neural network corresponding to the image compression model and the deconvolution neural network corresponding to the image reconstruction model, so deriving J_i makes use of the parameters of both networks. The loss function is therefore constructed from the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model.
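Formula (1) can be evaluated directly once the sample images and their reconstructions are available as matrices. The sketch below is illustrative only; the 2 x 2 matrices stand in for I_i and J_i and are made up for the example.

```python
import numpy as np

def reconstruction_loss(samples, recons):
    """Formula (1): sum over sample images of the two-norm
    ||I_i - J_i||_2, treating each image matrix as a flat vector."""
    return sum(np.linalg.norm((I - J).ravel())
               for I, J in zip(samples, recons))

I1 = np.array([[1.0, 2.0], [3.0, 4.0]])
J1 = np.array([[1.0, 2.0], [3.0, 1.0]])   # differs by 3 in one pixel
I2 = np.array([[0.0, 0.0], [0.0, 0.0]])
J2 = np.array([[0.0, 4.0], [3.0, 0.0]])   # differs by a (3, 4) pair
loss = reconstruction_loss([I1, I2], [J1, J2])
print(loss)   # 3.0 + 5.0 = 8.0
```

A perfect reconstruction drives every term, and hence the whole sum, to zero; training seeks parameters that push this value down.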
Afterwards, step 4024 is performed: it is judged whether the value of the loss function satisfies a preset convergence condition.
The sample image and the reconstructed image obtained in step 4022 may be substituted into formula (1) to compute the current value of the loss function, and it is then judged whether this value satisfies the preset convergence condition. The preset convergence condition may be that the value of the loss function reaches a preset data interval, or that the differences among the loss-function values of the most recent t comparison steps (t being a positive integer not less than 1) are smaller than a preset threshold.
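The second form of the preset convergence condition, namely the loss values of the most recent t comparison steps differing by less than a preset threshold, can be checked as in the sketch below. The loss history, t and the threshold are illustrative assumptions.

```python
def has_converged(loss_history, t, eps):
    """True when the spread of the last t loss values is below eps,
    i.e. the loss has effectively stopped changing."""
    if len(loss_history) < t:
        return False          # not enough comparison steps yet
    window = loss_history[-t:]
    return max(window) - min(window) < eps

history = [10.0, 4.0, 1.5, 1.41, 1.40, 1.402]
print(has_converged(history, t=3, eps=0.05))   # True: last 3 within 0.05
print(has_converged(history, t=5, eps=0.05))   # False: still falling at step 2
```

Using a window rather than a single value guards against stopping on one accidentally small loss reading.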
If the judging result of the comparison step 402 is negative, i.e. the value of the loss function does not satisfy the preset convergence condition, step 403 is performed: based on the loss function, the parameters of the convolutional neural network corresponding to the image compression model and/or the parameters of the deconvolution neural network corresponding to the image reconstruction model are updated using the gradient descent method, and the comparison step 402 is then performed again based on the updated parameters.
The loss function described above is a function of the parameters of the convolutional neural network corresponding to the image compression model and of the parameters of the deconvolution neural network corresponding to the image reconstruction model. In this embodiment, the gradient descent method may be employed to update these parameters so that the difference between each sample image and the corresponding reconstructed image obtained after the update decreases. By iterating the comparison step 402 and the parameter update step 403, the value of the loss function is gradually reduced, that is, the error between each sample image and its corresponding reconstructed image is gradually reduced.
Specifically, the gradients of the loss function with respect to the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model may be computed; the update amount of each parameter is then determined according to a preset step factor, and the update amount is superimposed on the current parameter to obtain the updated parameter.
It should be noted that, each time step 403 is performed to update the parameters, it is possible to update only the parameters of the convolutional neural network corresponding to the image compression model, only the parameters of the deconvolution neural network corresponding to the image reconstruction model, or the parameters of both networks simultaneously.
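The update rule of step 403 (gradient of the loss with respect to each parameter, scaled by a preset step factor and subtracted from the current parameter) can be illustrated on a deliberately tiny stand-in model. The sketch below assumes one scalar parameter per network and estimates gradients numerically; a real implementation would backpropagate through every convolution kernel, but the update logic is the same.

```python
def numerical_grad(f, x, h=1e-6):
    """Central-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Toy stand-ins: "compression" scales the signal by w_enc,
# "reconstruction" scales it back by w_dec.
sample = 2.0
def loss(w_enc, w_dec):
    recon = sample * w_enc * w_dec       # encode then decode
    return (sample - recon) ** 2         # squared reconstruction error

w_enc, w_dec, lr = 0.2, 0.3, 0.05        # initial parameters, step factor
for _ in range(200):                     # iterate comparison + update steps
    g_enc = numerical_grad(lambda w: loss(w, w_dec), w_enc)
    g_dec = numerical_grad(lambda w: loss(w_enc, w), w_dec)
    w_enc -= lr * g_enc                  # update both models' parameters
    w_dec -= lr * g_dec
print(loss(w_enc, w_dec) < 1e-6)         # True: loss driven near zero
```

Note that both parameters are updated on each pass, matching the option in the passage above of updating the compression-side and reconstruction-side parameters simultaneously; zeroing either gradient line would correspond to updating only one model.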
If the judging result of the comparison step 402 is positive, i.e. the value of the loss function satisfies the preset convergence condition, step 404 is performed: the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model are output.
If the value of the loss function satisfies the preset convergence condition, updating of the parameters stops, and the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model are output, thereby obtaining the trained image compression model and the trained image reconstruction model.
Joint training of the image compression model and the image reconstruction model can be realized by the above flow 400, which helps simplify the training process and reduce the required sample size. At the same time, it helps strengthen the association between the image compression model and the image reconstruction model, reducing the information loss caused by image compression.
Continuing to refer to Fig. 5, which shows a flowchart of another embodiment of the image compression method according to the present application. As shown in Fig. 5, the flow 500 of the image compression method of this embodiment includes the following steps:
Step 501: a pending image is obtained.
In this embodiment, the electronic device on which the image compression method runs may obtain the pending image through a connection with an image capture device. The pending image may be an image to be compressed and then stored.
Step 502: feature extraction is performed on the pending image using the convolutional neural network corresponding to the trained image compression model, yielding a plurality of feature maps.
The plurality of feature maps can be reconstructed into a reconstructed image via the deconvolution neural network corresponding to the trained image reconstruction model, and the difference between the reconstructed image of the pending image and the pending image satisfies a preset condition.
In this embodiment, the pending image may be input into the trained image compression model. The image compression model may be constructed based on a convolutional neural network and may include at least one convolutional layer; after the input pending image passes through the operations of the at least one convolutional layer, a plurality of feature maps containing the feature information are generated.
The plurality of feature maps corresponds to the pending image, and the size of each feature map is smaller than the size of the pending image. The plurality of feature maps can be restored, using the image reconstruction model, to a reconstructed image of the same size as the pending image. Since the image compression model and the image reconstruction model are obtained through training with a machine learning method, the difference between each training sample image and its reconstructed image can be made to satisfy the preset condition; likewise, the difference between the reconstructed image of the pending image and the pending image can be made to satisfy the preset condition.
Step 503: the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model are stored.
In this embodiment, the image compression model and the image reconstruction model may be stored; specifically, the parameters of the corresponding convolutional neural network and the parameters of the corresponding deconvolution neural network may be stored, so that after a new pending image is subsequently obtained, it can be compressed using the image compression model, and so that, when a stored image is read, the stored plurality of feature maps can be restored, using the image reconstruction model, to a reconstructed image of the same size as the pending image.
Steps 501, 502 and 503 of this embodiment correspond respectively to steps 201, 202 and 203 of the previous embodiment, and details are not repeated here.
Step 504: the maximum and minimum of the gray values of the pixels in each feature map of the pending image are calculated, and the gray values of the pixels in the feature map are converted into character-type data based on the maximum and the minimum.

In this embodiment, a feature map may be composed of at least one pixel whose gray value is a floating-point number, and such feature maps can be compressed further. Specifically, the maximum and minimum of the gray values of the pixels in the plurality of feature maps corresponding to each pending image may be found, denoted f_max and f_min respectively. Optionally, if the size of a feature map is larger than 1 × 1, the feature map may be converted into a one-dimensional gray-value sequence; for example, a feature map comprising a 2 × 2 pixel matrix may be converted, in counter-clockwise order, into a sequence composed of the gray values of its four pixels. The maximum f_max and the minimum f_min are then found in each gray-value sequence corresponding to the pending image.

Afterwards, for the gray value f of any pixel in the feature map, the operation of formula (2) can be performed:

f_new = round((f − f_min) / (f_max − f_min) × 255)  (2)

where round is the rounding function and f_new is the character-type (char) data corresponding to the gray value f of the pixel. Each character after conversion occupies one byte, so the space occupied by the stored image is reduced compared with the floating-point numbers before conversion.
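Formula (2) is min-max scaling into a single byte. The sketch below applies it to an illustrative gray-value sequence; the feature-map values are made up for the example, and the 255 scale reflects the one-byte (char) storage described above.

```python
def quantize(gray_values, f_min, f_max):
    """Formula (2): f_new = round((f - f_min) / (f_max - f_min) * 255),
    mapping each floating-point gray value to one byte."""
    scale = f_max - f_min
    return bytes(round((f - f_min) / scale * 255) for f in gray_values)

feature_map = [0.12, 0.5, 0.93, 0.12]   # illustrative floating-point gray values
f_min, f_max = min(feature_map), max(feature_map)
packed = quantize(feature_map, f_min, f_max)
print(list(packed))   # [0, 120, 255, 0]
```

Each 8-byte float becomes one byte, so this step alone shrinks the feature-map storage roughly eightfold, at the cost of quantizing each gray value to one of 256 levels.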
Through the above steps 501 to 504, the pending image can be converted into pixel gray values represented by character-type data, further reducing the data volume of the image relative to the embodiment shown in Fig. 2.
Step 505: the character-type data corresponding to the gray values of the pixels in the feature maps of the pending image are stored as the image compression result, together with the maximum and minimum of the gray values of the pixels in each feature map of the pending image.

In this embodiment, the character-type data corresponding to the gray values of the pixels in the feature maps of the pending image may be stored, and at the same time the maximum f_max and the minimum f_min of the gray values of the pixels in each feature map of the pending image may be stored. In this way, when the pending image is read, the stored character-type data can first be converted back into floating-point numbers using f_max and f_min to obtain the feature maps, and the feature maps can then be input into the image reconstruction model to obtain the reconstructed image. Performing step 505 further compresses the storage space occupied by the pending image.
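Reading the stored image back, as described above, inverts formula (2) using the stored f_max and f_min; the round-trip error is at most half a quantization step. The values below continue an illustrative example and are not from the patent.

```python
def dequantize(packed, f_min, f_max):
    """Invert formula (2): recover approximate floating-point gray values
    from the stored bytes plus the per-feature-map min and max."""
    scale = (f_max - f_min) / 255.0
    return [b * scale + f_min for b in packed]

f_min, f_max = 0.12, 0.93                 # stored alongside the char data
packed = bytes([0, 120, 255, 0])          # stored image compression result
restored = dequantize(packed, f_min, f_max)
original = [0.12, 0.5, 0.93, 0.12]
# Round-trip error is bounded by half a quantization step.
half_step = 0.5 * (f_max - f_min) / 255.0
print(all(abs(r - o) <= half_step for r, o in zip(restored, original)))  # True
```

The restored floating-point sequence is reshaped back into feature maps and fed to the image reconstruction model, so the quantization error shows up only as a small perturbation of the decoder's input.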
With further reference to Fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an image compression apparatus. This apparatus embodiment corresponds to the method embodiments shown in Fig. 2 and Fig. 5, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 6, the image compression apparatus 600 of this embodiment includes an acquiring unit 601 and a compression unit 602. The acquiring unit 601 may be used to obtain a pending image; the compression unit 602 may be used to perform feature extraction on the pending image using the convolutional neural network corresponding to the trained image compression model, yielding a plurality of feature maps, wherein the plurality of feature maps can be reconstructed into a reconstructed image via the deconvolution neural network corresponding to the trained image reconstruction model, and the difference between the reconstructed image of the pending image and the pending image satisfies a preset condition.
In this embodiment, the acquiring unit 601 may obtain the pending image in response to an upload request from an image capture device; it may also, in response to receiving a user instruction to perform an image compression operation, send an acquisition request to the image capture device and receive the pending image transmitted by the image capture device.
The compression unit 602 may be used to input the pending image obtained by the acquiring unit 601 into the image compression model trained using a machine learning method. The image compression model is constructed based on a convolutional neural network, is obtained through training on training sample data, and includes at least one convolutional layer. After the pending image passes through the operations of the convolutional neural network corresponding to the image compression model, a plurality of feature maps can be obtained.
The plurality of feature maps are images containing the feature information extracted from the pending image. When the convolution kernel size of a convolutional layer in the convolutional neural network corresponding to the image compression model is larger than 1 × 1, the size of an extracted feature map is smaller than the size of the pending image; through the convolution operations of multiple convolutional layers, compression of the pending image is realized.
The plurality of feature maps can be reconstructed via the deconvolution neural network corresponding to the image reconstruction model trained using a machine learning method, yielding a reconstructed image of the pending image, and the difference between the reconstructed image and the pending image satisfies a preset condition. Here, the preset condition may be that the difference is smaller than a preset threshold. Specifically, the parameters of the deconvolution neural network corresponding to the image reconstruction model may be continually optimized and adjusted during training so that the difference between the output reconstructed image and the pending image satisfies the preset condition.
In some embodiments, the image compression model and the image reconstruction model can be obtained through joint training as follows: a sample image is obtained, and a comparison step is performed. The comparison step includes: inputting the sample image into the image compression model to output a plurality of feature sample maps; inputting the plurality of feature sample maps into the deconvolution neural network corresponding to the image reconstruction model for image reconstruction, to obtain a reconstructed image of the sample image; constructing a loss function based on the difference between the sample image and the reconstructed image of the sample image; and judging whether the value of the loss function satisfies a preset convergence condition. If the judging result of the comparison step is negative, the parameters of the convolutional neural network corresponding to the image compression model and/or the parameters of the deconvolution neural network corresponding to the image reconstruction model are updated using the gradient descent method based on the loss function, and the comparison step is performed based on the updated parameters. If the judging result of the comparison step is positive, the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model are output.
In further embodiments, the apparatus 600 may also include: a first storage unit, for storing the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model.

In further embodiments, the apparatus 600 may also include: a second storage unit, for storing the plurality of feature maps as the compression result of the pending image.

In some embodiments, the feature map may be composed of at least one pixel whose gray value is a floating-point number; and the apparatus 600 may also include a converting unit, for calculating the maximum and minimum of the gray values of the pixels in each feature map of the pending image, and converting the gray values of the pixels in the feature map into character-type data based on the maximum and the minimum.

In further embodiments, the apparatus 600 may also include: a third storage unit, for storing the character-type data corresponding to the gray values of the pixels in the feature maps of the pending image as the image compression result, and storing the maximum and minimum of the gray values of the pixels in each feature map of the pending image.
It should be appreciated that the units described in the apparatus 600 correspond to the steps in the methods described with reference to Figs. 2, 3, 4 and 5. Accordingly, the operations and features described above for the methods are equally applicable to the apparatus 600 and the units contained therein, and details are not repeated here.
In the image compression apparatus 600 of the above embodiments of the present application, the acquiring unit obtains a pending image, and the compression unit then performs feature extraction on the pending image using the convolutional neural network corresponding to the trained image compression model, yielding a plurality of feature maps, wherein the plurality of feature maps can be reconstructed into a reconstructed image via the deconvolution neural network corresponding to the trained image reconstruction model, and the difference between the reconstructed image of the pending image and the pending image satisfies a preset condition. This realizes substantial compression of the image data volume while ensuring the restoration quality of the image.
Referring now to Fig. 7, which shows a structural diagram of a computer system 700 suitable for implementing a server of the embodiments of the present application. The server shown in Fig. 7 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 7, the computer system 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the system 700. The CPU 701, the ROM 702 and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, etc.; an output portion 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage portion 708 including a hard disk, etc.; and a communication portion 709 including a network interface card such as a LAN card, a modem, etc. The communication portion 709 performs communication processing via a network such as the Internet. A driver 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 710 as needed, so that a computer program read therefrom can be installed into the storage portion 708 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 709 and/or installed from the removable medium 711. When the computer program is executed by the central processing unit (CPU) 701, the above-mentioned functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, etc., or any appropriate combination of the above.
The computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, can be implemented by a dedicated hardware-based system performing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit and a compression unit. The names of these units do not, under certain circumstances, constitute limitations on the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining a pending image".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain a pending image; and perform feature extraction on the pending image using the convolutional neural network corresponding to the trained image compression model to obtain a plurality of feature maps, wherein the plurality of feature maps can be reconstructed into a reconstructed image via the deconvolution neural network corresponding to a preset image reconstruction model, and the difference between the reconstructed image of the pending image and the pending image satisfies a preset condition.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the foregoing inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (14)

1. An image compression method, comprising:
obtaining a pending image; and
performing feature extraction on the pending image using a convolutional neural network corresponding to a trained image compression model, to obtain a plurality of feature maps;
wherein the plurality of feature maps can be reconstructed into a reconstructed image via a deconvolution neural network corresponding to a trained image reconstruction model, and a difference between the reconstructed image of the pending image and the pending image satisfies a preset condition.
2. The method according to claim 1, wherein the image compression model and the image reconstruction model are obtained through training as follows:
obtaining a sample image, and performing a comparison step;
the comparison step comprising: inputting the sample image into the image compression model to output a plurality of feature sample maps; inputting the plurality of feature sample maps into the deconvolution neural network corresponding to the image reconstruction model for image reconstruction, to obtain a reconstructed image of the sample image; constructing a loss function based on a difference between the sample image and the reconstructed image of the sample image; and judging whether a value of the loss function satisfies a preset convergence condition;
if a judging result of the comparison step is negative, updating, based on the loss function and using a gradient descent method, parameters of the convolutional neural network corresponding to the image compression model and/or parameters of the deconvolution neural network corresponding to the image reconstruction model, and performing the comparison step based on the updated parameters; and
if the judging result of the comparison step is positive, outputting the parameters of the convolutional neural network corresponding to the image compression model and the parameters of the deconvolution neural network corresponding to the image reconstruction model.
3. The method according to claim 1, further comprising:
storing the parameter of the convolutional neural network corresponding to the image compression model and the parameter of the deconvolutional neural network corresponding to the image reconstruction model.
4. The method according to claim 3, further comprising:
storing the plurality of feature maps as a compression result of the to-be-processed image.
5. The method according to any one of claims 1-3, wherein each feature map is composed of at least one pixel whose gray value is a floating-point number; and
the method further comprises:
calculating a maximum value and a minimum value of the gray values of the pixels in each feature map of the to-be-processed image, and converting the gray values of the pixels in the feature map into character-type data based on the maximum value and the minimum value.
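The conversion of claim 5 — mapping floating-point gray values onto character-type (8-bit) data using each map's maximum and minimum — is ordinary min/max quantization. A sketch (the example values are arbitrary):

```python
import numpy as np

def quantize_map(fmap):
    """Map a float feature map onto 8-bit values using its min/max;
    the min/max must be kept alongside the bytes to invert the mapping."""
    lo, hi = float(fmap.min()), float(fmap.max())
    scale = (hi - lo) or 1.0                    # guard against a constant map
    q = np.round((fmap - lo) / scale * 255).astype(np.uint8)
    return q, lo, hi

def dequantize_map(q, lo, hi):
    """Approximate recovery of the float gray values from the stored bytes."""
    return q.astype(np.float64) / 255 * (hi - lo) + lo

fmap = np.array([[-1.5, 0.0], [2.3, 0.7]])
q, lo, hi = quantize_map(fmap)
restored = dequantize_map(q, lo, hi)
print(q.dtype, np.max(np.abs(restored - fmap)))
```

The round-trip error is bounded by half a quantization step, (hi − lo)/510 per pixel, which is the lossy part this conversion adds on top of the feature extraction.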
6. The method according to claim 5, further comprising:
storing the character-type data corresponding to the gray values of the pixels in the feature maps of the to-be-processed image as an image compression result, and storing the maximum value and the minimum value of the gray values of the pixels in each feature map of the to-be-processed image.
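Claim 6 stores, as the compression result, the per-map character-type data together with each map's maximum and minimum (needed later to undo the conversion). One possible layout, sketched with NumPy's `.npz` container — the container choice and key names are this sketch's assumptions, not the patent's:

```python
import io
import numpy as np

rng = np.random.default_rng(0)
maps = [rng.standard_normal((6, 6)) for _ in range(4)]  # float feature maps (stand-ins)

payload = {}
for i, m in enumerate(maps):
    lo, hi = float(m.min()), float(m.max())
    payload[f"map{i}"] = np.round((m - lo) / (hi - lo) * 255).astype(np.uint8)
    payload[f"range{i}"] = np.array([lo, hi])            # stored max/min per map

buf = io.BytesIO()
np.savez(buf, **payload)                                 # the stored compression result
buf.seek(0)

# A reader recovers approximate gray values from the bytes and the stored range.
loaded = np.load(buf)
lo, hi = loaded["range0"]
restored = loaded["map0"].astype(np.float64) / 255 * (hi - lo) + lo
print(np.allclose(restored, maps[0], atol=(hi - lo) / 255))
```

The quantized maps occupy one byte per pixel instead of four or eight for floats, which is where the storage saving of claims 4 and 6 comes from.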
7. An apparatus for compressing an image, comprising:
an acquiring unit, configured to acquire a to-be-processed image;
a compression unit, configured to perform feature extraction on the to-be-processed image using a convolutional neural network corresponding to a trained image compression model, to obtain a plurality of feature maps;
wherein the plurality of feature maps may be reconstructed into a reconstructed image via a deconvolutional neural network corresponding to a trained image reconstruction model, and a difference between the reconstructed image of the to-be-processed image and the to-be-processed image satisfies a preset condition.
8. The apparatus according to claim 7, wherein the image compression model and the image reconstruction model are trained as follows:
acquiring a sample image, and performing a comparison step;
the comparison step comprising: inputting the sample image into the image compression model to output a plurality of sample feature maps; inputting the plurality of sample feature maps into the deconvolutional neural network corresponding to the image reconstruction model for image reconstruction, to obtain a reconstructed image of the sample image; constructing a loss function based on a difference between the reconstructed image of the sample image and the sample image; and determining whether a value of the loss function satisfies a preset convergence condition;
if a determination result of the comparison step is negative, updating, based on the loss function and using a gradient descent method, a parameter of the convolutional neural network corresponding to the image compression model and/or a parameter of the deconvolutional neural network corresponding to the image reconstruction model, and performing the comparison step based on the updated parameter;
if the determination result of the comparison step is affirmative, outputting the parameter of the convolutional neural network corresponding to the image compression model and the parameter of the deconvolutional neural network corresponding to the image reconstruction model.
9. The apparatus according to claim 7, further comprising:
a first storage unit, configured to store the parameter of the convolutional neural network corresponding to the image compression model and the parameter of the deconvolutional neural network corresponding to the image reconstruction model.
10. The apparatus according to claim 9, further comprising:
a second storage unit, configured to store the plurality of feature maps as a compression result of the to-be-processed image.
11. The apparatus according to any one of claims 7-9, wherein each feature map is composed of at least one pixel whose gray value is a floating-point number; and
the apparatus further comprises a conversion unit, configured to:
calculate a maximum value and a minimum value of the gray values of the pixels in each feature map of the to-be-processed image, and convert the gray values of the pixels in the feature map into character-type data based on the maximum value and the minimum value.
12. The apparatus according to claim 11, further comprising:
a third storage unit, configured to store the character-type data corresponding to the gray values of the pixels in the feature maps of the to-be-processed image as an image compression result, and store the maximum value and the minimum value of the gray values of the pixels in each feature map of the to-be-processed image.
13. A server, comprising:
one or more processors; and
a storage device, configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium, storing a computer program thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
CN201711477239.8A 2017-12-29 2017-12-29 Method for compressing image and device Active CN108062780B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711477239.8A CN108062780B (en) 2017-12-29 2017-12-29 Method for compressing image and device
US16/130,722 US10896522B2 (en) 2017-12-29 2018-09-13 Method and apparatus for compressing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711477239.8A CN108062780B (en) 2017-12-29 2017-12-29 Method for compressing image and device

Publications (2)

Publication Number Publication Date
CN108062780A true CN108062780A (en) 2018-05-22
CN108062780B CN108062780B (en) 2019-08-09

Family

ID=62140834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711477239.8A Active CN108062780B (en) 2017-12-29 2017-12-29 Method for compressing image and device

Country Status (2)

Country Link
US (1) US10896522B2 (en)
CN (1) CN108062780B (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109089010A (en) * 2018-09-14 2018-12-25 深圳市友杰智新科技有限公司 A kind of image transfer method and device
CN109376762A (en) * 2018-09-13 2019-02-22 西安万像电子科技有限公司 Image processing method and device
CN109727195A (en) * 2018-12-25 2019-05-07 成都元点智库科技有限公司 A kind of image super-resolution reconstructing method
CN110191252A (en) * 2019-04-03 2019-08-30 陈昊 A kind of Arthroplasty surgery operative image Transmission system and transmission method
CN110222717A (en) * 2019-05-09 2019-09-10 华为技术有限公司 Image processing method and device
CN110248191A (en) * 2019-07-15 2019-09-17 山东浪潮人工智能研究院有限公司 A kind of video-frequency compression method based on deep layer convolutional neural networks
CN110728630A (en) * 2019-09-03 2020-01-24 北京爱博同心医学科技有限公司 Internet image processing method based on augmented reality and augmented reality glasses
CN110796251A (en) * 2019-10-28 2020-02-14 天津大学 Image compression optimization method based on convolutional neural network
CN110830807A (en) * 2019-11-04 2020-02-21 腾讯科技(深圳)有限公司 Image compression method, device and storage medium
CN110876062A (en) * 2018-08-31 2020-03-10 三星电子株式会社 Electronic device for high-speed compression processing of feature map and control method thereof
CN110933432A (en) * 2018-09-19 2020-03-27 珠海金山办公软件有限公司 Image compression method, image decompression method, image compression device, image decompression device, electronic equipment and storage medium
CN111046893A (en) * 2018-10-12 2020-04-21 富士通株式会社 Image similarity determining method and device, and image processing method and device
CN111050174A (en) * 2019-12-27 2020-04-21 清华大学 Image compression method, device and system
WO2020131645A1 (en) * 2018-12-17 2020-06-25 Qualcomm Incorporated Method and apparatus for providing a rendering engine model comprising a description of a neural network embedded in a media item
CN111405285A (en) * 2020-03-27 2020-07-10 北京百度网讯科技有限公司 Method and apparatus for compressing image
CN111542839A (en) * 2018-12-13 2020-08-14 深圳鲲云信息科技有限公司 Hardware acceleration method and device of deconvolution neural network and electronic equipment
CN111783965A (en) * 2020-08-14 2020-10-16 支付宝(杭州)信息技术有限公司 Method, device and system for biometric identification and electronic equipment
CN112100645A (en) * 2019-06-18 2020-12-18 中国移动通信集团浙江有限公司 Data processing method and device
WO2021000650A1 (en) * 2019-07-04 2021-01-07 国家广播电视总局广播电视科学研究院 Program distribution method and device, reception method, terminal apparatus, and medium
WO2021022685A1 (en) * 2019-08-08 2021-02-11 合肥图鸭信息科技有限公司 Neural network training method and apparatus, and terminal device
WO2021022686A1 (en) * 2019-08-08 2021-02-11 合肥图鸭信息科技有限公司 Video compression method and apparatus, and terminal device
CN113096202A (en) * 2021-03-30 2021-07-09 深圳市商汤科技有限公司 Image compression method and device, electronic equipment and computer readable storage medium
CN113596471A (en) * 2021-07-26 2021-11-02 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113822955A (en) * 2021-11-18 2021-12-21 腾讯医疗健康(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium
WO2022021938A1 (en) * 2020-07-30 2022-02-03 华为技术有限公司 Image processing method and device, and neutral network training method and device
CN114201118A (en) * 2022-02-15 2022-03-18 北京中科开迪软件有限公司 Storage method and system based on optical disk library

Families Citing this family (16)

Publication number Priority date Publication date Assignee Title
US10540574B2 (en) * 2017-12-07 2020-01-21 Shanghai Cambricon Information Technology Co., Ltd Image compression method and related device
CN108491266B (en) * 2018-03-09 2021-11-16 联想(北京)有限公司 Data processing method and device based on block chain and electronic equipment
US10489936B1 (en) * 2019-04-29 2019-11-26 Deep Render Ltd. System and method for lossy image and video compression utilizing a metanetwork
KR20210004702A (en) * 2019-07-05 2021-01-13 삼성전자주식회사 Artificial intelligence processor and performing neural network operation thereof
CN112241740B (en) * 2019-07-19 2024-03-26 新华三技术有限公司 Feature extraction method and device
CN110728726B (en) * 2019-10-24 2022-09-23 湖南大学 Image compression method based on user interaction and deep neural network
CN110880194A (en) * 2019-12-03 2020-03-13 山东浪潮人工智能研究院有限公司 Image compression method based on convolutional neural network
US11687778B2 (en) 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
CN111246206B (en) * 2020-01-14 2021-09-21 山东浪潮科学研究院有限公司 Optical flow information compression method and device based on self-encoder
CN111614358B (en) * 2020-04-30 2023-08-04 合肥的卢深视科技有限公司 Feature extraction method, system, equipment and storage medium based on multichannel quantization
US11756290B2 (en) * 2020-06-10 2023-09-12 Bank Of America Corporation System for intelligent drift matching for unstructured data in a machine learning environment
CN112351069A (en) * 2020-09-30 2021-02-09 银盛通信有限公司 System and method for automatic data uploading and maintaining transmission stability
CN113362403A (en) * 2021-07-20 2021-09-07 支付宝(杭州)信息技术有限公司 Training method and device of image processing model
CN114140346A (en) * 2021-11-15 2022-03-04 深圳集智数字科技有限公司 Image processing method and device
CN115115721B (en) * 2022-07-26 2024-03-15 北京大学深圳研究生院 Pruning method and device for neural network image compression model
WO2024054689A1 (en) * 2022-09-10 2024-03-14 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatus for transform training and coding

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105611303A (en) * 2016-03-07 2016-05-25 京东方科技集团股份有限公司 Image compression system, decompression system, training method and device, and display device
CN107018422A (en) * 2017-04-27 2017-08-04 四川大学 Still image compression method based on depth convolutional neural networks
US20170230675A1 (en) * 2016-02-05 2017-08-10 Google Inc. Compressing images using neural networks
CN107240136A (en) * 2017-05-25 2017-10-10 华北电力大学 A kind of Still Image Compression Methods based on deep learning model
CN107301668A (en) * 2017-06-14 2017-10-27 成都四方伟业软件股份有限公司 A kind of picture compression method based on sparse matrix, convolutional neural networks

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
FR2655450B1 (en) * 1989-12-06 1994-07-29 Thomson Csf METHOD OF COMPRESSING IMAGES BY SELF-ORGANIZATION OF A NEURONAL NETWORK.
US5822452A (en) * 1996-04-30 1998-10-13 3Dfx Interactive, Inc. System and method for narrow channel compression
US10223635B2 (en) * 2015-01-22 2019-03-05 Qualcomm Incorporated Model compression and fine-tuning
EP3259914A1 (en) * 2015-02-19 2017-12-27 Magic Pony Technology Limited Interpolating visual data
GB201603144D0 (en) * 2016-02-23 2016-04-06 Magic Pony Technology Ltd Training end-to-end video processes
US10460230B2 (en) * 2015-06-04 2019-10-29 Samsung Electronics Co., Ltd. Reducing computations in a neural network
US20170076195A1 (en) * 2015-09-10 2017-03-16 Intel Corporation Distributed neural networks for scalable real-time analytics
US20180053091A1 (en) * 2016-08-17 2018-02-22 Hawxeye, Inc. System and method for model compression of neural networks for use in embedded platforms
US10224058B2 (en) * 2016-09-07 2019-03-05 Google Llc Enhanced multi-channel acoustic models
WO2018053340A1 (en) * 2016-09-15 2018-03-22 Twitter, Inc. Super resolution using a generative adversarial network
US10528846B2 (en) * 2016-11-14 2020-01-07 Samsung Electronics Co., Ltd. Method and apparatus for analyzing facial image
US10482337B2 (en) * 2017-09-29 2019-11-19 Infineon Technologies Ag Accelerating convolutional neural network computation throughput


Non-Patent Citations (3)

Title
全球人工智能: "Think your images are too big?! Convolutional neural networks easily achieve lossless compression to 20%" (in Chinese), 《HTTP://WWW.SOHU.COM/A/163460325_642762》 *
孙翠荣 (Sun Cuirong): "Research on semantics-based image coding and image quality assessment methods" (in Chinese), 《China Master's Theses Full-text Database, Information Science and Technology》 *
雷克世界: "A 60% size reduction? Dartmouth College uses deep learning for image compression" (in Chinese), 《HTTP://WWW.SOHU.COM/A/204280124_390227》 *

Cited By (32)

Publication number Priority date Publication date Assignee Title
CN110876062A (en) * 2018-08-31 2020-03-10 三星电子株式会社 Electronic device for high-speed compression processing of feature map and control method thereof
CN109376762A (en) * 2018-09-13 2019-02-22 西安万像电子科技有限公司 Image processing method and device
CN109089010A (en) * 2018-09-14 2018-12-25 深圳市友杰智新科技有限公司 A kind of image transfer method and device
CN110933432A (en) * 2018-09-19 2020-03-27 珠海金山办公软件有限公司 Image compression method, image decompression method, image compression device, image decompression device, electronic equipment and storage medium
CN111046893B (en) * 2018-10-12 2024-02-02 富士通株式会社 Image similarity determining method and device, image processing method and device
CN111046893A (en) * 2018-10-12 2020-04-21 富士通株式会社 Image similarity determining method and device, and image processing method and device
CN111542839B (en) * 2018-12-13 2023-04-04 深圳鲲云信息科技有限公司 Hardware acceleration method and device of deconvolution neural network and electronic equipment
CN111542839A (en) * 2018-12-13 2020-08-14 深圳鲲云信息科技有限公司 Hardware acceleration method and device of deconvolution neural network and electronic equipment
WO2020131645A1 (en) * 2018-12-17 2020-06-25 Qualcomm Incorporated Method and apparatus for providing a rendering engine model comprising a description of a neural network embedded in a media item
US10904637B2 (en) 2018-12-17 2021-01-26 Qualcomm Incorporated Embedded rendering engine for media data
CN109727195A (en) * 2018-12-25 2019-05-07 成都元点智库科技有限公司 A kind of image super-resolution reconstructing method
CN110191252A (en) * 2019-04-03 2019-08-30 陈昊 A kind of Arthroplasty surgery operative image Transmission system and transmission method
CN110222717A (en) * 2019-05-09 2019-09-10 华为技术有限公司 Image processing method and device
CN112100645A (en) * 2019-06-18 2020-12-18 中国移动通信集团浙江有限公司 Data processing method and device
WO2021000650A1 (en) * 2019-07-04 2021-01-07 国家广播电视总局广播电视科学研究院 Program distribution method and device, reception method, terminal apparatus, and medium
CN110248191A (en) * 2019-07-15 2019-09-17 山东浪潮人工智能研究院有限公司 A kind of video-frequency compression method based on deep layer convolutional neural networks
WO2021022686A1 (en) * 2019-08-08 2021-02-11 合肥图鸭信息科技有限公司 Video compression method and apparatus, and terminal device
WO2021022685A1 (en) * 2019-08-08 2021-02-11 合肥图鸭信息科技有限公司 Neural network training method and apparatus, and terminal device
CN110728630A (en) * 2019-09-03 2020-01-24 北京爱博同心医学科技有限公司 Internet image processing method based on augmented reality and augmented reality glasses
CN110796251A (en) * 2019-10-28 2020-02-14 天津大学 Image compression optimization method based on convolutional neural network
CN110830807A (en) * 2019-11-04 2020-02-21 腾讯科技(深圳)有限公司 Image compression method, device and storage medium
CN110830807B (en) * 2019-11-04 2022-08-23 腾讯科技(深圳)有限公司 Image compression method, device and storage medium
CN111050174A (en) * 2019-12-27 2020-04-21 清华大学 Image compression method, device and system
CN111405285A (en) * 2020-03-27 2020-07-10 北京百度网讯科技有限公司 Method and apparatus for compressing image
WO2022021938A1 (en) * 2020-07-30 2022-02-03 华为技术有限公司 Image processing method and device, and neutral network training method and device
CN111783965A (en) * 2020-08-14 2020-10-16 支付宝(杭州)信息技术有限公司 Method, device and system for biometric identification and electronic equipment
CN113096202A (en) * 2021-03-30 2021-07-09 深圳市商汤科技有限公司 Image compression method and device, electronic equipment and computer readable storage medium
CN113596471A (en) * 2021-07-26 2021-11-02 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113596471B (en) * 2021-07-26 2023-09-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113822955B (en) * 2021-11-18 2022-02-22 腾讯医疗健康(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium
CN113822955A (en) * 2021-11-18 2021-12-21 腾讯医疗健康(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium
CN114201118A (en) * 2022-02-15 2022-03-18 北京中科开迪软件有限公司 Storage method and system based on optical disk library

Also Published As

Publication number Publication date
US20190206091A1 (en) 2019-07-04
CN108062780B (en) 2019-08-09
US10896522B2 (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN108062780B (en) Method for compressing image and device
CN109902659B (en) Method and apparatus for processing human body image
CN108537152A (en) Method and apparatus for detecting live body
CN108197623A (en) For detecting the method and apparatus of target
CN108416324A (en) Method and apparatus for detecting live body
CN108269254A (en) Image quality measure method and apparatus
CN110752028A (en) Image processing method, device, equipment and storage medium
CN107529098A (en) Real-time video is made a summary
US11087140B2 (en) Information generating method and apparatus applied to terminal device
CN108198130A (en) Image processing method, device, storage medium and electronic equipment
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN107958247A (en) Method and apparatus for facial image identification
CN108446650A (en) The method and apparatus of face for identification
CN108510466A (en) Method and apparatus for verifying face
WO2022111387A1 (en) Data processing method and related apparatus
CN108875931A (en) Neural metwork training and image processing method, device, system
CN112200041A (en) Video motion recognition method and device, storage medium and electronic equipment
CN114092678A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111950700A (en) Neural network optimization method and related equipment
CN109389096A (en) Detection method and device
CN108241855A (en) image generating method and device
CN115359261A (en) Image recognition method, computer-readable storage medium, and electronic device
CN110110666A (en) Object detection method and device
CN113808277A (en) Image processing method and related device
CN108595211A (en) Method and apparatus for output data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant