CN107578453A - Compressed image processing method, apparatus, electronic equipment and computer-readable medium - Google Patents

Compressed image processing method, apparatus, electronic equipment and computer-readable medium

Info

Publication number
CN107578453A
Authority
CN
China
Prior art keywords
parameter
image
neural network
preprocessing
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710974261.7A
Other languages
Chinese (zh)
Other versions
CN107578453B (en)
Inventor
周舒畅
邹雨恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Beijing Maigewei Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Beijing Maigewei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd, Beijing Maigewei Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201710974261.7A priority Critical patent/CN107578453B/en
Publication of CN107578453A publication Critical patent/CN107578453A/en
Application granted granted Critical
Publication of CN107578453B publication Critical patent/CN107578453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a compressed image processing method and apparatus, an electronic device and a computer-readable medium. An original image in JPEG format is obtained and subjected to entropy decoding and inverse quantization to obtain a preprocessed image; the preprocessed image is then used to train a neural network; finally, the parameters of the neural network trained on the preprocessed image are used to perform inference on the original image, so as to obtain an inference result. The processing can be executed on a GPU, which reduces both the preprocessing time spent decompressing the data needed for training and the time needed for neural network inference.

Description

Compressed image processing method, apparatus, electronic equipment and computer-readable medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to a compressed image processing method and apparatus, an electronic device and a computer-readable medium.
Background art
At present, JPEG pictures are decompressed on the CPU (Central Processing Unit) of a computer, and decompressing a JPEG picture into pure RGB pixels takes a long time. For many tasks, such as those in computer vision, the input training samples are pictures in JPEG format; therefore, training a neural network with pictures in this format, or using a neural network to infer information in such pictures, requires a lengthy picture decompression process, which limits the speed of neural network training and inference.
Summary of the invention
In view of this, an object of the present invention is to provide a compressed image processing method and apparatus, an electronic device and a computer-readable medium that can be executed on a GPU, reducing the preprocessing time spent decompressing the data needed for training as well as the time needed for neural network inference.
In a first aspect, an embodiment of the invention provides a compressed image processing method, the method comprising:
obtaining an original image, the format of the original image being JPEG;
subjecting the original image to entropy decoding and inverse quantization to obtain a preprocessed image;
training a neural network using the preprocessed image to obtain parameters of the neural network for the preprocessed image;
performing inference on the original image using the parameters of the neural network for the preprocessed image, so as to obtain an inference result.
With reference to the first aspect, an embodiment of the invention provides a first possible implementation of the first aspect, wherein training the neural network using the preprocessed image to obtain the parameters of the neural network for the preprocessed image comprises repeating the following processing until a first preset condition is met:
determining a neural network model, and constructing a computation graph according to the neural network model;
computing, on the computation graph, the gradients of the parameters of each layer by back-propagation;
updating the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image;
wherein the first preset condition is that the accuracy reaches a predetermined threshold or the number of iterations reaches a preset count.
With reference to the first possible implementation of the first aspect, an embodiment of the invention provides a second possible implementation of the first aspect, wherein computing, on the computation graph, the gradients of the parameters of each layer by back-propagation comprises:
inputting the preprocessed image into the neural network model to obtain the parameters of each layer of the neural network;
computing, on the computation graph, the gradients of the parameters of each layer of the neural network by back-propagation.
With reference to the second possible implementation of the first aspect, an embodiment of the invention provides a third possible implementation of the first aspect, wherein updating the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image, comprises:
obtaining different tasks;
determining corresponding learning rates according to attributes of the different tasks;
selecting, from the corresponding learning rates, a learning rate that meets a second preset condition;
updating the parameters of each layer of the neural network by gradient descent using the gradients of the parameters of each layer of the neural network and the learning rate that meets the second preset condition, to obtain the parameters of the neural network for the preprocessed image;
wherein the second preset condition is that the accuracy reaches a predetermined threshold.
With reference to the third possible implementation of the first aspect, an embodiment of the invention provides a fourth possible implementation of the first aspect, wherein the attributes of a task include a category and a dataset, and determining the corresponding learning rates according to the attributes of the different tasks comprises:
determining the corresponding learning rates according to the different categories and the datasets.
With reference to the first aspect, an embodiment of the invention provides a fifth possible implementation of the first aspect, wherein the inference result includes a classification result of a picture, a picture detection box and a picture semantic segmentation heat map.
In a second aspect, an embodiment of the invention further provides a compressed image processing apparatus, the apparatus comprising:
an acquisition module, configured to obtain an original image, the format of the original image being JPEG;
a processing module, configured to subject the original image to entropy decoding and inverse quantization to obtain a preprocessed image;
a training module, configured to train a neural network using the preprocessed image to obtain parameters of the neural network for the preprocessed image;
an inference module, configured to perform inference on the original image using the parameters of the neural network for the preprocessed image, so as to obtain an inference result.
With reference to the second aspect, an embodiment of the invention provides a first possible implementation of the second aspect, wherein the training module comprises the following units, which repeat the following processing until a first preset condition is met:
a construction unit, configured to determine a neural network model and construct a computation graph according to the neural network model;
a first computing unit, configured to compute, on the computation graph, the gradients of the parameters of each layer by back-propagation;
a first updating unit, configured to update the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image;
wherein the first preset condition is that the accuracy reaches a predetermined threshold or the number of iterations reaches a preset count.
With reference to the first possible implementation of the second aspect, an embodiment of the invention provides a second possible implementation of the second aspect, wherein the first computing unit comprises:
an input unit, configured to input the preprocessed image into the neural network model to obtain the parameters of each layer of the neural network;
a second computing unit, configured to compute, on the computation graph, the gradients of the parameters of each layer of the neural network by back-propagation.
With reference to the second possible implementation of the second aspect, an embodiment of the invention provides a third possible implementation of the second aspect, wherein the first updating unit comprises:
an acquiring unit, configured to obtain different tasks;
a determining unit, configured to determine corresponding learning rates according to attributes of the different tasks;
a selecting unit, configured to select, from the corresponding learning rates, a learning rate that meets a second preset condition;
a second updating unit, configured to update the parameters of each layer of the neural network by gradient descent using the gradients of the parameters of each layer of the neural network and the learning rate that meets the second preset condition, to obtain the parameters of the neural network for the preprocessed image;
wherein the second preset condition is that the accuracy reaches a predetermined threshold.
With reference to the third possible implementation of the second aspect, an embodiment of the invention provides a fourth possible implementation of the second aspect, wherein the attributes of a task include a category and a dataset, and the determining unit is configured to:
determine the corresponding learning rates according to the different categories and the datasets.
With reference to the second aspect, an embodiment of the invention provides a fifth possible implementation of the second aspect, wherein the inference result includes a classification result of a picture, a picture detection box and a picture semantic segmentation heat map.
In a third aspect, an embodiment of the invention further provides an electronic device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above-described method of training a neural network with compressed pictures.
In a fourth aspect, an embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, wherein the steps of the above-described method of training a neural network with compressed pictures are performed when the computer program is run by a processor.
Embodiments of the invention provide a compressed image processing method and apparatus, an electronic device and a computer-readable medium, which include: obtaining an original image in JPEG format, and subjecting the original image to entropy decoding and inverse quantization to obtain a preprocessed image; then training a neural network with the preprocessed image; and finally performing inference on the original image using the parameters of the neural network trained on the preprocessed image, so as to obtain an inference result. The processing can be executed on a GPU, reducing both the preprocessing time spent decompressing the data needed for training and the time needed for neural network inference.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the invention are realized and attained by the structure particularly pointed out in the description, the claims and the accompanying drawings.
To make the above objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To illustrate more clearly the specific embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the electronic device provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the compressed image processing method provided by Embodiment 2 of the present invention;
Fig. 3 is a flowchart of step S103 of the compressed image processing method provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the compressed image processing apparatus provided by Embodiment 3 of the present invention.
Reference numerals:
10 - acquisition module; 20 - processing module; 30 - training module; 40 - inference module; 31 - construction unit; 32 - first computing unit; 33 - first updating unit; 100 - electronic device; 102 - processor; 104 - storage device; 106 - input device; 108 - output device; 110 - image acquisition device; 112 - bus system.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To facilitate understanding of the present embodiments, the embodiments of the present invention are described in detail below.
Embodiment 1:
Fig. 1 is a schematic diagram of the electronic device provided by Embodiment 1 of the present invention.
Referring to Fig. 1, an exemplary electronic device 100 for implementing the compressed image processing method and apparatus of the embodiments of the present invention includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108 and an image acquisition device 110, these components being interconnected by a bus system 112 and/or other forms of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than limiting; the electronic device may have other components and structures as needed.
The processor 102 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components of the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functions (implemented by the processor) and/or other desired functions of the embodiments of the invention described below. Various application programs and various data, such as data used and/or produced by the application programs, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, etc.
The output device 108 may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a speaker, etc.
The image acquisition device 110 may capture images desired by the user (for example, photos, videos, etc.) and store the captured images in the storage device 104 for use by other components. It should be understood that the image acquisition device 110 is only an example, and the electronic device 100 may omit it; in that case, another component with image acquisition capability may capture the face image and send the captured image to the electronic device 100.
Illustratively, the exemplary electronic device for implementing the method and apparatus for training a neural network with compressed pictures provided according to the embodiments of the present invention may be implemented as a terminal device such as a smart phone, a tablet computer, a face recognition door lock, a personal computer or a remote server.
Embodiment 2:
Fig. 2 is a flowchart of the method for training a neural network with compressed pictures provided by Embodiment 2 of the present invention.
According to an embodiment of the present invention, an embodiment of a method for training a neural network with compressed pictures is provided. It should be noted that the steps illustrated in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, the steps shown or described may in some cases be performed in an order different from the one given here.
Referring to Fig. 2, the method includes the following steps.
Step S101: obtain an original image, the format of the original image being JPEG, where JPEG is an international image compression standard.
The original image may be an image captured by an image acquisition device (for example, the camera of a terminal device), or an image obtained after preprocessing such a captured image. The original image may be a single still image or a video frame in a video stream.
Step S102: subject the original image to entropy decoding and inverse quantization to obtain a preprocessed image.
Here, JPEG decompression consists of five steps, typically applied to the original image in sequence: entropy decoding, inverse quantization, inverse discrete cosine transform, upsampling and color space conversion. The three more time-consuming of these steps are the inverse discrete cosine transform, upsampling and color space conversion. These three steps can therefore be regarded as a convolution operation, merged into a convolutional layer of the neural network and trained together with it, thereby reducing the preprocessing time spent decompressing the data needed for training as well as the neural network inference time (a sketch of this equivalence is given below).
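As an illustration of this equivalence, the following minimal NumPy sketch (not code from the patent; the block size, variable names and random coefficients are illustrative assumptions) shows that the 2-D inverse DCT of an 8x8 coefficient block is a fixed linear map, which is exactly what a stride-8 convolution with 64 fixed filters computes; the upsampling and color space conversion steps are likewise fixed linear maps and can be absorbed in the same way.

    import numpy as np

    N = 8

    def dct_basis(n=N):
        """Orthonormal 1-D DCT-II basis C, so that X = C @ x and x = C.T @ X."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C *= np.sqrt(2.0 / n)
        C[0, :] /= np.sqrt(2.0)
        return C

    C = dct_basis()

    def idct2(block):
        """2-D inverse DCT of one 8x8 dequantized coefficient block."""
        return C.T @ block @ C

    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(N, N))   # stand-in output of entropy decoding + dequantization

    # Convolution view: flattening the block and multiplying by a fixed 64x64
    # weight matrix is what an 8x8, stride-8 convolution with 64 filters computes.
    W = np.kron(C.T, C.T)              # one row of W per output pixel of the block
    pixels_conv = (W @ coeffs.reshape(-1)).reshape(N, N)

    assert np.allclose(idct2(coeffs), pixels_conv)
    print("IDCT reproduced by a fixed (convolution-style) linear layer")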
Part of the decompression process is thus converted from running on the CPU (Central Processing Unit) into operations of the neural network and executed on the GPU (Graphics Processing Unit), which allows a higher degree of parallelization and accelerates decompression.
In addition, the preprocessed image obtained by subjecting the original image to entropy decoding and inverse quantization is used for neural network training.
Step S103: train the neural network using the preprocessed image to obtain the parameters of the neural network for the preprocessed image.
Here, the preprocessed image goes through computation graph construction, back-propagation and parameter updating in the neural network algorithm, so as to obtain the parameters of the neural network for the preprocessed image.
Step S104: perform inference on the original image using the parameters of the neural network for the preprocessed image, so as to obtain an inference result.
Here, the inference result includes a picture classification result, a picture detection box or a picture semantic segmentation heat map. The inference result varies with the task and is not limited to the above three cases; other cases are also possible.
Specifically, the parameters of the neural network for the preprocessed image can be used for inference: the inference process uses the constructed computation graph, performs forward propagation according to the computation graph, and finally takes the output node of the computation graph as the inference result.
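A minimal PyTorch-style sketch of this inference step is given below; it is an illustration only, and the tensor layout, the stand-in network and the argmax read-out are assumptions rather than the patent's own code.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def infer(model, coeff_blocks, device="cuda" if torch.cuda.is_available() else "cpu"):
        """Forward propagation only: coeff_blocks holds the dequantized DCT
        coefficients of one original image, e.g. shaped (1, 64, H // 8, W // 8)."""
        model = model.to(device).eval()
        output = model(coeff_blocks.to(device))  # propagate forward through the computation graph
        return output.argmax(dim=1)              # classification label; other tasks read other output nodes

    # Illustrative usage with a stand-in "trained" network and random coefficients.
    dummy_model = nn.Sequential(nn.Conv2d(64, 10, 1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
    dummy_coeffs = torch.randn(1, 64, 4, 4)
    print(infer(dummy_model, dummy_coeffs))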
In summary, the original image is obtained and put through the first two steps of the JPEG decompression process, namely entropy decoding and inverse quantization, to obtain a preprocessed picture; the three more time-consuming steps, namely the inverse discrete cosine transform, upsampling and color space conversion, are regarded as a convolution operation and merged into a convolutional layer of the neural network for training, yielding the neural network parameters for the preprocessed image; finally, inference is performed on the original image using the parameters of the neural network for the preprocessed image to obtain an inference result, thereby reducing the preprocessing time spent decompressing the data needed for training as well as the neural network inference time.
Further, referring to Fig. 3, step S103 includes the following steps, which are repeated until a first preset condition is met:
Step S201: determine a neural network model and construct a computation graph according to the neural network model.
Step S202: compute, on the computation graph, the gradients of the parameters of each layer by back-propagation.
Step S203: update the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image.
Here, training on the preprocessed image with the neural network algorithm involves three steps: the first is constructing the computation graph, the second is back-propagation, and the third is updating the parameters. The neural network algorithm includes a neural network model; once the neural network model in the algorithm has been determined, the computation graph is constructed according to it. Steps S201 to S203 are executed in a loop and stop once the first preset condition is met, yielding the parameters of the neural network for the preprocessed image, where the first preset condition is that the accuracy reaches a predetermined threshold or the number of iterations reaches a preset count. For a classification task, the accuracy refers to the classification accuracy; for other tasks, the corresponding metric is used and checked against its predetermined threshold.
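The loop of steps S201 to S203 and the first preset condition can be sketched as follows in PyTorch-style Python; the network layout, the random stand-in data loader and the thresholds are hypothetical, not taken from the patent.

    import torch
    import torch.nn as nn

    # Hypothetical model: a convolutional layer standing in for the merged
    # IDCT/upsampling/color-conversion step, followed by ordinary layers (S201).
    model = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=1),
        nn.Conv2d(64, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def next_batch(batch=8):
        # Stand-in loader: random "dequantized coefficient" tensors and labels.
        return torch.randn(batch, 64, 4, 4), torch.randint(0, 10, (batch,))

    max_iters, target_acc = 1000, 0.95           # first preset condition: count or accuracy
    for step in range(max_iters):
        coeffs, labels = next_batch()
        logits = model(coeffs)                   # forward pass over the computation graph
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()                          # S202: back-propagation gives per-layer gradients
        optimizer.step()                         # S203: gradient descent update with the learning rate
        accuracy = (logits.argmax(dim=1) == labels).float().mean().item()
        if accuracy >= target_acc:               # stop once the accuracy threshold is reached
            break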
Specifically, step S202 includes the following steps.
Step S301: input the preprocessed image into the neural network model to obtain the parameters of each layer of the neural network.
Step S302: compute, on the computation graph, the gradients of the parameters of each layer of the neural network by back-propagation.
Here, the neural network includes multiple layers, such as convolutional layers, pooling layers and fully connected layers. By inputting the preprocessed image into the neural network model, the parameters of the convolutional layers, the pooling layers and the fully connected layers are obtained; the gradients of the parameters of the convolutional layers, the pooling layers and the fully connected layers are then computed on the computation graph by back-propagation.
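As a hedged illustration of these per-layer gradients (the layer sizes and data below are made up, not taken from the patent), a small network with a convolutional, pooling and fully connected layer can be built and back-propagated through as follows:

    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Conv2d(64, 16, kernel_size=3, padding=1),  # convolutional layer
        nn.MaxPool2d(2),                              # pooling layer
        nn.Flatten(),
        nn.Linear(16 * 2 * 2, 10),                    # fully connected layer
    )
    x = torch.randn(4, 64, 4, 4)                      # stand-in preprocessed input
    loss = nn.functional.cross_entropy(net(x), torch.randint(0, 10, (4,)))
    loss.backward()                                   # back-propagation fills .grad for every layer

    for name, p in net.named_parameters():            # per-layer parameter gradients
        print(name, tuple(p.grad.shape))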
Specifically, step S203 includes the following steps.
Step S401: obtain different tasks.
Step S402: determine corresponding learning rates according to the attributes of the different tasks.
Here, the neural network is applied to tasks, and different tasks correspond to different learning rates. The corresponding learning rate is determined by examining the attributes of the task; the attributes of a task include its category and its dataset, and the corresponding learning rates are determined according to the different categories and datasets.
Step S403: select, from the corresponding learning rates, a learning rate that meets a second preset condition.
Here, the second preset condition may be that the accuracy reaches a predetermined threshold, or that some other metric is met.
Step S404: update the parameters of each layer of the neural network by gradient descent using the gradients of the parameters of each layer of the neural network and the learning rate that meets the second preset condition, to obtain the parameters of the neural network for the preprocessed image.
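One way steps S401 to S404 could look in code is sketched below; the task categories, dataset names, candidate rates and threshold are illustrative assumptions and not values given in the patent.

    # Candidate learning rates keyed by the task attributes (category, dataset).
    CANDIDATE_LR = {
        ("classification", "large-dataset"): 0.1,
        ("classification", "small-dataset"): 0.01,
        ("detection", "large-dataset"): 0.02,
        ("segmentation", "large-dataset"): 0.005,
    }

    def choose_learning_rate(category, dataset, evaluate_accuracy, threshold=0.9):
        """Return a learning rate meeting the second preset condition;
        evaluate_accuracy(lr) is assumed to run a short trial and return accuracy."""
        lr = CANDIDATE_LR[(category, dataset)]   # S402: rate determined from the task attributes
        if evaluate_accuracy(lr) >= threshold:   # S403: second preset condition (accuracy threshold)
            return lr
        return lr * 0.1                          # otherwise fall back to a smaller rate

    def gradient_descent_update(param, grad, lr):
        """S404: update one layer's parameters with the chosen learning rate."""
        return param - lr * grad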
An embodiment of the invention provides a compressed image processing method including: obtaining an original image in JPEG format; subjecting the original image to entropy decoding and inverse quantization to obtain a preprocessed image; training a neural network with the preprocessed image; and finally performing inference on the original image using the parameters of the neural network trained on the preprocessed image, so as to obtain an inference result. The processing can be executed on a GPU, reducing both the preprocessing time spent decompressing the data needed for training and the neural network inference time.
Embodiment 3:
An embodiment of the present invention further provides a compressed image processing apparatus, which is mainly used to perform the compressed image processing method provided by the foregoing embodiments of the present invention. The compressed image processing apparatus provided by an embodiment of the present invention is described in detail below.
Fig. 4 is a schematic diagram of the compressed image processing apparatus provided by Embodiment 3 of the present invention.
Referring to Fig. 4, the apparatus includes an acquisition module 10, a processing module 20, a training module 30 and an inference module 40, where the training module 30 includes a construction unit 31, a first computing unit 32 and a first updating unit 33.
The acquisition module 10 is configured to obtain an original image, the format of the original image being JPEG.
The processing module 20 is configured to subject the original image to entropy decoding and inverse quantization to obtain a preprocessed image.
The training module 30 is configured to train the neural network using the preprocessed image to obtain the parameters of the neural network for the preprocessed image.
The inference module 40 is configured to perform inference on the original image using the parameters of the neural network for the preprocessed image, so as to obtain an inference result.
Further, the training module 30 includes the following units, which repeat the following processing until a first preset condition is met:
the construction unit 31, configured to determine a neural network model and construct a computation graph according to the neural network model;
the first computing unit 32, configured to compute, on the computation graph, the gradients of the parameters of each layer by back-propagation;
the first updating unit 33, configured to update the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image;
wherein the first preset condition is that the accuracy reaches a predetermined threshold or the number of iterations reaches a preset count.
Further, the first computing unit 32 includes:
an input unit (not shown), configured to input the preprocessed image into the neural network model to obtain the parameters of each layer of the neural network;
a second computing unit (not shown), configured to compute, on the computation graph, the gradients of the parameters of each layer of the neural network by back-propagation.
Further, the first updating unit 33 includes:
an acquiring unit (not shown), configured to obtain different tasks;
a determining unit (not shown), configured to determine corresponding learning rates according to the attributes of the different tasks;
a selecting unit (not shown), configured to select, from the corresponding learning rates, a learning rate that meets a second preset condition;
a second updating unit (not shown), configured to update the parameters of each layer of the neural network by gradient descent using the gradients of the parameters of each layer of the neural network and the learning rate that meets the second preset condition, to obtain the parameters of the neural network for the preprocessed image;
wherein the second preset condition is that the accuracy reaches a predetermined threshold.
Further, the attributes of a task include a category and a dataset, and the determining unit (not shown) is configured to:
determine the corresponding learning rates according to the different categories and the datasets.
Further, the inference result includes a picture classification result, a picture detection box and a picture semantic segmentation heat map.
An embodiment of the invention provides a compressed image processing apparatus which: obtains an original image in JPEG format; subjects the original image to entropy decoding and inverse quantization to obtain a preprocessed image; trains a neural network with the preprocessed image; and finally performs inference on the original image using the parameters of the neural network trained on the preprocessed image, so as to obtain an inference result. The processing can be executed on a GPU, reducing both the preprocessing time spent decompressing the data needed for training and the neural network inference time.
The computer program product provided by the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to perform the methods described in the foregoing method embodiments; for specific implementation, refer to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatuses and devices described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified and defined, the terms "mounted", "connected" and "coupled" should be understood broadly; for example, a connection may be fixed, detachable or integral, may be mechanical or electrical, and may be direct, indirect via an intermediate medium, or internal between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, or a part of the technical solution, may essentially be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
In the description of the present invention, it should be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings, are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the present invention. In addition, the terms "first", "second" and "third" are used only for descriptive purposes and are not to be understood as indicating or implying relative importance.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present invention used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.

Claims (14)

  1. A compressed image processing method, characterized in that the method comprises:
    obtaining an original image, the format of the original image being JPEG;
    subjecting the original image to entropy decoding and inverse quantization to obtain a preprocessed image;
    training a neural network using the preprocessed image to obtain parameters of the neural network for the preprocessed image;
    performing inference on the original image using the parameters of the neural network for the preprocessed image, so as to obtain an inference result.
  2. The compressed image processing method according to claim 1, characterized in that training the neural network using the preprocessed image to obtain the parameters of the neural network for the preprocessed image comprises repeating the following processing until a first preset condition is met:
    determining a neural network model, and constructing a computation graph according to the neural network model;
    computing, on the computation graph, the gradients of the parameters of each layer by back-propagation;
    updating the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image;
    wherein the first preset condition is that the accuracy reaches a predetermined threshold or the number of iterations reaches a preset count.
  3. The compressed image processing method according to claim 2, characterized in that computing, on the computation graph, the gradients of the parameters of each layer by back-propagation comprises:
    inputting the preprocessed image into the neural network model to obtain the parameters of each layer of the neural network;
    computing, on the computation graph, the gradients of the parameters of each layer of the neural network by back-propagation.
  4. The compressed image processing method according to claim 3, characterized in that updating the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image, comprises:
    obtaining different tasks;
    determining corresponding learning rates according to attributes of the different tasks;
    selecting, from the corresponding learning rates, a learning rate that meets a second preset condition;
    updating the parameters of each layer of the neural network by gradient descent using the gradients of the parameters of each layer of the neural network and the learning rate that meets the second preset condition, to obtain the parameters of the neural network for the preprocessed image;
    wherein the second preset condition is that the accuracy reaches a predetermined threshold.
  5. The compressed image processing method according to claim 4, characterized in that the attributes of a task include a category and a dataset, and determining the corresponding learning rates according to the attributes of the different tasks comprises:
    determining the corresponding learning rates according to the different categories and the datasets.
  6. The compressed image processing method according to claim 1, characterized in that the inference result includes a classification result of a picture, a picture detection box and a picture semantic segmentation heat map.
  7. A compressed image processing apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to obtain an original image, the format of the original image being JPEG;
    a processing module, configured to subject the original image to entropy decoding and inverse quantization to obtain a preprocessed image;
    a training module, configured to train a neural network using the preprocessed image to obtain parameters of the neural network for the preprocessed image;
    an inference module, configured to perform inference on the original image using the parameters of the neural network for the preprocessed image, so as to obtain an inference result.
  8. The compressed image processing apparatus according to claim 7, characterized in that the training module comprises the following units, which repeat the following processing until a first preset condition is met:
    a construction unit, configured to determine a neural network model and construct a computation graph according to the neural network model;
    a first computing unit, configured to compute, on the computation graph, the gradients of the parameters of each layer by back-propagation;
    a first updating unit, configured to update the parameters of each layer by gradient descent using the gradients of the parameters of each layer and a learning rate, so as to obtain the parameters of the neural network for the preprocessed image;
    wherein the first preset condition is that the accuracy reaches a predetermined threshold or the number of iterations reaches a preset count.
  9. The compressed image processing apparatus according to claim 8, characterized in that the first computing unit comprises:
    an input unit, configured to input the preprocessed image into the neural network model to obtain the parameters of each layer of the neural network;
    a second computing unit, configured to compute, on the computation graph, the gradients of the parameters of each layer of the neural network by back-propagation.
  10. The compressed image processing apparatus according to claim 9, characterized in that the first updating unit comprises:
    an acquiring unit, configured to obtain different tasks;
    a determining unit, configured to determine corresponding learning rates according to attributes of the different tasks;
    a selecting unit, configured to select, from the corresponding learning rates, a learning rate that meets a second preset condition;
    a second updating unit, configured to update the parameters of each layer of the neural network by gradient descent using the gradients of the parameters of each layer of the neural network and the learning rate that meets the second preset condition, to obtain the parameters of the neural network for the preprocessed image;
    wherein the second preset condition is that the accuracy reaches a predetermined threshold.
  11. The compressed image processing apparatus according to claim 10, characterized in that the attributes of a task include a category and a dataset, and the determining unit is configured to:
    determine the corresponding learning rates according to the different categories and the datasets.
  12. The compressed image processing apparatus according to claim 7, characterized in that the inference result includes a classification result of a picture, a picture detection box and a picture semantic segmentation heat map.
  13. An electronic device, comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
  14. A computer-readable storage medium on which a computer program is stored, characterized in that the steps of the method according to any one of claims 1 to 6 are performed when the computer program is run by a processor.
CN201710974261.7A 2017-10-18 2017-10-18 Compressed image processing method, apparatus, electronic equipment and computer-readable medium Active CN107578453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710974261.7A CN107578453B (en) 2017-10-18 2017-10-18 Compressed image processing method, apparatus, electronic equipment and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710974261.7A CN107578453B (en) 2017-10-18 2017-10-18 Compressed image processing method, apparatus, electronic equipment and computer-readable medium

Publications (2)

Publication Number Publication Date
CN107578453A true CN107578453A (en) 2018-01-12
CN107578453B CN107578453B (en) 2019-11-01

Family

ID=61037662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710974261.7A Active CN107578453B (en) 2017-10-18 2017-10-18 Compressed image processing method, apparatus, electronic equipment and computer-readable medium

Country Status (1)

Country Link
CN (1) CN107578453B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927736A (en) * 2013-01-14 2014-07-16 安凯(广州)微电子技术有限公司 Matching method and device of images based on JPEG format
CN104661037A (en) * 2013-11-19 2015-05-27 中国科学院深圳先进技术研究院 Tampering detection method and system for compressed image quantization table
US20160328630A1 (en) * 2015-05-08 2016-11-10 Samsung Electronics Co., Ltd. Object recognition apparatus and method
CN106096670A (en) * 2016-06-17 2016-11-09 北京市商汤科技开发有限公司 Concatenated convolutional neural metwork training and image detecting method, Apparatus and system
CN106407931A (en) * 2016-09-19 2017-02-15 杭州电子科技大学 Novel deep convolution neural network moving vehicle detection method
CN107018422A (en) * 2017-04-27 2017-08-04 四川大学 Still image compression method based on depth convolutional neural networks

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110207871B (en) * 2018-02-28 2021-04-06 新疆金风科技股份有限公司 Method, device, storage medium and system for stress prediction of wind turbine generator
CN110207871A (en) * 2018-02-28 2019-09-06 新疆金风科技股份有限公司 Method, apparatus, storage medium and the system of the stress prediction of Wind turbines
CN108830288A (en) * 2018-04-25 2018-11-16 北京市商汤科技开发有限公司 Image processing method, the training method of neural network, device, equipment and medium
US11334763B2 (en) 2018-04-25 2022-05-17 Beijing Sensetime Technology Development Co., Ltd. Image processing methods, training methods, apparatuses, devices, media, and programs
CN108764046A (en) * 2018-04-26 2018-11-06 平安科技(深圳)有限公司 Generating means, method and the computer readable storage medium of vehicle damage disaggregated model
CN108898174A (en) * 2018-06-25 2018-11-27 Oppo(重庆)智能科技有限公司 A kind of contextual data acquisition method, contextual data acquisition device and electronic equipment
CN109242755A (en) * 2018-08-01 2019-01-18 浙江深眸科技有限公司 Computer vision processing server framework neural network based
CN110770756A (en) * 2018-09-28 2020-02-07 深圳市大疆创新科技有限公司 Data processing method and device and unmanned aerial vehicle
CN109472360A (en) * 2018-10-30 2019-03-15 北京地平线机器人技术研发有限公司 Update method, updating device and the electronic equipment of neural network
CN109472360B (en) * 2018-10-30 2020-09-04 北京地平线机器人技术研发有限公司 Neural network updating method and updating device and electronic equipment
US11328180B2 (en) 2018-10-30 2022-05-10 Beijing Horizon Robotics Technology Research And Development Co., Ltd. Method for updating neural network and electronic device
CN109543766A (en) * 2018-11-28 2019-03-29 钟祥博谦信息科技有限公司 Image processing method and electronic equipment, storage medium
CN111210467A (en) * 2018-12-27 2020-05-29 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109815875A (en) * 2019-01-17 2019-05-28 柳州康云互联科技有限公司 One kind is for versicolor system in internet detection
CN109996073A (en) * 2019-02-26 2019-07-09 山东师范大学 A kind of method for compressing image, system, readable storage medium storing program for executing and computer equipment
CN109996073B (en) * 2019-02-26 2020-11-20 山东师范大学 Image compression method, system, readable storage medium and computer equipment
CN110517329A (en) * 2019-08-12 2019-11-29 北京邮电大学 A kind of deep learning method for compressing image based on semantic analysis
CN110517329B (en) * 2019-08-12 2021-05-14 北京邮电大学 Deep learning image compression method based on semantic analysis
CN113747145B (en) * 2020-05-29 2024-02-27 Oppo广东移动通信有限公司 Image processing circuit, electronic apparatus, and image processing method
CN113747145A (en) * 2020-05-29 2021-12-03 Oppo广东移动通信有限公司 Image processing circuit, electronic device, and image processing method
CN111931770A (en) * 2020-09-16 2020-11-13 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN112270403A (en) * 2020-11-10 2021-01-26 北京百度网讯科技有限公司 Method, device, equipment and storage medium for constructing deep learning network model
CN112270403B (en) * 2020-11-10 2022-03-29 北京百度网讯科技有限公司 Method, device, equipment and storage medium for constructing deep learning network model
CN112633140B (en) * 2020-12-21 2023-09-01 华南农业大学 Multi-spectrum remote sensing image city village multi-category building semantic segmentation method and system
CN112633140A (en) * 2020-12-21 2021-04-09 华南农业大学 Multi-spectral remote sensing image urban village multi-category building semantic segmentation method and system
CN115761448A (en) * 2022-12-02 2023-03-07 美的集团(上海)有限公司 Training method and device for neural network and readable storage medium
CN115761448B (en) * 2022-12-02 2024-03-01 美的集团(上海)有限公司 Training method, training device and readable storage medium for neural network
CN115797228A (en) * 2023-01-30 2023-03-14 深圳市九天睿芯科技有限公司 Image processing device, method, chip, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107578453B (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN107578453A (en) Compressed image processing method, apparatus, electronic equipment and computer-readable medium
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
CN106203376B (en) Face key point positioning method and device
CN108062780A (en) Method for compressing image and device
CN110084281A (en) Image generating method, the compression method of neural network and relevant apparatus, equipment
CN112396613B (en) Image segmentation method, device, computer equipment and storage medium
CN109214343A (en) Method and apparatus for generating face critical point detection model
CN110956202B (en) Image training method, system, medium and intelligent device based on distributed learning
CN109034206A (en) Image classification recognition methods, device, electronic equipment and computer-readable medium
CN111125519B (en) User behavior prediction method, device, electronic equipment and storage medium
CN110399788A (en) AU detection method, device, electronic equipment and the storage medium of image
CN107392189A (en) For the method and apparatus for the driving behavior for determining unmanned vehicle
CN112418292A (en) Image quality evaluation method and device, computer equipment and storage medium
CN111950700A (en) Neural network optimization method and related equipment
CN108875931A (en) Neural metwork training and image processing method, device, system
US11741678B2 (en) Virtual object construction method, apparatus and storage medium
CN113191479A (en) Method, system, node and storage medium for joint learning
CN115205150A (en) Image deblurring method, device, equipment, medium and computer program product
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN113869496A (en) Acquisition method of neural network, data processing method and related equipment
WO2022228142A1 (en) Object density determination method and apparatus, computer device and storage medium
CN110838306A (en) Voice signal detection method, computer storage medium and related equipment
WO2023174256A1 (en) Data compression method and related device
CN116090543A (en) Model compression method and device, computer readable medium and electronic equipment
CN115035559A (en) Face living body detection method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant