CN110569961A - neural network training method and device and terminal equipment - Google Patents

Neural network training method and device and terminal equipment

Info

Publication number
CN110569961A
CN110569961A (application CN201910727890.9A)
Authority
CN
China
Prior art keywords
neural network
image
noise
noise generation
weight parameters
Prior art date
Legal status
Pending
Application number
CN201910727890.9A
Other languages
Chinese (zh)
Inventor
孙振鉷
Current Assignee
Hefei Map Duck Mdt Infotech Ltd
Original Assignee
Hefei Map Duck Mdt Infotech Ltd
Priority date
Filing date
Publication date
Application filed by Hefei Map Duck Mdt Infotech Ltd
Priority to CN201910727890.9A
Priority to PCT/CN2019/114946 (published as WO2021022685A1)
Publication of CN110569961A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention is applicable to the technical field of image compression, and provides a neural network training method, a device, terminal equipment, and a computer storage medium. The neural network training method comprises the following steps. Step A: generating image noise. Step B: inputting the image noise into a neural network to generate a corresponding noise-generated image. Step C: adjusting the weight parameters of the neural network according to the noise-generated image and the original image, and updating the neural network of step B according to the adjusted weight parameters. Step D: repeatedly executing steps B to C until the neural network meets a preset condition. The invention adjusts the weight parameters of the neural network according to the noise-generated image and the original image, and performs image compression with the neural network whose weight parameters have been adjusted, thereby improving the image compression effect and solving the problem of slow decoding in image compression algorithms.

Description

Neural network training method and device and terminal equipment
Technical Field
The invention belongs to the technical field of image compression, and particularly relates to a neural network training method, a neural network training device, and terminal equipment.
Background
Conventional image compression algorithms, such as JPEG and JPEG 2000, achieve an extremely high compression rate but greatly lose high-frequency information in an image, resulting in a large loss of image information and hence image distortion. At present, high-definition pictures on websites and social media are increasing day by day, and bandwidth consumption is increasing with them: left uncompressed, the pictures occupy too much resource space, while a conventional image compression algorithm leaves them unclear after compression.
The invention provides a neural network training method that adjusts the weight parameters of a neural network according to generated noise until the neural network meets a preset index, and performs image compression with the neural network whose weight parameters have been adjusted, thereby improving the image compression effect and solving the problem of slow decoding in conventional deep-learning image compression algorithms.
Disclosure of Invention
In view of this, embodiments of the present invention provide a neural network training method, a device, and terminal equipment, so as to solve the problems in the prior art that the compression effect is poor and decoding in image compression algorithms is too slow.
A first aspect of an embodiment of the present invention provides a neural network training method, including:
Step A: generating image noise;
Step B: inputting the image noise into a neural network to generate a corresponding noise-generated image;
Step C: adjusting the weight parameters of the neural network according to the noise-generated image and the original image, and updating the neural network of step B according to the adjusted weight parameters;
Step D: repeatedly executing steps B to C until the neural network satisfies a preset condition.
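Steps A to D above can be sketched as a toy training loop. This is an illustrative assumption, not the patented convolutional network: a single element-wise layer with one weight per pixel stands in for the neural network, and uniform noise replaces Gaussian noise purely so the toy converges for any seed.

```python
import random

def train_to_reproduce(original, lr=0.3, max_iters=500, tol=1e-6):
    """Toy sketch of steps A-D: fit a per-pixel weight vector so that
    (weight * noise) reproduces the original image."""
    rng = random.Random(0)
    # Step A: generate image noise (one random value per pixel).
    noise = [rng.uniform(0.5, 1.5) for _ in original]
    w = [0.0] * len(original)          # weight parameters of the toy "network"
    mse = float("inf")
    for _ in range(max_iters):         # Step D: repeat steps B-C
        # Step B: produce the noise-generated image.
        generated = [wi * ni for wi, ni in zip(w, noise)]
        # Step C: gradient of the squared error, then a weight update.
        grad = [2.0 * (g - x) * n for g, x, n in zip(generated, original, noise)]
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
        mse = sum((g - x) ** 2 for g, x in zip(generated, original)) / len(original)
        if mse < tol:                  # preset condition satisfied
            break
    return w, noise, mse
```

After training, `w[i] * noise[i]` approximates `original[i]`, mirroring how the trained network regenerates the image from the fixed noise.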
Optionally, the inputting of the image noise into the neural network to generate a corresponding noise-generated image comprises:
performing a convolution operation on the image noise in the neural network to generate the corresponding noise-generated image.
Optionally, the adjusting of the weight parameters of the neural network according to the noise-generated image and the original image includes:
generating a loss function according to the noise-generated image and the original image;
performing a gradient update according to the loss function;
adjusting the weight parameters of the neural network through the gradient update.
Optionally, the repeatedly executing of steps B to C until the neural network satisfies a preset condition includes:
repeatedly executing steps B to C until a performance index of the image generated by the neural network reaches a preset threshold,
or
repeatedly executing steps B to C until the number of executions reaches a preset number of times.
Optionally, after step D, the method further includes:
extracting the weight parameters of the neural network and taking the weight parameters as a characteristic image;
entropy coding the characteristic image to obtain coded data;
entropy decoding the coded data to generate reconstruction weight parameters;
updating the neural network according to the reconstruction weight parameters;
inputting the image noise into the neural network updated with the reconstruction weight parameters, and generating a reconstructed image.
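The post-step-D pipeline above (extract weights, entropy code, entropy decode, reconstruct) can be sketched as a round trip. The text does not name a particular entropy coder, so zlib is assumed here as a stand-in:

```python
import struct
import zlib

def encode_weights(weights):
    """Treat the extracted weight parameters as the characteristic
    'image': serialise them as 32-bit floats and entropy-code the
    bytes (zlib standing in for the unspecified entropy coder)."""
    raw = struct.pack("%df" % len(weights), *weights)
    return zlib.compress(raw)

def decode_weights(coded_data):
    """Entropy-decode the coded data back into reconstruction weight
    parameters, with which the neural network would be updated."""
    raw = zlib.decompress(coded_data)
    return list(struct.unpack("%df" % (len(raw) // 4), raw))
```

Feeding the same fixed image noise through a network rebuilt from `decode_weights(...)` then yields the reconstructed image; only the coded weights need to be transmitted.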
A second aspect of an embodiment of the present invention provides a neural network training device, including:
an image noise generation module, configured to generate image noise;
a convolution module, configured to input the image noise into the neural network to generate a corresponding noise-generated image;
a neural network updating module, configured to adjust the weight parameters of the neural network according to the noise-generated image and the original image, and update the neural network of step B according to the adjusted weight parameters;
a loop module, configured to repeatedly execute steps B to C until the neural network satisfies the preset condition.
Optionally, the convolution module includes:
a convolution operation unit, configured to perform a convolution operation on the image noise in the neural network to generate the corresponding noise-generated image.
Optionally, the neural network updating module includes:
a loss function unit, configured to generate a loss function according to the noise-generated image and the original image;
a gradient updating unit, configured to perform a gradient update according to the loss function;
a parameter adjusting unit, configured to adjust the weight parameters of the neural network through the gradient update.
A third aspect of embodiments of the present invention provides neural network training terminal equipment, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method provided in the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method provided in the first aspect.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: the invention trains the neural network by adjusting its weight parameters, improves the image compression effect, and solves the problem of slow decoding in image compression algorithms.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without inventive effort.
FIG. 1 is a schematic diagram of the implementation flow of a neural network training method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a convolutional neural network structure provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process for adjusting weight parameters according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a neural network training device provided in an embodiment of the present invention;
FIG. 5 is a schematic diagram of neural network training terminal equipment according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, specific examples are described below.
Example one
FIG. 1 shows the implementation flow of the neural network training method according to the first embodiment of the present invention. The execution subject of the method may be terminal equipment, including but not limited to a smartphone, a tablet computer, a personal computer, a server, and the like. It should be noted that the number of terminal devices is not fixed; they may be deployed according to the actual situation. The implementation flow of the neural network training method provided by the first embodiment is detailed as follows:
In step S101, image noise is generated.
Optionally, the image noise may be randomly generated or preset, which is not limited herein. Image noise refers to unnecessary or redundant interference information present in image data, and includes, but is not limited to, Gaussian noise, Poisson noise, multiplicative noise, salt-and-pepper noise, and the like.
Illustratively, in the embodiment of the present invention, Gaussian noise obeying a Gaussian probability density function is randomly generated. The Gaussian noise is a matrix of size 1 × H × W × C, where H is the height of the noise matrix, W is its width, and C is its number of channels.
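The 1 × H × W × C Gaussian noise matrix can be generated as follows. This is an illustrative sketch (nested lists standing in for a tensor, with the leading 1 as the batch dimension), not the embodiment's implementation:

```python
import random

def generate_image_noise(height, width, channels, seed=None):
    """Randomly generate Gaussian image noise as a 1 x H x W x C
    matrix of values drawn from a standard normal distribution."""
    rng = random.Random(seed)
    return [[[[rng.gauss(0.0, 1.0) for _ in range(channels)]
              for _ in range(width)]
             for _ in range(height)]]
```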
Step S102, inputting image noise into a neural network to generate a corresponding noise generation image.
Optionally, the neural network is a convolutional neural network, which may include at least one convolutional layer. Further, a convolutional layer may include a convolution kernel; an image input to the convolutional layer is convolved with the kernel to remove redundant image information and output an image containing feature information. If the size of the convolution kernel is larger than 1 × 1, the convolutional layer outputs feature maps smaller than the input image; after processing by several convolutional layers, the image input to the convolutional neural network undergoes multi-stage shrinking, yielding several feature maps smaller than the input image. Further, in the embodiment of the present invention, inputting the image noise into the neural network to generate the corresponding noise-generated image may be a deconvolution operation, which is the reverse of the above process of generating feature images by removing redundant information from an input image. Optionally, the convolutional neural network may further include a pooling layer, an Inception module, a fully connected layer, and the like, which are not limited herein.
For example, as shown in FIG. 2, the convolutional neural network may include four convolutional layers whose weight parameter matrices have sizes H1 × W1 × N1 × C1, H2 × W2 × N2 × C2, H3 × W3 × N3 × C3, and H4 × W4 × N4 × C4, with strides S1, S2, S3, and S4 respectively. Here H is the height of a weight parameter matrix, W its width, N its number of output channels, and C its number of input channels. The number of input channels of each of the four convolutional layers equals the number of output channels of the previous layer; illustratively, in FIG. 2, C1 = C, C2 = N1, C3 = N2, and C4 = N3. The 1 × H × W × C Gaussian noise matrix randomly generated in step S101, after the convolution operations of the convolutional neural network, produces a noise-generated image of size 1 × H′ × W′ × C′, where H′ = H × S1 × S2 × S3 × S4, W′ = W × S1 × S2 × S3 × S4, and C′ = C4.
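The size relation just stated (each deconvolution layer upsamples by its stride, so H′ = H × S1 × S2 × S3 × S4, and likewise for W′) can be checked with a small helper; the function name and signature are hypothetical:

```python
def noise_generated_image_shape(h, w, strides, out_channels):
    """Size of the noise-generated image after the four deconvolution
    layers of FIG. 2: each layer scales the spatial size by its
    stride, and the channel count C' equals C4 of the last layer."""
    scale = 1
    for s in strides:
        scale *= s
    return (1, h * scale, w * scale, out_channels)
```

For instance, 4 × 4 noise with strides 2, 2, 2, 2 and 3 output channels yields a 1 × 64 × 64 × 3 image.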
Step S103: adjusting the weight parameters of the neural network according to the noise-generated image and the original image, and updating the neural network of step B according to the adjusted weight parameters.
Optionally, FIG. 3 shows the process of adjusting the weight parameters of the neural network according to the noise-generated image:
The method comprises the following steps: s301: and generating a loss function according to the noise generation image and the original image.
Alternatively, the loss function between the noise-generated image and the original image may use MSE (mean square error). Specifically, the formula of MSE is shown in formula (1):
Wherein H is the height of the noise generation image, W is the width of the noise generation image, C is the number of channels of the noise generation image, X 'represents the noise generation image, X represents the original image, and X'i,j,mValues, X, representing the ith row and jth column of the mth channel in the noise-generated imagei,j,mRepresenting the value of the ith row and the jth column of the mth channel in the original image.
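Formula (1) can be computed directly over nested H × W × C lists; this sketch is illustrative, not the embodiment's implementation:

```python
def mse(generated, original):
    """Formula (1): mean square error over all H x W x C values of
    the noise-generated image X' and the original image X, both
    given as nested H x W x C lists."""
    total = 0.0
    count = 0
    for row_g, row_x in zip(generated, original):
        for pix_g, pix_x in zip(row_g, row_x):
            for g, x in zip(pix_g, pix_x):
                total += (g - x) ** 2
                count += 1
    return total / count
```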
step S302: the gradient update is performed according to the loss function described above.
Alternatively, the formula for the gradient update is shown in equation (2):
W′=W-αΔW (2)
where W represents the weight parameters of the neural network, W′ represents the updated weight parameters, α is a preset learning rate, and ΔW is the calculated gradient.
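Equation (2) applied element-wise can be sketched as follows; this is the plain gradient step, of which the Adam optimizer used in the embodiment is an adaptive variant:

```python
def gradient_step(weights, gradients, learning_rate):
    """Equation (2): W' = W - alpha * dW, applied per parameter."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]
```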
Optionally, an existing adaptive gradient optimizer can be used to perform the gradient update; in particular, an Adam optimizer may be used. Further, the MSE calculation result, the weight parameters of the neural network, and the preset learning rate are input into the Adam optimizer to obtain the updated weight parameters.
Step S303: and adjusting the weight parameters of the neural network through the gradient update.
Optionally, the updated weight parameters obtained by the calculation replace the original weight parameters in the neural network to form a new neural network. The new weight parameter of the neural network is the updated weight parameter calculated in step S302.
Step S104: repeatedly executing steps B to C until the neural network meets the preset condition.
Optionally, the repeatedly executing of steps B to C until the neural network satisfies a preset condition includes:
repeatedly executing steps B to C until a performance index of the image generated by the neural network reaches a preset threshold,
or
repeatedly executing steps B to C until the number of executions reaches a preset number of times.
Further, steps B to C may be repeatedly executed a preset number of times, where the preset number is set manually in the neural network training program or in the terminal equipment loaded with the program.
Further, steps B to C may be repeatedly executed until the performance index of the image generated by the neural network reaches a preset threshold. The performance indexes of the generated image include the peak signal-to-noise ratio (PSNR) and the bits per pixel (BPP).
Specifically, a test atlas is put into the neural network after the weight parameters are updated in order to test the performance indexes of the network, namely the PSNR and the BPP. Optionally, at a fixed BPP, it is determined whether the PSNR reaches a preset threshold; a higher PSNR represents less information lost in picture compression. Optionally, the test atlas may be the set of 24 Kodak standard test images, which is not limited herein.
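The PSNR performance index can be computed from an MSE value as follows. The peak value of 255 assumes 8-bit images, which the text does not state explicitly:

```python
import math

def psnr(mse_value, peak=255.0):
    """Peak signal-to-noise ratio in decibels from an MSE value; at a
    fixed BPP, a higher PSNR means less information lost."""
    if mse_value == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak * peak / mse_value)
```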
In this embodiment, the weight parameters of the neural network are adjusted by means of the generated noise until the image compression effect of the network reaches the expected index, and image compression is performed with the neural network whose weight parameters have been adjusted, thereby improving the image compression effect and solving the problem of slow decoding in image compression algorithms.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Example two
FIG. 4 is a schematic diagram of a neural network training device provided in an embodiment of the present invention; for convenience of description, only the parts related to the embodiment are shown. The neural network training device 4 includes: an image noise generation module 41, a convolution module 42, a neural network update module 43, and a loop module 44.
The image noise generating module 41 is configured to generate image noise;
a convolution module 42, configured to input the image noise to a neural network to generate a corresponding noise generation image;
A neural network updating module 43, configured to adjust the weight parameter of the neural network according to the noise-generated image and the original image, and update the neural network in step B according to the adjusted weight parameter;
And a loop module 44, configured to repeatedly execute steps B to C until the neural network satisfies a preset condition.
Specifically, the repeatedly executing of steps B to C until the neural network satisfies a preset condition includes:
repeatedly executing steps B to C until a performance index of the image generated by the neural network reaches a preset threshold,
or
repeatedly executing steps B to C until the number of executions reaches a preset number of times.
Optionally, the convolution module 42 includes:
a convolution operation unit, configured to perform a convolution operation on the image noise in the neural network to generate the corresponding noise-generated image.
Optionally, the neural network updating module 43 includes:
A loss function unit for generating a loss function according to the noise generation image and the original image;
a gradient updating unit for performing gradient updating according to the loss function;
a parameter adjusting unit, configured to adjust the weight parameters of the neural network through the gradient update.
Optionally, the neural network training device 4 further includes:
an image compression module, configured to: extract the weight parameters of the neural network and take the weight parameters as a characteristic image;
entropy code the characteristic image to obtain coded data;
entropy decode the coded data to generate reconstruction weight parameters;
update the neural network according to the reconstruction weight parameters;
and input the image noise into the neural network updated with the reconstruction weight parameters to generate a reconstructed image.
In this embodiment, the image noise generation module 41 generates noise, the convolution module 42 produces the noise-generated image, and the neural network update module 43 adjusts the weight parameters of the neural network through the noise-generated image until the image compression effect of the network reaches the expected index. Image compression is then performed with the neural network whose weight parameters have been adjusted, thereby improving the image compression effect and solving the problem of slow decoding in image compression algorithms.
Example three
FIG. 5 is a schematic diagram of neural network training terminal equipment according to an embodiment of the present invention. As shown in FIG. 5, the neural network training terminal device 5 of this embodiment includes: a processor 50, a memory 51, and a computer program 52, such as a neural network training program, stored in the memory 51 and operable on the processor 50. The processor 50, when executing the computer program 52, implements the steps in the above embodiments of the neural network training method, such as steps S101 to S104 shown in FIG. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules in the above device embodiments, such as the functions of the modules 41 to 44 shown in FIG. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which describe the execution process of the computer program 52 in the neural network training terminal device 5. For example, the computer program 52 may be divided into an image noise generation module, a convolution module, a neural network update module, and a loop module, whose specific functions are as follows:
an image noise generation module, configured to generate image noise;
a convolution module, configured to input the image noise into a neural network to generate a corresponding noise-generated image;
a neural network updating module, configured to adjust the weight parameters of the neural network according to the noise-generated image and the original image, and update the neural network of step B according to the adjusted weight parameters;
a loop module, configured to repeatedly execute steps B to C until the neural network satisfies a preset condition.
The neural network training terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing equipment. The terminal device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that FIG. 5 is merely an example of the neural network training terminal device 5 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine some components, or use different components. For example, the terminal device may also include input and output devices, network access devices, buses, and the like.
The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the neural network training terminal device 5, such as a hard disk or memory of the device. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
Therefore, the method and the device generate random noise, compute from the noise-generated image (produced by passing the random noise through the neural network) and the original image an update to the weight parameters of the neural network, and repeatedly update the weight parameters until the neural network reaches its optimal state. The effect of the neural network on image compression is thereby improved, as is the decoding speed of the compression algorithm.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
in the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A neural network training method, comprising:
Step A: generating image noise;
Step B: inputting the image noise into a neural network to generate a corresponding noise generation image;
Step C: adjusting weight parameters of the neural network according to the noise generation image and an original image, and updating the neural network of step B according to the adjusted weight parameters;
Step D: repeatedly executing steps B to C until the neural network satisfies a preset condition.
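Taken together, steps A to D describe fitting a network so that a fixed noise input reproduces an original image. A minimal runnable sketch of the loop, under illustrative assumptions not taken from the claims (a one-parameter linear "network" standing in for the convolutional one, MSE as the loss, plain gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step A: generate image noise (an illustrative 8x8 single-channel "image")
noise = rng.standard_normal((8, 8))
original = 0.5 * noise + 0.1           # stand-in for the original image

# A one-parameter "network": generated = w * noise + b
w, b = 0.0, 0.0
lr = 0.1                               # illustrative learning rate
max_steps = 500                        # preset number of repetitions (claim 4)
threshold = 1e-6                       # preset performance threshold (claim 4)

for step in range(max_steps):
    generated = w * noise + b          # Step B: noise -> noise generation image
    err = generated - original
    loss = np.mean(err ** 2)           # Step C: compare with the original image
    if loss < threshold:               # Step D: stop once the condition is met
        break
    # Step C continued: gradient update of the weight parameters (claim 3)
    w -= lr * np.mean(2 * err * noise)
    b -= lr * np.mean(2 * err)

print(round(w, 2), round(b, 2))        # converges near 0.5 and 0.1
```

Swapping the linear map for a convolutional network and MSE for an unspecified loss recovers the structure of claims 1 to 3.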
2. The neural network training method of claim 1, wherein inputting the image noise into a neural network to generate a corresponding noise generation image comprises:
performing a convolution operation on the image noise in the neural network to generate the corresponding noise generation image.
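The convolution step of claim 2 can be illustrated directly: slide a kernel over the noise and sum the elementwise products (a NumPy sketch; the 3x3 averaging kernel, the 6x6 input, and the "valid" padding are illustrative, and the kernel is not flipped, following deep-learning convention):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal((6, 6))     # image noise (illustrative size)
kernel = np.full((3, 3), 1.0 / 9.0)     # illustrative 3x3 averaging kernel

# "Valid" 2D convolution: each output pixel is the kernel-weighted sum
# of the 3x3 patch of noise under it.
kh, kw = kernel.shape
out = np.empty((noise.shape[0] - kh + 1, noise.shape[1] - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(noise[i:i + kh, j:j + kw] * kernel)

print(out.shape)  # (4, 4): smaller than the 6x6 input under "valid" padding
```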
3. The neural network training method of claim 1, wherein adjusting the weight parameters of the neural network according to the noise generation image and the original image comprises:
generating a loss function according to the noise generation image and the original image;
performing a gradient update according to the loss function;
adjusting the weight parameters of the neural network through the gradient update.
4. The neural network training method of claim 1, wherein repeatedly executing steps B to C until the neural network satisfies a preset condition comprises:
repeating steps B to C until a performance index of an image generated by the neural network reaches a preset threshold;
or
repeating steps B to C until the number of repetitions reaches a preset number of times.
5. The neural network training method of claim 1, further comprising, after step D:
extracting the weight parameters of the neural network and using the weight parameters as a characteristic image;
entropy coding the characteristic image to obtain coded data;
entropy decoding the coded data to generate reconstruction weight parameters;
initializing the neural network according to the reconstruction weight parameters;
and inputting the image noise into the neural network updated with the reconstruction weight parameters to generate a reconstructed image.
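Claim 5 round-trips the trained weights through an entropy coder before reinitializing the network. A minimal sketch of the coding half, with illustrative choices the claim does not make: int8 quantization of the weights, and zlib's DEFLATE (whose Huffman stage acts as the stand-in entropy coder):

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(16).astype(np.float32)  # stand-in trained weights

# Treat the weight parameters as a "characteristic image" and quantize to int8
scale = 127 / np.max(np.abs(weights))
quantized = np.round(weights * scale).astype(np.int8)

# Entropy-code the quantized weights to obtain the coded data
coded = zlib.compress(quantized.tobytes())

# Entropy-decode and rescale to obtain the reconstruction weight parameters
decoded = np.frombuffer(zlib.decompress(coded), dtype=np.int8)
reconstructed = decoded.astype(np.float32) / scale

# The round trip is exact at the int8 level; the only loss is quantization
print(np.array_equal(decoded, quantized))  # True
```

A production codec would typically use an arithmetic or range coder with a learned probability model; the reconstruction weight parameters then reinitialize the network before the noise is fed through again to produce the reconstructed image.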
6. A neural network training device, comprising:
an image noise generation module, configured to generate image noise;
a convolution module, configured to input the image noise into a neural network to generate a corresponding noise generation image;
a neural network updating module, configured to adjust the weight parameters of the neural network according to the noise generation image and the original image, and to update the neural network of step B according to the adjusted weight parameters;
and a loop module, configured to repeatedly execute steps B to C until the neural network satisfies the preset condition.
7. The neural network training device of claim 6, wherein the convolution module comprises:
a convolution operation unit, configured to perform a convolution operation on the image noise in the neural network to generate the corresponding noise generation image.
8. The neural network training device of claim 6, wherein the neural network updating module comprises:
a loss function unit, configured to generate a loss function according to the noise generation image and the original image;
a gradient updating unit, configured to perform a gradient update according to the loss function;
and a parameter adjusting unit, configured to adjust the weight parameters of the neural network through the gradient update.
9. A neural network training terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201910727890.9A 2019-08-08 2019-08-08 neural network training method and device and terminal equipment Pending CN110569961A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910727890.9A CN110569961A (en) 2019-08-08 2019-08-08 neural network training method and device and terminal equipment
PCT/CN2019/114946 WO2021022685A1 (en) 2019-08-08 2019-11-01 Neural network training method and apparatus, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910727890.9A CN110569961A (en) 2019-08-08 2019-08-08 neural network training method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN110569961A true CN110569961A (en) 2019-12-13

Family

ID=68774786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910727890.9A Pending CN110569961A (en) 2019-08-08 2019-08-08 neural network training method and device and terminal equipment

Country Status (2)

Country Link
CN (1) CN110569961A (en)
WO (1) WO2021022685A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111556316A (en) * 2020-04-08 2020-08-18 北京航空航天大学杭州创新研究院 Rapid block segmentation coding method and device based on deep neural network acceleration
CN111951155A (en) * 2020-08-14 2020-11-17 上海龙旗科技股份有限公司 Picture effect processing method and equipment

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2698414C1 (en) 2018-09-21 2019-08-26 Владимир Александрович Свириденко Method and device for compressing video information for transmission over communication channels with varying throughput capacity and storage in data storage systems using machine learning and neural networks
US11356305B2 (en) * 2020-02-24 2022-06-07 Qualcomm Incorporated Method to convey the TX waveform distortion to the receiver
CN113052301B (en) * 2021-03-29 2024-05-28 商汤集团有限公司 Neural network generation method and device, electronic equipment and storage medium
CN113570056A (en) * 2021-06-23 2021-10-29 上海交通大学 Method, system, medium, and electronic device for optimizing fault-tolerant neural network structure
CN113657576A (en) * 2021-07-21 2021-11-16 浙江大华技术股份有限公司 Convolutional neural network model lightweight method and device, and image identification method
US11711449B2 (en) 2021-12-07 2023-07-25 Capital One Services, Llc Compressing websites for fast data transfers
CN115761448B (en) * 2022-12-02 2024-03-01 美的集团(上海)有限公司 Training method, training device and readable storage medium for neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108604369A (en) * 2016-07-27 2018-09-28 华为技术有限公司 A kind of method, apparatus, equipment and the convolutional neural networks of removal picture noise
CN108985464A (en) * 2018-07-17 2018-12-11 重庆科技学院 The continuous feature generation method of face for generating confrontation network is maximized based on information
CN109872288A (en) * 2019-01-31 2019-06-11 深圳大学 For the network training method of image denoising, device, terminal and storage medium
CN109919864A (en) * 2019-02-20 2019-06-21 重庆邮电大学 A kind of compression of images cognitive method based on sparse denoising autoencoder network
CN110062246A (en) * 2018-01-19 2019-07-26 杭州海康威视数字技术股份有限公司 The method and apparatus that video requency frame data is handled

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726525B2 (en) * 2017-09-26 2020-07-28 Samsung Electronics Co., Ltd. Image denoising neural network architecture and method of training the same
CN108062780B (en) * 2017-12-29 2019-08-09 百度在线网络技术(北京)有限公司 Method for compressing image and device
CN108090521B (en) * 2018-01-12 2022-04-08 广州视声智能科技股份有限公司 Image fusion method and discriminator of generative confrontation network model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Qi et al.: "Design and Implementation of Image Denoising Based on Autoencoder", Journal of Xinjiang Normal University (Natural Science Edition) *


Also Published As

Publication number Publication date
WO2021022685A1 (en) 2021-02-11

Similar Documents

Publication Publication Date Title
CN110569961A (en) neural network training method and device and terminal equipment
CN109671026B (en) Gray level image noise reduction method based on void convolution and automatic coding and decoding neural network
CN109410123B (en) Deep learning-based mosaic removing method and device and electronic equipment
US11430090B2 (en) Method and apparatus for removing compressed Poisson noise of image based on deep neural network
CN110047044B (en) Image processing model construction method and device and terminal equipment
CN110838085B (en) Super-resolution reconstruction method and device for image and electronic equipment
CN110830808A (en) Video frame reconstruction method and device and terminal equipment
CN111784699A (en) Method and device for carrying out target segmentation on three-dimensional point cloud data and terminal equipment
CN110650339A (en) Video compression method and device and terminal equipment
CN113222856A (en) Inverse halftone image processing method, terminal equipment and readable storage medium
Chen et al. Enhanced separable convolution network for lightweight jpeg compression artifacts reduction
CN114612316A (en) Method and device for removing rain from nuclear prediction network image
DE102017117381A1 (en) Accelerator for sparse folding neural networks
Shin et al. Expanded adaptive scaling normalization for end to end image compression
CN114170082A (en) Video playing method, image processing method, model training method, device and electronic equipment
CN113744159A (en) Remote sensing image defogging method and device and electronic equipment
CN110677671A (en) Image compression method and device and terminal equipment
CN111083482A (en) Video compression network training method and device and terminal equipment
CN108717687B (en) Image enhancement method based on conversion compression and terminal equipment
CN110782415A (en) Image completion method and device and terminal equipment
Qi et al. Subband adaptive image deblocking using wavelet based convolutional neural networks
CN110572652B (en) Static image processing method and device
CN114119377B (en) Image processing method and device
CN110913220A (en) Video frame coding method and device and terminal equipment
CN108648155B (en) Image enhancement method based on compressed domain and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191213