CN113596471B - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113596471B
CN113596471B (Application CN202110844476.3A)
Authority
CN
China
Prior art keywords
image
training
image processing
round
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110844476.3A
Other languages
Chinese (zh)
Other versions
CN113596471A (en
Inventor
王岩
孙保成
何岱岚
顾檬
秦红伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110844476.3A priority Critical patent/CN113596471B/en
Publication of CN113596471A publication Critical patent/CN113596471A/en
Application granted granted Critical
Publication of CN113596471B publication Critical patent/CN113596471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: compressing an image to be processed to obtain a first compression result of the image to be processed, and reconstructing the first compression result to obtain a reconstructed image of the image to be processed. The method is implemented by an image processing network obtained through a training method in which a reward function automatically adjusts the loss function: the loss function of the image processing network comprises a plurality of sub-loss functions, and the reward function adjusts the weights of the plurality of sub-loss functions during training. Embodiments of the present disclosure can reduce the storage and transmission cost of images.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computer vision, and in particular relates to an image processing method and device, electronic equipment and a storage medium.
Background
With the development of information technology, image acquisition and image sharing have become ubiquitous. When image data are excessively large, their storage, transmission, and processing become difficult; efficient compression of image data is therefore particularly important.
Image compression techniques based on deep learning have made great progress in recent years, and their compression performance has surpassed the intra-frame coding of H.265 and even H.266. Training an image compression network model requires specifying a loss function that describes image-quality fidelity, and this loss function is typically kept unchanged throughout training. However, a single specified loss function captures only part of human subjective perception, so different types of visual distortion can appear at high compression rates.
Disclosure of Invention
The disclosure provides an image processing method and device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided an image processing method including: compressing an image to be processed to obtain a first compression result of the image to be processed; and reconstructing the first compression result to obtain a reconstructed image of the image to be processed. The image processing method is implemented by an image processing network obtained through a training method in which a reward function automatically adjusts the loss function, the loss function of the image processing network comprises a plurality of sub-loss functions, and the reward function is used to adjust the weights of the plurality of sub-loss functions during training.
In this way, the weights of the plurality of sub-loss functions can be adjusted by the reward function, so that the reward function automatically tunes the loss function. This replaces complex manual loss-function design and hyper-parameter traversal, improves training effectiveness and efficiency, and allows the trained image processing network to reduce the storage and transmission cost of images while ensuring image quality.
In one possible implementation, the reward function includes a first function and a plurality of second functions, the first function being used to indicate a deviation of an encoding rate of the image processing network; the second function is used to indicate a deviation of an image quality indicator of the image processing network.
In this way, the reward function can be used to indicate the deviation of the coding rate of the image processing network and the deviation of a plurality of image quality indexes, in this case, the trade-off between the coding rate and each image quality index can be effectively controlled by the online reconciliation of the loss function by the reward function, so that a great deal of cost of manual attempt is saved, and the storage and transmission cost of the image is reduced under the condition of ensuring the image quality.
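The reward structure described above can be sketched as follows. This is a hypothetical illustration only: the function name, the negative-absolute-deviation form, and the target values are assumptions for exposition, not the patent's actual formulas.

```python
# Hypothetical sketch: a first function penalizing the coding-rate deviation,
# plus one second function per image-quality indicator.

def reward(rate, rate_target, metrics, metric_targets):
    """rate / rate_target: measured and target bits per pixel.
    metrics / metric_targets: measured and target quality indicators,
    e.g. {"psnr": 32.1, "ms_ssim": 0.97}."""
    # First function: deviation of the coding rate.
    r = -abs(rate - rate_target)
    # Second functions: deviation of each image-quality indicator.
    for name, value in metrics.items():
        r -= abs(value - metric_targets[name])
    return r
```

A network that matches both its rate target and its quality targets receives the maximum reward of zero; any deviation in either direction lowers the reward.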
In one possible implementation manner, the compressing the image to be processed to obtain a first compression result of the image to be processed includes: encoding the image to be processed to obtain a first image characteristic of the image to be processed; quantizing the first image features to obtain quantized data and quantized distribution information of the image to be processed; and carrying out entropy coding on the quantized data to obtain a compressed code stream of the image to be processed, wherein the first compression result comprises the compressed code stream and the quantized distribution information.
In this way, the image to be processed is compressed, and the obtained first compression result of the image to be processed has a better compression effect.
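The encode–quantize–entropy-code path described above can be illustrated with a toy stand-in. The 2x2-average "encoder" and the raw byte packing below are placeholders for the learned encoder and the real entropy coder; the names and the min/max "distribution information" are assumptions for illustration.

```python
import numpy as np

def compress(image):
    h, w = image.shape
    # "Encode": downsample by 2x2 averaging to get a first image feature.
    feature = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # "Quantize": round to integers; record min/max as toy distribution info.
    q = np.round(feature).astype(np.int32)
    dist_info = (int(q.min()), int(q.max()))
    # "Entropy code": serialize the quantized data (no real entropy model here).
    bitstream = q.tobytes()
    return bitstream, dist_info, q.shape
```

The returned compressed code stream plus the quantization distribution information together form the first compression result.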
In one possible implementation manner, the reconstructing the first compression result to obtain a reconstructed image of the image to be processed includes: entropy decoding is carried out on the first compression result to obtain a second image characteristic; and decoding the second image features to obtain a reconstructed image of the image to be processed.
In this way, performing reconstruction processing on the first compression result allows the decompressed reconstructed image to be distortion-free or to have reduced distortion.
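The reconstruction path described above (entropy decode, then decode) can be sketched with the same toy conventions. Nearest-neighbour 2x upsampling stands in for the learned decoder; all names are illustrative assumptions.

```python
import numpy as np

def decompress(bitstream, shape):
    # "Entropy decode": recover the quantized second image feature
    # from the byte stream.
    feature = np.frombuffer(bitstream, dtype=np.int32).reshape(shape)
    # "Decode": upsample 2x in each dimension to rebuild the image.
    return np.repeat(np.repeat(feature.astype(np.float64), 2, axis=0),
                     2, axis=1)
```

Feeding in a serialized 2x2 quantized feature yields a 4x4 reconstructed image whose pixel blocks carry the decoded feature values.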
In one possible implementation, the method further includes: training the image processing network according to the training set and the verification set to obtain a trained image processing network; the training of the image processing network according to the training set and the verification set to obtain a trained image processing network comprises the following steps: for the t-th training, respectively inputting the first sample images in the training set into B image processing networks for (t-1) th training to be processed, so as to obtain B first processing results of the first sample images, wherein t and B are integers larger than 1; respectively training the B image processing networks trained in the (t-1) th round according to the first sample image, the B first processing results and the B loss functions trained in the t-th round to obtain B image processing networks trained in the t-th round; and under the condition that the B image processing networks trained in the t th round meet training conditions, determining the trained image processing networks according to the B image processing networks trained in the t th round.
By the method, the image processing network can be trained according to the training set and the verification set, the trained image processing network is obtained, efficient training of the image processing network is facilitated, and the trained image processing network has better performance.
In one possible implementation, training the image processing network according to the training set and the validation set to obtain a trained image processing network further includes: for the t-th training round, respectively inputting the second sample images in the validation set into the B image processing networks of the t-th training round for processing, to obtain B second processing results of the second sample images; respectively determining, according to the second sample image, the B second processing results, and the reward function, B adaptive strategies of the (t+1)-th training round corresponding to the B image processing networks of the t-th training round; and respectively determining the weights of the plurality of sub-loss functions of the loss function according to the B adaptive strategies of the (t+1)-th training round, to obtain B loss functions of the (t+1)-th training round.
In this way, the weights of a plurality of sub-loss functions of the loss function are adaptively adjusted according to the reward function in the training process, so that the reward function can be used for automatically blending the loss function, complex manual loss function design and super-parameter traversal are replaced, and the training effect and efficiency are improved.
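The adaptation step above can be sketched as a simple reweighting rule: the reward computed on the validation set yields a per-sub-loss signal, from which normalized weights for the next round's loss function are derived. The exponential update below is an assumption for illustration, not the patent's actual adaptive strategy.

```python
import math

def adapt_weights(weights, sub_rewards, lr=1.0):
    """weights / sub_rewards: dicts keyed by sub-loss name.
    Sub-losses with lower reward (larger deviation) gain weight."""
    scores = {k: weights[k] * math.exp(-lr * sub_rewards[k]) for k in weights}
    total = sum(scores.values())
    # Normalize so the weights of all sub-loss functions sum to one.
    return {k: v / total for k, v in scores.items()}
```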
In one possible implementation, training the B image processing networks of the (t-1)-th training round according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, to obtain the B image processing networks of the t-th training round, includes: when the training round number t is a multiple of a preset interval N, training the B image processing networks of the (t-1)-th training round according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, to obtain B intermediate-state image processing networks of the t-th training round, where N is an integer greater than 1; respectively inputting the second sample images in the validation set into the B intermediate-state image processing networks of the t-th training round for processing, to obtain B third processing results of the second sample images; respectively determining the reward results of the t-th training round of the B intermediate-state image processing networks according to the second sample image, the B third processing results, and the reward function; determining a target image processing network of the t-th training round from the B intermediate-state image processing networks according to the reward results of the t-th training round; and determining the B image processing networks of the t-th training round according to the target image processing network of the t-th training round.
In this way, the training process is synchronized once every N iteration rounds, with the selected target image processing network carried forward. This reduces situations in which an overly large gap between new and old strategies hinders learning, and can improve training efficiency and accuracy.
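The periodic selection above can be sketched as follows: when the round number t is a multiple of the preset interval N, the B intermediate-state networks are scored with the reward function, the best one becomes the target network, and training continues from B copies of it. The function name and the list-of-networks representation are assumptions.

```python
import copy

def select_and_clone(networks, rewards, t, N):
    """networks: B candidate networks; rewards: their validation rewards."""
    if t % N != 0:
        return networks                       # no synchronization this round
    # Pick the candidate with the highest reward as the target network.
    best = max(range(len(networks)), key=lambda i: rewards[i])
    # Continue the next rounds from B copies of the target network.
    return [copy.deepcopy(networks[best]) for _ in networks]
```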
In one possible implementation, the training condition includes that the number of training rounds reaches a total number of training rounds T, where T is an integer and 1 < t ≤ T, and when the B image processing networks of the t-th training round meet the training condition, determining the trained image processing network according to the B image processing networks of the t-th training round includes: when t = T, respectively inputting the second sample image in the validation set into the B image processing networks of the T-th training round to obtain B fourth processing results of the second sample image; respectively determining the reward results of the T-th training round of the B image processing networks according to the second sample image, the B fourth processing results, and the reward function; determining a target image processing network of the T-th training round from the B image processing networks according to the reward results of the T-th training round; and determining the target image processing network of the T-th training round as the trained image processing network.
By the method, the total training round number T can be preset, a trained image processing network can be obtained in a limited round, and the training efficiency of the image processing network is improved.
In one possible implementation manner, the plurality of sub-loss functions include a first sub-loss function, a second sub-loss function and a third sub-loss function, where the first sub-loss function is used to indicate an error of a coding rate, the second sub-loss function is used to indicate an error of a peak signal-to-noise ratio PSNR, and the third sub-loss function is used to indicate an error of a structural similarity SSIM, and the image quality index includes the peak signal-to-noise ratio PSNR and the structural similarity SSIM.
In this way, the weight of a plurality of sub-losses in the loss function is adjusted, so that the influence of the coding rate and each image quality evaluation index on the network training can be effectively balanced.
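The three-term loss described above can be written as a weighted sum of a rate term and PSNR- and SSIM-based distortion terms, where the weights are what the reward function adjusts between rounds. The term values are assumed to be precomputed per-batch scalars; names are illustrative.

```python
def total_loss(rate_loss, psnr_loss, ssim_loss, w_rate, w_psnr, w_ssim):
    """Weighted combination of the first, second, and third sub-losses."""
    return w_rate * rate_loss + w_psnr * psnr_loss + w_ssim * ssim_loss
```

Raising w_rate steers training toward lower coding rates; raising w_psnr or w_ssim steers it toward the corresponding quality indicator.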
In one possible implementation, the first compression result of the image to be processed is used for transmission and/or storage.
In this way, the first compression result is used for transmission and/or storage, and storage or transmission cost of the image can be reduced.
According to an aspect of the present disclosure, there is provided an image processing apparatus including: the compression module is used for carrying out compression processing on the image to be processed to obtain a first compression result of the image to be processed; the reconstruction module is used for carrying out reconstruction processing on the first compression result to obtain a reconstructed image of the image to be processed, wherein the image processing device is realized through an image processing network, the image processing network is obtained through a training method for automatically adjusting a loss function through a reward function, the loss function of the image processing network comprises a plurality of sub-loss functions, and the reward function is used for adjusting weights of the plurality of sub-loss functions in the training process.
In this way, the weights of the plurality of sub-loss functions can be adjusted by the reward function, so that the reward function automatically tunes the loss function. This replaces complex manual loss-function design and hyper-parameter traversal, improves training effectiveness and efficiency, and allows the trained image processing network to reduce the storage and transmission cost of images while ensuring image quality.
In one possible implementation, the reward function includes a first function and a plurality of second functions, the first function being used to indicate a deviation of an encoding rate of the image processing network; the second function is used to indicate a deviation of an image quality indicator of the image processing network.
In this way, the reward function can be used to indicate the deviation of the coding rate of the image processing network and the deviation of a plurality of image quality indexes, in this case, the trade-off between the coding rate and each image quality index can be effectively controlled by the online reconciliation of the loss function by the reward function, so that a great deal of cost of manual attempt is saved, and the storage and transmission cost of the image is reduced under the condition of ensuring the image quality.
In one possible implementation, the compression module is configured to: encoding the image to be processed to obtain a first image characteristic of the image to be processed; quantizing the first image features to obtain quantized data and quantized distribution information of the image to be processed; and carrying out entropy coding on the quantized data to obtain a compressed code stream of the image to be processed, wherein the first compression result comprises the compressed code stream and the quantized distribution information.
In this way, the compression module compresses the image to be processed, and the obtained first compression result of the image to be processed has a better compression effect.
In one possible implementation, the reconstruction module is configured to: entropy decoding is carried out on the first compression result to obtain a second image characteristic; and decoding the second image features to obtain a reconstructed image of the image to be processed.
In this way, the reconstruction module performs reconstruction processing on the first compression result, so that the decompressed reconstructed image is distortion-free or has reduced distortion.
In one possible implementation, the apparatus further includes a training module: the image processing network training device is used for training the image processing network according to the training set and the verification set to obtain a trained image processing network; wherein, training module is used for: for the t-th training, respectively inputting the first sample images in the training set into B image processing networks for (t-1) th training to be processed, so as to obtain B first processing results of the first sample images, wherein t and B are integers larger than 1; respectively training the B image processing networks trained in the (t-1) th round according to the first sample image, the B first processing results and the B loss functions trained in the t-th round to obtain B image processing networks trained in the t-th round; and under the condition that the B image processing networks trained in the t th round meet training conditions, determining the trained image processing networks according to the B image processing networks trained in the t th round.
By the method, the image processing network can be trained according to the training set and the verification set, the trained image processing network is obtained, efficient training of the image processing network is facilitated, and the trained image processing network has better performance.
In one possible implementation, the training module is further configured to: for the t-th training round, respectively input the second sample images in the validation set into the B image processing networks of the t-th training round for processing, to obtain B second processing results of the second sample images; respectively determine, according to the second sample image, the B second processing results, and the reward function, B adaptive strategies of the (t+1)-th training round corresponding to the B image processing networks of the t-th training round; and respectively determine the weights of the plurality of sub-loss functions of the loss function according to the B adaptive strategies of the (t+1)-th training round, to obtain B loss functions of the (t+1)-th training round.
In this way, the weights of a plurality of sub-loss functions of the loss function are adaptively adjusted according to the reward function in the training process, so that the reward function can be used for automatically blending the loss function, complex manual loss function design and super-parameter traversal are replaced, and the training effect and efficiency are improved.
In one possible implementation, training the B image processing networks of the (t-1)-th training round according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, to obtain the B image processing networks of the t-th training round, includes: when the training round number t is a multiple of a preset interval N, training the B image processing networks of the (t-1)-th training round according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, to obtain B intermediate-state image processing networks of the t-th training round, where N is an integer greater than 1; respectively inputting the second sample images in the validation set into the B intermediate-state image processing networks of the t-th training round for processing, to obtain B third processing results of the second sample images; respectively determining the reward results of the t-th training round of the B intermediate-state image processing networks according to the second sample image, the B third processing results, and the reward function; determining a target image processing network of the t-th training round from the B intermediate-state image processing networks according to the reward results of the t-th training round; and determining the B image processing networks of the t-th training round according to the target image processing network of the t-th training round.
In this way, the training process is synchronized once every N iteration rounds, with the selected target image processing network carried forward. This reduces situations in which an overly large gap between new and old strategies hinders learning, and can improve training efficiency and accuracy.
In one possible implementation, the training condition includes that the number of training rounds reaches a total number of training rounds T, where T is an integer and 1 < t ≤ T, and when the B image processing networks of the t-th training round meet the training condition, determining the trained image processing network according to the B image processing networks of the t-th training round includes: when t = T, respectively inputting the second sample image in the validation set into the B image processing networks of the T-th training round to obtain B fourth processing results of the second sample image; respectively determining the reward results of the T-th training round of the B image processing networks according to the second sample image, the B fourth processing results, and the reward function; determining a target image processing network of the T-th training round from the B image processing networks according to the reward results of the T-th training round; and determining the target image processing network of the T-th training round as the trained image processing network.
By the method, the total training round number T can be preset, a trained image processing network can be obtained in a limited round, and the training efficiency of the image processing network is improved.
In one possible implementation manner, the plurality of sub-loss functions include a first sub-loss function, a second sub-loss function and a third sub-loss function, where the first sub-loss function is used to indicate an error of a coding rate, the second sub-loss function is used to indicate an error of a peak signal-to-noise ratio PSNR, and the third sub-loss function is used to indicate an error of a structural similarity SSIM, and the image quality index includes the peak signal-to-noise ratio PSNR and the structural similarity SSIM.
In this way, the weight of a plurality of sub-losses in the loss function is adjusted, so that the influence of the coding rate and each image quality evaluation index on the network training can be effectively balanced.
In one possible implementation, the first compression result of the image to be processed is used for transmission and/or storage.
In this way, the first compression result is used for transmission and/or storage, and storage or transmission cost of the image can be reduced.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, the weights of the plurality of sub-loss functions can be adjusted by the reward function, so that the reward function automatically tunes the loss function. This replaces complex manual loss-function design and hyper-parameter traversal, improves training effectiveness and efficiency, and allows the trained image processing network to reduce the storage and transmission cost of images while ensuring image quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a reward function according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of an image processing network according to an embodiment of the present disclosure.
Fig. 4 illustrates a schematic diagram of an image processing effect of an image processing method according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Fig. 7 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, training for deep-learning image compression requires specifying in advance a loss function that describes image-quality fidelity, for example a peak signal-to-noise ratio (PSNR) loss function or a multi-scale structural similarity (Multiscale Structural Similarity, MS-SSIM) loss function. The MS-SSIM loss function can retain high-frequency information of the image (i.e., edge and detail information) but easily causes brightness changes and color deviations; in contrast, the PSNR loss function better preserves the brightness and color of the image. Each of the PSNR and MS-SSIM loss functions therefore represents only part of human subjective perception, making the optimization target one-sided, so different types of visual distortion can appear at high compression rates.
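For reference, PSNR as discussed above can be computed from the mean squared error between the original and reconstructed images (shown here for 8-bit images with a peak value of 255; higher means less distortion). MS-SSIM is omitted for brevity.

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    if mse == 0:
        return float("inf")     # identical images: no noise
    return 10.0 * np.log10(peak ** 2 / mse)
```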
To alleviate this problem, the PSNR loss function and the MS-SSIM loss function may be manually combined. However, a manually designed hybrid loss function is complex, requires expert experience, makes it difficult to balance the optimization of the PSNR loss and the MS-SSIM loss, and cannot guarantee a final improvement in subjective and objective quality.
In the related art, although there are training methods such as AutoML for high-level visual tasks (for example, image segmentation, object detection, face recognition, and identity authentication) in which the hyper-parameters of the loss function can be adjusted automatically during training, these high-level tasks differ greatly from the image compression task, which is a typical low-level visual task. In high-level visual tasks, the performance degradation caused by the mismatch between the loss function and the evaluation metric leaves relatively large room for improvement through loss function search or adaptation. In low-level visual tasks such as image compression, the popular evaluation metrics themselves (including, for example, PSNR and MS-SSIM) can be used directly as loss functions, so it is much more difficult to achieve a performance improvement merely by re-weighting these two metrics. Furthermore, in low-level visual tasks, no single evaluation metric or known combination of metrics represents human visual perception well enough to serve as the final optimization objective, which introduces additional complexity.
In view of this, the present disclosure proposes an image processing method that can adjust the weights of multiple sub-loss functions of a loss function through a reward function, thereby realizing automatic reconciliation of the loss function by the reward function, replacing complex manual loss function design and hyper-parameter traversal, and improving training effect and efficiency. In addition, the reward function can be used to indicate the deviation of the coding rate of the image processing network and the deviations of multiple image quality indexes (including, for example, the PSNR index and the MS-SSIM index). In this case, online reconciliation of the loss function through the reward function can effectively control the trade-off between the coding rate and each image quality index, saving a great deal of manual trial cost and reducing the storage and transmission cost of images while ensuring image quality.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure, as shown in fig. 1, including:
in S11, compressing an image to be processed to obtain a first compression result of the image to be processed;
in S12, performing reconstruction processing on the first compression result to obtain a reconstructed image of the image to be processed;
The image processing method is implemented through an image processing network. The image processing network is obtained through a training method in which the loss function is automatically adjusted by a reward function; the loss function of the image processing network includes a plurality of sub-loss functions, and the reward function is used to adjust the weights of the plurality of sub-loss functions during training.
In a possible implementation manner, the image processing method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc., and the method may be implemented by a processor invoking computer readable instructions stored in a memory. Alternatively, the method may be performed by a server.
In one possible implementation manner, the image to be processed input to the image processing method may be, for example, a snap-shot image from a security camera, a remote sensing image, or a medical image; the source and specific type of the image to be processed are not limited in the present disclosure.
In one possible implementation, the image processing network may include a compression network and a reconstruction network, the compression network may be configured to perform compression processing on the image to be processed to obtain a first compression result for storage or transmission, and the reconstruction network may be configured to decompress the first compression result to obtain a reconstructed image. The smaller the storage space occupied by the first compression result is, the smaller the difference between the reconstructed image and the image to be processed is, and the better the effect of the method is.
In S11, compressing an image to be processed to obtain a first compression result of the image to be processed;
for example, an image to be processed may be input into the compression network of the image processing network, and compression processing is performed on it to obtain the first compression result of the image to be processed. The compression network of the image processing network may be a deep-learning-based compression network. Suppose, for example, that the image to be processed is 384 pixels×512 pixels, each pixel occupies 24 bits (Bit), and the image occupies 576 KB of storage space. After the image to be processed is input into the compression network, the resulting first compression result may include 48×64×192 data units, each data unit consuming on average 1 bit of compressed feature data, for a total of 48×64×192 bits of storage space. After compression, the number of bits required per pixel in the first compression result is (48×64×192)/(384×512) = 3 bits/pixel, and the compression ratio is 24:3 = 8:1.
Thus, a to-be-processed image which occupies 576KB can be converted into a first compression result which consumes 72KB of storage space after being processed by a compression network. Compared with the method that the image to be processed is directly stored or transmitted, the first compression result is stored or transmitted, so that the storage space can be saved, and the transmission bandwidth can be reduced.
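As a quick sanity check, the arithmetic of this example can be reproduced in a few lines (the image and latent sizes are the illustrative ones above, not values the network mandates):

```python
# Back-of-the-envelope check of the 384x512 RGB example above.
raw_bits = 384 * 512 * 24            # 24 bits per pixel before compression
raw_kb = raw_bits / 8 / 1024         # storage of the original image, in KB

compressed_bits = 48 * 64 * 192      # one bit per latent data unit on average
compressed_kb = compressed_bits / 8 / 1024

bpp = compressed_bits / (384 * 512)  # bits per pixel after compression
ratio = 24 / bpp                     # compression ratio

print(raw_kb, compressed_kb, bpp, ratio)  # 576.0 72.0 3.0 8.0
```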
The compression network based on deep learning can comprise a plurality of convolution, pooling, nonlinearity and other network layers, and the specific network structure of the compression network is not limited by the disclosure.
In S12, performing reconstruction processing on the first compression result to obtain a reconstructed image of the image to be processed;
for example, the first compression result may be input into a reconstruction network of the image processing network, and the reconstruction processing may be performed on the first compression result to obtain a reconstructed image of the image to be processed. The reconstruction network of the image processing network can also be a reconstruction network based on deep learning, a first compression result with the size of 72KB can be input into the reconstruction network, the first compression result is decompressed to obtain a reconstruction image of the image to be processed, the reconstruction image can be restored into image data occupying 576KB, and the image quality of the reconstruction image can be kept close to that of the image to be processed.
The image reconstruction network based on deep learning can comprise a plurality of convolution, pooling, nonlinearity and other network layers, and the specific network structure of the reconstruction network is not limited by the present disclosure.
It should be understood that image processing networks with different structures or network parameters have different compression and reconstruction effects on the image to be processed. By searching for a training method with better performance and using it to train the image processing network, the trained image processing network obtained has better performance.
In one possible implementation, the image processing network is trained through a preset training set, a preset verification set, a loss function and a reward function. The training set is used to train the image processing network, and the verification set is used to evaluate the performance of the trained image processing network: the verification set can be input into the reward function, and the performance of the image processing network is determined from the result output by the reward function.
For example, the image processing network may be trained using a reinforcement-learning-based online reconciliation training method for multiple loss functions, including, for example, a proximal policy optimization (Proximal Policy Optimization, PPO) method, in which the weights among the multiple sub-losses of the loss function are adaptively adjusted by the reward function. The image processing network is trained based on the training set and the loss function under online adjustment until the optimal loss function is found by searching, and the image processing network trained based on that loss function is used as the target image processing network.
That is, during training, the sampling spaces of the hyper-parameters of the loss function are adjusted through the reward function, so that the weights of the multiple sub-loss functions are adjusted, until the optimal loss function is found, i.e., the loss function such that the image processing network trained on it maximizes the function value of the reward function.
In this way, the method can adjust the weights of a plurality of sub-loss functions of the loss function through the reward function, realizes the automatic blending of the loss function by the reward function, replaces complex manual loss function design and super-parameter traversal, improves the training effect and efficiency, and reduces the storage and transmission cost of images under the condition of ensuring the image quality of a trained image processing network.
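The reconciliation loop described above can be sketched as follows. This is a toy illustration only: it replaces the PPO controller with plain random search over the loss-weight hyper-parameters, and the `train_and_eval` and `reward_fn` callables are hypothetical stand-ins for training the network and evaluating the reward function.

```python
import math
import random

def reconcile(train_and_eval, reward_fn, rounds=5, samples=4, seed=0):
    """Toy stand-in for reward-driven loss reconciliation: sample sub-loss
    weights, train/evaluate, score with the reward function, then shrink
    the sampling space around the best weights found.  (The patent uses a
    PPO controller; random search here only illustrates the control flow.)"""
    rng = random.Random(seed)
    center = {"mse": 0.0, "ms_ssim": 0.0}   # log10 of the lambda weights
    width, best = 1.0, None
    for _ in range(rounds):
        for _ in range(samples):
            weights = {k: 10 ** rng.uniform(c - width, c + width)
                       for k, c in center.items()}
            score = reward_fn(train_and_eval(weights))
            if best is None or score > best[0]:
                best = (score, weights)
        # re-center the hyper-parameter sampling space on the best weights
        center = {k: math.log10(v) for k, v in best[1].items()}
        width *= 0.5                         # narrow the sampling space
    return best

# Toy stand-ins: "training" just returns the weights, and the reward
# peaks when both weights are near 10.
best_score, best_weights = reconcile(
    lambda w: w,
    lambda m: -(m["mse"] - 10.0) ** 2 - (m["ms_ssim"] - 10.0) ** 2)
```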
In one possible implementation, the first compression result of the image to be processed is used for transmission and/or storage.
For example, for a security scene, a security camera captures a large number of images (such as face images) and transmits the large number of images to a server for data analysis, in this process, the larger the image data is, the longer the consumed transmission time is, and the larger the occupied storage space is required to store the large number of image data.
By adopting the method of the present disclosure, image compression can be performed on the massive snap-shot images respectively to obtain the first compression result of each snap-shot image; for example, the size of the first compression result is 1/S of the size of the snap-shot image, where S is a real number greater than 1. In this case, the security camera may transmit the first compression result to the server, and the server stores the received first compression result. When the server needs to analyze a snap-shot image, it can first decompress (reconstruct) the first compression result and then analyze and process the reconstructed image; compared with the original image before compression, the reconstructed image has little distortion and can meet the requirements of data analysis.
Therefore, in the security scene, transmitting and/or storing the first compression result can reduce the storage space of the picture as much as possible while ensuring image quality, or improve the quality of the stored picture as much as possible for the same storage space. This helps improve the quality of security products (for example, a face recognition system whose product server stores the first compression results of face snap-shot images) and reduce storage and maintenance costs.
For example, for medical image processing scenarios, medical devices may continually acquire a large number of medical images (e.g., nuclear magnetic images, ultrasound images, etc.) and transmit the large number of images to a server for image analysis. In order to prevent delay of illness, the server needs to acquire medical images shot by medical equipment as soon as possible and analyze and process the images.
In this case, the medical image shot by the medical device can be compressed by adopting the method disclosed by the disclosure, and the compressed first compression result is transmitted to the server, so that the time for transmitting the image is reduced. The server receives the first compression result, decompresses (reconstructs) the first compression result, and then analyzes and processes the reconstructed image, wherein the reconstructed image has little distortion compared with the original image before compression, and can meet the requirement of data analysis. And moreover, the server needs to store a large number of medical images of patients, if the original image is stored directly, a large amount of storage space is consumed, the first compression result can be stored, the consumption of the storage space of the server is reduced, and further the maintenance cost of the server is reduced.
For example, for an application scenario of the image-text live broadcast platform, the media may take a picture of a live broadcast scene by using an electronic device (such as a mobile phone), and transmit the taken picture to the image-text live broadcast platform.
Under the condition, the method can be used for compressing the field image shot by the electronic equipment (such as a mobile phone), transmitting the compressed first compression result to the image-text live broadcast platform, reducing the time of image transmission and ensuring the real-time performance of the field image. The server receives the first compression result, decompresses (reconstructs) the first compression result, and then analyzes and processes the reconstructed image, wherein the reconstructed image has little distortion compared with the original image before compression, and can meet the requirement of image live broadcast. And moreover, the image-text live broadcast platform needs to store a large number of field images of different media, if the original image is stored directly, a large amount of storage space is consumed, the first compression result can be stored, the consumption of the storage space of the image-text live broadcast platform is reduced, and further the maintenance cost of the platform is reduced.
It should be understood that the present disclosure does not limit the specific application scenario of the first compression result. The first compression result in the method of the present disclosure may be used for transmitting and/or storing an image whenever the storage space of the image is to be reduced as much as possible while ensuring image quality, or the quality of the stored image is to be improved as much as possible for the same storage space.
In this way, the first compression result is used for transmission and/or storage, and storage or transmission cost of the image can be reduced.
In one possible implementation, the reward function includes a first function and a plurality of second functions. The first function is used to indicate the deviation between the coding rate of the image processing network and a target rate; the plurality of second functions are used to indicate the deviations of the respective image quality indexes, including, for example, the PSNR index and the MS-SSIM index. The present disclosure does not limit the number of function terms the reward function includes.
FIG. 2 shows a schematic diagram of a reward function according to an embodiment of the disclosure. As shown in FIG. 2, the solid line and the dotted line are the rate-distortion (RD) curves, on the two image quality evaluation indexes PSNR and MS-SSIM respectively, of image processing networks trained using PSNR alone and MS-SSIM alone as the loss function. The "X" marks in FIG. 2 are the RD performance of the image processing network at some intermediate state during training.
As shown in FIG. 2, the reward function r(π, S_v) may be designed as:

r(π, S_v) = -|R - R_target| + (f1(psnr) - R) + (f2(ms-ssim) - R) (1)

In formula (1), π represents the image processing network, S_v represents the training set images, and R represents the code rate, i.e., the number of bits occupied by each pixel of the image (Bits Per Pixel, BPP); for example, an uncompressed RGB three-channel image consumes 24 bits per pixel.

The reward function r(π, S_v) shown in formula (1) comprises three function terms: a first function -|R - R_target| and two second functions, (f1(psnr) - R) and (f2(ms-ssim) - R).

The first function -|R - R_target| is used to evaluate the coding rate performance of the image processing network π, i.e., the deviation of the code rate of the image processing network π from the target code rate R_target.
The second function (f1(psnr) - R) is used to evaluate the performance of the image processing network π on the image quality index PSNR. The calculation of f1(psnr) is shown in the left graph of FIG. 2: at the same PSNR value, this term represents the difference between the code rate R of the network and the corresponding code rate f1(psnr) on the solid line.

The other second function (f2(ms-ssim) - R) is used to evaluate the performance of the image processing network π on the image quality index MS-SSIM. The calculation of f2(ms-ssim) is shown in the right graph of FIG. 2: at the same MS-SSIM value, this term represents the difference between the code rate R of the network and the corresponding code rate f2(ms-ssim) on the dotted line.
In order for the image processing network π to match different application scenarios, the function terms of the reward function r(π, S_v) may be weighted and reshaped with different functions preset according to the current application scenario. For example, r(π, S_v) may be expressed as:

r(π, S_v) = -ω1·|R - R_target| + ω3·(f2(ms-ssim) - R) (2)

In formula (2), ω1 = 25 and ω3 = 10 may be set. This parameter setting makes the current reward function better match the image processing network π required in application scenarios that favor the image quality index MS-SSIM: under the reconciliation of this reward function, a loss function is generated automatically, and an image processing network π with better MS-SSIM performance can be trained.
The reward function r(π, S_v) may also be expressed as:

r(π, S_v) = -ω1·|R - R_target| + ω2·(f1(psnr) - R) + ω3·(f2(ms-ssim) - R) (3)

In formula (3), the weights ωi can be used to control the priority between the image quality index PSNR and the image quality index MS-SSIM. To optimize the PSNR index while still paying attention to the MS-SSIM index, ω1 = 25, ω2 = 100, ω3 = 1 may be set; to optimize the MS-SSIM index while still paying attention to the PSNR index, ω1 = 25, ω2 = 1, ω3 = 100 may be set.
Therefore, a reward function r(π, S_v) meeting the requirements of the current application scenario can be preset according to that scenario; the present disclosure does not limit the specific parameter settings in r(π, S_v). A reward function designed as in formula (1) can comprehensively reconcile the code rate R, the PSNR and the MS-SSIM of the image processing network π.
For example, for security scenes, it is often necessary to save a large number of pictures. The method can be adopted to preset the rewarding function meeting the scene, so that the storage space of the picture is reduced as much as possible under the condition of ensuring the image quality, or the quality of the stored picture can be improved as much as possible when the same storage space is occupied. The method has the effects of improving the quality of products and reducing the cost of storage and maintenance.
It should be understood that the reward functions shown in formulas (1)-(3) take three function terms only as an example; the reward function may also have more function terms corresponding to other image quality evaluation indexes, and the number of function terms included in the reward function is not limited in the present disclosure.
In this way, the reward function can be used to indicate the deviation of the coding rate of the image processing network and the deviation of a plurality of image quality indexes (including, for example, PSNR indexes and MS-SSIM indexes), in this case, the trade-off between the coding rate and each image quality index can be effectively controlled by online reconciling the loss function by the reward function, so that a great deal of cost of manual trial is saved, and the storage and transmission cost of the image is reduced under the condition of ensuring the image quality.
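A minimal sketch of a reward of this form follows, assuming the two single-metric baseline RD curves f1 (rate needed by the PSNR-trained baseline at a given PSNR) and f2 (rate needed by the MS-SSIM-trained baseline at a given MS-SSIM) are available as callables. The linear baselines and the particular weight values are illustrative, not the patent's exact parameterization.

```python
def reward(R, psnr, ms_ssim, f1, f2, R_target, w1=25.0, w2=100.0, w3=1.0):
    """Reward for one evaluated network state: penalize deviation from
    the target rate, and reward using fewer bits than each single-metric
    baseline curve needs at the same quality level."""
    rate_term = -w1 * abs(R - R_target)
    psnr_term = w2 * (f1(psnr) - R)        # cheaper than the PSNR baseline?
    ssim_term = w3 * (f2(ms_ssim) - R)     # cheaper than the MS-SSIM baseline?
    return rate_term + psnr_term + ssim_term

# Toy baseline RD curves (bpp needed to reach a given quality level).
f1 = lambda p: 0.1 * (p - 28.0)
f2 = lambda s: 50.0 * (s - 0.97)

# On target rate and cheaper than both baselines -> positive reward.
r = reward(R=0.5, psnr=34.0, ms_ssim=0.985, f1=f1, f2=f2, R_target=0.5)
```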
The image processing method according to the embodiment of the present disclosure will be described below.
Fig. 3 shows a schematic diagram of an image processing network according to an embodiment of the present disclosure, which may include a compression network and a reconstruction network, as shown in fig. 3, wherein gray arrows are used to indicate data flows of the compression network, black arrows are used to indicate data flows of the reconstruction network, and white arrows are used to indicate data flows shared by the compression network and the reconstruction network.
In one possible implementation manner, the compressing the image to be processed to obtain a first compression result of the image to be processed includes: encoding the image to be processed to obtain a first image characteristic of the image to be processed; quantizing the first image features to obtain quantized data and quantized distribution information of the image to be processed; and carrying out entropy coding on the quantized data to obtain a compressed code stream of the image to be processed, wherein the first compression result comprises the compressed code stream and the quantized distribution information.
For example, the image to be processed may be input into a compression network of the image processing network for processing, resulting in a first compression result of the image to be processed.
The compression network may include an encoding sub-network, a quantization sub-network, and an entropy encoding sub-network, among others. The image to be processed can be processed through the coding sub-network to obtain a first image characteristic of the image to be processed; quantizing the first image features through a quantization sub-network to obtain quantized data and quantized distribution information of an image to be processed; and processing the quantized data through the entropy coding sub-network to obtain a compressed code stream of the image to be processed, wherein a first compression result comprises the compressed code stream and the quantized distribution information and is used for transmission and/or storage.
As shown in fig. 3, the compression network may include an encoding sub-network g_a, a quantization sub-network Q_1, and an entropy coding sub-network AE_1.
The image x to be processed may be input into the encoding sub-network g_a to obtain the first image feature y of the image x. The encoding sub-network g_a is used to extract the first image feature y of the image x to be processed, which can reduce the data volume of the image, facilitate the subsequent quantization and encoding processing, and effectively improve encoding efficiency.
Then, the first image feature y is input into the quantization sub-network Q_1, and quantization processing is performed on it to obtain the quantized data ŷ. The quantization sub-network Q_1 converts the continuous value field of the first image feature y into a discrete set, which can further reduce the data volume. The quantization sub-network Q_1 may quantize by an equidistant method, a fuzzy clustering method, etc.; the present disclosure does not limit the specific quantization method.
Since the quantized data ŷ obtained after quantization is discrete, the quantization operation is non-differentiable. So that the image processing network can still be trained with the gradient descent method, a preset uniform noise u ~ U(-1/2, 1/2) may be added to the first image feature y during training to obtain ỹ = y + u, which infinitely approaches ŷ in distribution. To simplify notation, ŷ below represents both ŷ and ỹ.
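This training-time quantization surrogate can be illustrated numerically: hard rounding is replaced by additive uniform noise, which stays within one unit of the rounded value and introduces roughly the same error statistics. A sketch under assumed Gaussian features, not the patent's network code:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 4.0, size=100_000)          # stand-in first image feature

y_hat = np.round(y)                             # inference: hard quantization
y_tilde = y + rng.uniform(-0.5, 0.5, y.shape)   # training: differentiable proxy

# The noisy proxy never strays more than 1 from the hard-quantized value,
# and both introduce approximately the same mean squared error (~1/12).
max_gap = np.abs(y_hat - y_tilde).max()
mse_round = np.mean((y - y_hat) ** 2)
mse_noise = np.mean((y - y_tilde) ** 2)
```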
Next, the quantized data ŷ is input into the entropy coding sub-network AE_1 for processing, converting the quantized data ŷ into a compressed code stream for transmission or storage. With the source probability distribution known, the entropy coding sub-network can allocate appropriate codewords by probability so that the average total code length is the shortest. Entropy encoding methods may include Shannon coding, Huffman coding and arithmetic coding (Arithmetic Coding); the present disclosure is not limited to a specific entropy encoding method.
Although the above procedure can already obtain a first compression result for storage and transmission, in order to improve the compression effect, side information can also be learned and added into entropy coding, so as to obtain a better first compression result.
The process by which the image processing network compresses the image x to be processed can be regarded as a process of learning the signal distribution of the image. The actual distribution of the image x to be processed is unknown; however, the distribution may have statistical dependence (probability coupling), and quantization distribution information based on side information (image edge feature information) may be introduced to guide the entropy coding sub-network.
As shown in the dashed-box portion of fig. 3, the compression network may further comprise a prior sub-network for guiding entropy coding. For example, the first image feature y may also be input into the encoding sub-network h_a, which extracts the side information feature of the first image feature y to obtain a side information feature z. The side information feature z is then input in sequence into the quantization sub-network Q_2, the entropy coding sub-network AE_2, the entropy decoding sub-network AD_2 and the decoding sub-network h_s; after quantization, entropy encoding, entropy decoding and decoding reconstruction of the side information feature z, the prior guidance information ψ of the side information is obtained. Meanwhile, the quantized data ŷ is input into the context model sub-network g_cm to obtain the prior guidance information φ based on context information. Finally, ψ and φ are input into the entropy parameter acquisition sub-network g_ep to obtain the quantization distribution information (μ, σ) used to guide entropy coding.
In this way, the compression network of the image processing network can learn side information and add it into entropy coding, achieving a better compression effect. Therefore, compressing the image to be processed in this way yields a first compression result with a better compression effect.
It should be appreciated that the compression network described above may be designed or built with units including convolution, pooling, nonlinearity, etc., and the specific structure of the compression network is not limited by this disclosure.
In one possible implementation manner, the reconstructing the first compression result to obtain a reconstructed image of the image to be processed includes: entropy decoding is carried out on the first compression result to obtain a second image characteristic; and decoding the second image features to obtain a reconstructed image of the image to be processed.
For example, the first compression result may be input to a reconstruction network of the image processing network for processing, resulting in a reconstructed image of the image to be processed.
Wherein the reconstruction network may include an entropy decoding sub-network and a decoding sub-network. The first compression result can be processed through the entropy decoding sub-network to obtain a second image characteristic; and processing the second image features through the decoding sub-network to obtain a reconstructed image of the image to be processed.
As shown in fig. 3, the reconstruction network may include an entropy decoding sub-network AD_1 and a decoding sub-network g_s.
After the first compression result is obtained, the entropy decoding sub-network AD_1 may be used to process the first compression result to obtain the second image feature ŷ, and the decoding sub-network g_s may then process the second image feature ŷ to obtain the reconstructed image x̂ of the image to be processed.
In the entropy decoding AD_1 process, the decoding method corresponds to the entropy coding sub-network AE_1 described above, and quantization distribution information based on side information (image edge feature information) may likewise be introduced to guide the entropy decoding sub-network AD_1.
In this way, the image processing network can learn side information and use it in entropy decoding, achieving a better image reconstruction effect. Therefore, performing reconstruction processing on the first compression result allows the decompressed reconstructed image to be undistorted or to have reduced distortion.
It should be appreciated that the above-described reconstruction network may include convolution, pooling, non-linear, etc. elements designed or constructed, and the present disclosure is not limited to the specific structure of the reconstruction network.
Before the image processing network is applied, the image processing network can be trained through a loss function, and the trained image processing network is obtained.
In one possible implementation manner, the plurality of sub-loss functions include a first sub-loss function, a second sub-loss function and a third sub-loss function, where the first sub-loss function is used to indicate an error of a coding rate, the second sub-loss function is used to indicate an error of a peak signal-to-noise ratio PSNR, and the third sub-loss function is used to indicate an error of a structural similarity SSIM, and the image quality index includes the peak signal-to-noise ratio PSNR and the structural similarity SSIM.
For example, the loss function L may be expressed as:
L=R+D=R+λ MSE ·MSE+λ MS-SSIM ·(1-MS-SSIM) (4)
in equation (4), the first sub-loss function R is used to indicate the error of the coding rate, the second sub-loss function MSE is used to indicate the error of the peak signal-to-noise ratio PSNR, and the third sub-loss function (1-MS-SSIM) is used to indicate the error of the structural similarity SSIM, for example including the error of the multi-scale structural similarity MS-SSIM. λ_MSE and λ_MS-SSIM represent the weight parameters of the second sub-loss function MSE and the third sub-loss function (1-MS-SSIM), respectively.
Since the values of λ_MSE and λ_MS-SSIM are both greater than zero, the computation can be simplified by reparameterizing them, for example on a logarithmic scale, with λ'_MSE and λ'_MS-SSIM as the hyper-parameters of the loss function, denoted λ = (λ'_MSE, λ'_MS-SSIM). By setting different hyper-parameters λ, the weight parameters of the second sub-loss function MSE and the third sub-loss function (1-MS-SSIM) can be adjusted, thereby determining different loss functions.
By means of the method, the super parameters of the loss function are adjusted, the weights of a plurality of sub-losses in the loss function can be adjusted, automatic blending of the loss function is facilitated, the coding code rate can be effectively balanced, and the influence of each image quality evaluation index on network training can be effectively balanced.
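A small numeric sketch of the multi-term loss in formula (4) follows. The MS-SSIM value is taken as a given scalar here (computing it requires a full multi-scale SSIM pass), and the lambda values are illustrative only, not the patent's searched weights:

```python
import math

def rd_loss(rate_bpp, mse, ms_ssim, lam_mse=0.01, lam_ms_ssim=8.0):
    """Formula (4): L = R + lam_mse * MSE + lam_ms_ssim * (1 - MS-SSIM)."""
    return rate_bpp + lam_mse * mse + lam_ms_ssim * (1.0 - ms_ssim)

def psnr_from_mse(mse, peak=255.0):
    """PSNR implied by a given MSE, for 8-bit images."""
    return 10.0 * math.log10(peak * peak / mse)

# 0.5 bpp, MSE of 30 (about 33.4 dB PSNR), MS-SSIM of 0.98:
loss = rd_loss(rate_bpp=0.5, mse=30.0, ms_ssim=0.98)
# 0.5 + 0.01*30 + 8*(1 - 0.98) = 0.96
```

Changing the lambda weights shifts how much each quality term contributes relative to the rate term, which is exactly the knob the reward-driven search turns.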
It should be appreciated that the loss function may include a plurality of sub-loss functions, and the present disclosure is not limited to a particular number of sub-loss functions. The loss function may be preset for image processing networks of different structures; for example, for the image processing network shown in fig. 3, the loss function may be expressed as:

L = E_{x~p_x}[-log2 p_ŷ(ŷ)] + E_{x~p_x}[-log2 p_ẑ(ẑ)] + λ·D(x, x̂) (5)

In formula (5), E_{x~p_x}[-log2 p_ŷ(ŷ)] + E_{x~p_x}[-log2 p_ẑ(ẑ)] represents the compression code rate of the image processing network, which can be approximated in the form of an expected entropy; the image x to be processed obeys the probability distribution p_x, ŷ represents the quantized data during entropy encoding or entropy decoding, and ẑ represents the quantized data of the side information.
D represents an indicator of the image quality, i.e. the distortion error between the reconstructed image x̂ and the image x to be processed; the distortion degree D can be one of, or a combination of, image quality evaluation indexes such as the mean square error MSE, the multi-level structural similarity MS-SSIM, and the peak signal-to-noise ratio PSNR. The hyper-parameter λ includes a plurality of parameters corresponding to the number of image quality indexes present in D, and is used to adjust the weights of the image quality indexes.
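As a brief aside on these quality indexes, the PSNR is fully determined by the mean square error; a minimal sketch, assuming pixel values normalized to [0, max_val], is:

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB: PSNR = 10 * log10(max_val**2 / MSE).
    A smaller MSE distortion yields a larger (better) PSNR."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# An MSE of 0.01 on [0, 1]-normalized images corresponds to 20 dB:
value = psnr(0.01)
```

This is why the MSE sub-loss function can serve as a proxy for the PSNR index: minimizing one maximizes the other.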
Wherein the prediction distribution p_ŷ can be expressed as:

p_ŷ(ŷ) = ∏_i ( N(μ_i, σ) ∗ U(−1/2, 1/2) )(ŷ_i)        (6)
In equation (6), the quantized data ŷ comprises a plurality of quantization elements ŷ_i, where i is used to indicate the numbering of the elements in the quantized data ŷ; N(μ_i, σ) can represent a normal distribution with mean μ_i and standard deviation σ, and U(−1/2, 1/2) represents a uniform distribution.
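The probability that such a model assigns to a single quantization element — a normal distribution convolved with the uniform distribution over the quantization bin — can be sketched as follows; this is a simplified one-element illustration under the above assumptions, not the network implementation:

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of the normal distribution N(mu, sigma)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def quantized_likelihood(y_hat, mu, sigma):
    """Probability of the quantized element y_hat under N(mu, sigma) convolved
    with U(-1/2, 1/2): the Gaussian probability mass that falls inside the
    quantization bin [y_hat - 1/2, y_hat + 1/2]."""
    return gaussian_cdf(y_hat + 0.5, mu, sigma) - gaussian_cdf(y_hat - 0.5, mu, sigma)

# Entropy-coding cost of one element, in bits, is -log2 of its likelihood:
p = quantized_likelihood(0.0, mu=0.0, sigma=1.0)
bits = -math.log2(p)
```

The per-element bit costs summed over ŷ approximate the rate term R of the loss function.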
Alternatively, the prediction distribution p_ŷ can also be determined based on an autoregressive method, and can be expressed as:
In equation (7), ŷ_&lt;i represents the quantized elements near the quantization element ŷ_i, e.g. the quantized elements within a preset distance; the specific value of the preset distance is not limited in the present disclosure. The environment model sub-network g_cm and the entropy parameter acquisition sub-network g_ep may be implemented based on neural networks.
Alternatively, the prediction distribution p_ŷ can also be determined according to a Gaussian mixture model method, and can be expressed as:

p_ŷ(ŷ) = ∏_i ( Σ_{k=1}^{K} π^(k)·N(μ^(k), σ^(k)) ∗ U(−1/2, 1/2) )(ŷ_i)        (8)
Wherein, K sets of entropy parameters (π^(k), μ^(k), σ^(k)), k = 1~K, are obtained according to the entropy parameter acquisition sub-network g_ep.
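A minimal sketch of the corresponding mixture likelihood for one quantization element, with K = 2 hypothetical sets of entropy parameters (π, μ, σ) standing in for the output of the entropy parameter acquisition sub-network:

```python
import math

def bin_prob(y_hat, mu, sigma):
    """Gaussian probability mass in the quantization bin [y_hat - 1/2, y_hat + 1/2]."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(y_hat + 0.5) - cdf(y_hat - 0.5)

def mixture_likelihood(y_hat, params):
    """Likelihood of a quantized element under a K-component Gaussian mixture:
    p(y_hat) = sum_k pi_k * bin_prob(y_hat; mu_k, sigma_k), where `params` is
    a list of K tuples (pi_k, mu_k, sigma_k) and the weights pi_k sum to 1."""
    return sum(pi * bin_prob(y_hat, mu, sigma) for pi, mu, sigma in params)

# K = 2 hypothetical components:
p_mix = mixture_likelihood(0.0, [(0.7, 0.0, 1.0), (0.3, 2.0, 0.5)])
```

Compared with a single Gaussian, the mixture can fit multi-modal element distributions, at the cost of predicting K sets of parameters per element.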
After determining the image processing network shown in fig. 3, the reward function shown in formula (1), and the loss function shown in formula (4), the image processing network may be trained by a multi-loss-function harmonization training method.
In one possible implementation manner, training the image processing network according to the training set and the verification set to obtain a trained image processing network includes S41-S43:
s41: for the t-th training, respectively inputting the first sample images in the training set into B image processing networks for (t-1) th training to be processed, so as to obtain B first processing results of the first sample images, wherein t and B are integers larger than 1;
s42: respectively training the B image processing networks trained in the (t-1) th round according to the first sample image, the B first processing results and the B loss functions trained in the t-th round to obtain B image processing networks trained in the t-th round;
s43: and under the condition that the B image processing networks trained in the t th round meet training conditions, determining the trained image processing networks according to the B image processing networks trained in the t th round.
For example, assume the training set is S_t, the verification set is S_v, the image to be processed is x, and θ represents the parameters of the image processing network.
Before S41-S43, B image processing networks can be randomly initialized, wherein B is an integer greater than 1; for example, B may be set to 8, and the specific value of B is not limited in the present disclosure.
In S41, for the t-th round of training (not the first round), the first sample image data in the training set S_t may be input one by one, in parallel, into the B image processing networks trained in the (t-1)-th round, or a plurality of first sample image data may be input in parallel into the B image processing networks trained in the (t-1)-th round in batch data mode, so as to obtain B first processing results of the first sample image; the specific manner of inputting the first sample image into the image processing network is not limited in the present disclosure.
Wherein, each first sample image may correspond to B first processing results, namely the first processing result of each of the B image processing networks; each first processing result may include a first compression result of the first sample image and a reconstructed image.
It should be understood that, since t is an integer greater than 1, for the first round of training, the first sample images in the training set S_t may be respectively input into the initialized B image processing networks for processing, so as to obtain B first processing results of the first sample image.
In S42, the B image processing networks trained in the (t-1)-th round may be respectively trained according to the first sample image, the B first processing results, and the B loss functions of the t-th round of training, so as to obtain the B image processing networks of the t-th round of training.
For example, based on the first sample image and the first processing result of the first image processing network, the first image processing network of the previous round of training may be trained through the first loss function of the t-th round, so as to obtain the first image processing network of the present (t-th) round of training. Similarly, the other image processing networks of the t-th round of training can be obtained, which will not be described in detail here.
The B image processing networks trained in the (t-1)-th round can be trained through the B loss functions of the t-th round asynchronously and in parallel, thereby improving training efficiency. Wherein, the hyper-parameters of the B loss functions of the present round can be updated according to the training condition of the previous round, so as to obtain the B loss functions of the present round.
In S43, in the case where the B image processing networks of the t-th round of training satisfy the training conditions, the image processing network with the optimal performance among the B image processing networks of the t-th round of training is determined as the trained image processing network.
Satisfying the training conditions may include that the number of iterations of the t-th round reaches a preset number, or that the performance of the obtained image processing network meets the expected performance, etc.; the specific condition for stopping the iteration is not limited in the present disclosure.
By the method, the image processing network can be trained according to the training set and the verification set to obtain a trained image processing network, which facilitates efficient training of the image processing network, so that the trained image processing network has better performance.
In one possible implementation manner, the training the image processing network according to the training set and the verification set to obtain a trained image processing network, and further includes S44 to S46:
s44: for the t-th training, respectively inputting the second sample images in the verification set into B image processing networks of the t-th training to be processed, so as to obtain B second processing results of the second sample images;
s45: b self-adaptive strategies of (t+1) th training corresponding to the B image processing networks of the t th training are respectively determined according to the second sample image, the B second processing results and the rewarding function;
S46: and respectively determining weights of a plurality of sub-loss functions of the loss function according to the B self-adaptive strategies of the (t+1) th training round to obtain B loss functions of the (t+1) th training round.
For example, in S44, for the t-th round of training (not the first round), the second sample images in the verification set S_v are respectively input into the B image processing networks of the present round of training for processing, so as to obtain B second processing results of the second sample image, namely the second processing result of each of the B image processing networks; each second processing result may include a first compression result of the second sample image and a reconstructed image.
Wherein, the second sample images can be input one by one into the B image processing networks of the present round of training for processing, or the second sample images can be respectively input into the B image processing networks of the present round of training in batch data mode; the specific manner of inputting the second sample image into the image processing network is not limited in the present disclosure.
In S45, according to the second sample image in the verification set S_v, the B second processing results, and the reward function, the B verification results o_t of the (t+1)-th round of training respectively corresponding to the B image processing networks of the present round of training can be determined. Each verification result can include the MS-SSIM index and the PSNR index for evaluating the image quality distortion degree of the second processing result obtained in the t-th round of training, the BPP of the quantized data ŷ, the quantized data ẑ of the side information, the gradient loss, the total variation, and the like.
Then, according to the B verification results o_t, the corresponding adaptive strategies π_i(·|o_t), i = 1~B, are determined; wherein the adaptive strategy can be the result of a multi-layer perceptron (Multi-Layer Perceptron, MLP) on the verification result o_t, i.e. a normal distribution with mean μ_λ and variance σ_λ. The specific acquisition method can be expressed as:

λ_t ~ N(μ_λ, σ_λ), (μ_λ, σ_λ) = MLP(o_t)        (9)
In equation (9), t represents the current round of training; the hyper-parameter λ_t of the current round's loss function obeys the normal distribution N(μ_λ, σ_λ), which is obtained by performing multi-layer perceptron MLP analysis on the verification result o_t.
In S46, according to the B adaptive strategies π_i, i = 1~B, of the (t+1)-th round of training, the B obtained normal distributions as shown in equation (9) are sampled to obtain B hyper-parameters respectively corresponding to the B loss functions, and the weights of the plurality of sub-loss functions of the B loss functions are further determined, so as to obtain the B loss functions of the (t+1)-th round of training.
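Steps S45-S46 can be sketched as follows; the multi-layer perceptron is replaced by a hypothetical fixed function purely for illustration, and the names are not part of the disclosed method:

```python
import random

def adaptive_policy(o_t, mlp):
    """Adaptive strategy pi(.|o_t): map the round-t verification result o_t
    (rate, PSNR/MS-SSIM indexes, etc.) to (mu_lambda, sigma_lambda), the
    parameters of a normal distribution over the loss hyper-parameter."""
    return mlp(o_t)

def sample_lambda(mu, sigma, rng):
    """Sample the next round's hyper-parameter: lambda ~ N(mu, sigma)."""
    return rng.gauss(mu, sigma)

# Hypothetical stand-in for the trained MLP (illustration only):
mlp = lambda o: (sum(o) / len(o), 0.1)
mu, sigma = adaptive_policy([0.6, 0.9], mlp)   # mu = 0.75, sigma = 0.1
lam = sample_lambda(mu, sigma, random.Random(0))
```

Each of the B networks keeps its own policy and draws its own λ, so the B loss functions of the next round can all differ.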
In this way, the weights of the plurality of sub-loss functions of the loss function (such as the weights of the sub-loss functions corresponding to PSNR and MS-SSIM) are adaptively adjusted according to the reward function during training, so that the reward function can automatically harmonize the loss function, replacing complex manual loss function design and hyper-parameter traversal, and improving training effect and efficiency.
In one possible implementation, S42 includes:
s421: under the condition that the training round number t is a multiple of a preset track N, training the B image processing networks trained in the (t-1)-th round according to the first sample image, the B first processing results and the B loss functions of the t-th round of training, respectively, so as to obtain B intermediate-state image processing networks of the t-th round of training, wherein N is an integer greater than 1;
s422: respectively inputting the second sample images in the verification set into the image processing networks of the B intermediate states of the t-th training to be processed, so as to obtain B third processing results of the second sample images;
s423: respectively determining the rewarding results of the t-th training of the B image processing networks of the t-th training according to the second sample image, the B third processing results and the rewarding function;
S424: determining a target image processing network of the t-th training from the B intermediate state image processing networks of the t-th training according to the rewarding result of the t-th training;
s425: and determining B image processing networks of the t-th training according to the target image processing network of the t-th training.
For example, in S421, in the case where the training round number t is a multiple of the preset track N (N &gt; 1), the B image processing networks trained in the (t-1)-th round are respectively trained according to the first sample images in the training set S_t, the B first processing results of the first sample image obtained in S41, and the B loss functions of the t-th round of training, so as to obtain the B intermediate-state image processing networks of the present round of training.
In the case where the training round number t is not a multiple of the preset track N, the data processing may be performed according to S41 to S43.
Therefore, through the preset track N, the training process can be adjusted once every N rounds of iteration; the smaller the value of N, the smaller the variation of the parameter distribution of the target image processing network obtained by training between rounds, which is beneficial to the learning process of the whole training.
In S422, the second sample images in the verification set S_v are respectively input into the B intermediate-state image processing networks of the present round of training, so as to obtain B third processing results of the second sample image, namely the third processing result of each of the B intermediate-state image processing networks; each third processing result may include a first compression result of the second sample image and a reconstructed image.
Wherein, the second sample images can be input one by one into the B intermediate-state image processing networks of the present round of training for processing, or the second sample images can be respectively input into the B intermediate-state image processing networks of the present round of training in batch data mode; the specific manner of inputting the second sample image into the image processing network is not limited in the present disclosure.
In S423, the verification set S may be v Second sample image of (a)B third processing results are respectively input into corresponding rewarding functionsImage processing network for determining B intermediate states of t-th trainingReward outcome of training round t +.>
Wherein, in order to facilitate gradient descent during training, the parameter ω_p of an optimized adaptive strategy can also be obtained by minimizing a surrogate loss function (Surrogate Loss Function, SLF), and the adaptive strategy is adjusted through the parameter ω_p, so that the deviation between the strategy of the current round and the strategy of the previous iteration round can be relatively small, which is beneficial to the iterative training of subsequent rounds. Wherein, according to the parameter ω_p of the adaptive strategy, the normal distribution obeyed by the hyper-parameter λ of the loss function can be adjusted.
The surrogate loss function may be represented as L_CLIP, namely:

L_CLIP = E[ min( f_t(ω_p)·Â_t, CLIP(f_t(ω_p), 1−ε, 1+ε)·Â_t ) ]        (10)
In equation (10), the sampling coefficient f_t(ω_p) is used to indicate the rate of change between the new and old adaptive strategies; the clipping function CLIP(f_t(ω_p), 1−ε, 1+ε) is used to limit the contribution of the evaluated advantage Â_t to a required range, wherein the clipping rate ε can take a value of 0.2, and the specific value range of the clipping rate ε is not limited in the present disclosure.
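The per-sample form of the clipped objective in equation (10) can be sketched as follows (a simplified illustration; the expectation over samples is omitted, and ε = 0.2 as above):

```python
def clip(x, lo, hi):
    """Clamp x into the interval [lo, hi]."""
    return max(lo, min(hi, x))

def clipped_surrogate(ratio, advantage, eps=0.2):
    """Per-sample clipped surrogate objective: min(f * A, CLIP(f, 1-eps, 1+eps) * A),
    where `ratio` f is the probability ratio between the new and old adaptive
    strategies and `advantage` A evaluates how much better the sampled lambda
    performed. Clipping keeps each update close to the previous strategy."""
    unclipped = ratio * advantage
    clipped = clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return min(unclipped, clipped)

# A large ratio with positive advantage is clipped (ratio 1.5 -> 1.2):
g1 = clipped_surrogate(1.5, 1.0)
# A small ratio with negative advantage takes the more pessimistic branch:
g2 = clipped_surrogate(0.5, -1.0)
```

Maximizing this objective (or minimizing its negation by gradient descent) bounds how far the new strategy's distribution over λ can move per iteration.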
In S424, according to the reward results of the t-th round of training, the image processing network corresponding to the largest reward result among the B intermediate-state image processing networks of the t-th round of training is determined as the target image processing network of the t-th round of training.
In S425, according to the target image processing network of the t-th round of training, each of the B image processing networks of the t-th round of training is replaced with the target image processing network, whereby the B image processing networks of the t-th round of training can be determined.
In this way, the training process is adjusted once every N rounds of iteration, and the target image processing network of the present round is determined as each image processing network, so that the situation in which learning is hindered by an excessive difference between new and old strategies during training can be reduced, and the training efficiency and accuracy can be improved.
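The periodic adjustment of S421-S425 can be sketched as the following synchronization step, where the networks and reward results are hypothetical placeholders:

```python
def sync_to_best(round_t, N, networks, rewards):
    """Every N rounds, replace all B candidate networks with the one whose
    validation reward is largest (the target image processing network);
    otherwise the B networks keep training independently."""
    if round_t % N == 0:
        best = networks[rewards.index(max(rewards))]
        return [best] * len(networks)
    return networks

nets = ["net_1", "net_2", "net_3"]
synced = sync_to_best(round_t=8, N=4, networks=nets, rewards=[0.1, 0.7, 0.4])
kept = sync_to_best(round_t=9, N=4, networks=nets, rewards=[0.1, 0.7, 0.4])
```

At round 8 (a multiple of N = 4) all candidates are reset to the best performer; at round 9 the population continues unchanged.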
In one possible implementation, the training conditions include that the number of training rounds reaches a total training round number T, where T is an integer and 1 &lt; t ≤ T, and S43 includes:
s431: in the case of t = T, respectively inputting the second sample images in the verification set into the B image processing networks of the T-th round of training to obtain B fourth processing results of the second sample image;
s432: respectively determining the reward results of the T-th round of training of the B image processing networks of the T-th round of training according to the second sample image, the B fourth processing results and the reward function;
s433: determining a target image processing network of the T-th training from the B image processing networks of the T-th training according to the rewarding result of the T-th training;
s434: and determining the target image processing network trained by the T th round as a trained image processing network.
For example, in S431, the total training round number T may be preset; in the case where the training round number t is the T-th round, the second sample images in the verification set S_v are respectively input into the B image processing networks of the T-th round of training for processing, so as to obtain B fourth processing results of the second sample image, namely the fourth processing result of each of the B image processing networks; each fourth processing result may include a first compression result of the second sample image and a reconstructed image.
Wherein, the second sample images can be input one by one into the B image processing networks of the T-th round of training, or the second sample images can be respectively input into the B image processing networks of the T-th round of training in batch data mode; the specific manner of inputting the second sample image into the image processing network is not limited in the present disclosure.
In S432, the second sample image and the B fourth processing results may be input into the corresponding reward functions, so as to determine the reward results of the T-th round of training of the B image processing networks of the T-th round of training.
In S433, according to the reward results of the T-th round of training, the image processing network corresponding to the largest reward result among the B image processing networks of the T-th round of training is determined as the target image processing network of the T-th round of training.
In S434, the target image processing network of the T-th round of training is determined as the trained image processing network.
It should be appreciated that, before the T-th round of training, if the performance of the obtained image processing network meets the expected performance, the training may be ended, and that image processing network may be taken as the trained image processing network.
By the method, the total training round number T can be preset, so that the trained image processing network can be obtained within a limited number of rounds, improving the training efficiency of the image processing network.
In summary, the weights of the plurality of sub-loss functions in the loss function can be dynamically adjusted as training proceeds. Specifically, the hyper-parameter λ of the loss function can be sampled over multiple rounds according to the adaptive strategy π(·|o) of the current round, so as to obtain the optimal loss function such that the reward of the image processing network on the verification set S_v is maximized, wherein the parameters θ of the image processing network can be obtained by minimizing the current loss function. Through the above double-layer optimization, the optimal image processing network is obtained, namely:
In formula (11), L = R + λ_MSE·MSE + λ_MS-SSIM·(1 − MS-SSIM) represents the loss function, whose hyper-parameters λ include the weight parameters λ_MSE and λ_MS-SSIM of the respective sub-loss functions.
The method can effectively control the trade-off among the three indexes of code rate, PSNR and MS-SSIM, save a great deal of cost of manual trial, promote harmonized training on the two indexes of PSNR and MS-SSIM simultaneously, and solve the problem of differing image quality caused by independently optimizing PSNR or MS-SSIM at a low code rate.
Fig. 4 illustrates a schematic diagram of an image processing effect of an image processing method according to an embodiment of the present disclosure. As shown in fig. 4, the left image (a) is an original image to be processed; the middle picture (b) is an effect schematic diagram using the method disclosed by the disclosure, and the bit number occupied by each pixel in the image is 0.128BPP; the right image (c) is an effect diagram using the JPEG method, and the number of bits occupied by each pixel in the image is 0.164BPP.
Comparing with the original image to be processed (a) and the JPEG method (c), it can be seen that the image processing network trained by the method of the present disclosure performs image compression on the image to be processed such that, at a lower BPP, picture details (as shown in the box in fig. 4) can be better preserved, the image quality of the reconstructed image is more consistent with the original image to be processed, and the image distortion introduced by the compression processing is reduced.
Therefore, according to the image processing method of the embodiments of the present disclosure, the image processing network can be obtained through training with a preset training set, verification set, loss function, and reward function, and the weights of the plurality of sub-loss functions of the loss function can be adjusted through the reward function, thereby realizing automatic harmonization of the loss function by the reward function, replacing complex manual loss function design and hyper-parameter traversal, and improving training effect and efficiency. Moreover, the first function and the plurality of second functions included in the reward function are respectively used to indicate the deviation of the coding rate of the image processing network and the deviations of a plurality of image quality indexes, so that the online harmonization of the loss function by the reward function can effectively control the balance between the coding rate and each image quality index, saving the cost of a large number of manual attempts and reducing the storage and transmission cost of images while ensuring image quality.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic, which, for brevity, are not described in detail in the present disclosure. It will be appreciated by those skilled in the art that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the disclosure further provides an image processing apparatus, an electronic device, a computer readable storage medium, and a program, where the foregoing may be used to implement any one of the image processing methods provided in the disclosure, and corresponding technical schemes and descriptions and corresponding descriptions referring to method parts are not repeated.
Fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure, as shown in fig. 5, the apparatus including:
the compression module 51 is configured to perform compression processing on an image to be processed, so as to obtain a first compression result of the image to be processed;
a reconstruction module 52, configured to perform reconstruction processing on the first compression result, so as to obtain a reconstructed image of the image to be processed;
the image processing device is realized through an image processing network, the image processing network is obtained through a training method for automatically adjusting loss functions through rewarding functions, the loss functions of the image processing network comprise a plurality of sub-loss functions, and the rewarding functions are used for adjusting weights of the plurality of sub-loss functions in the training process.
It should be appreciated that the compression module 51 and the reconstruction module 52 may be applied to any processor, which is not limited by the present disclosure.
In one possible implementation, the reward function includes a first function and a plurality of second functions, the first function being used to indicate a deviation of an encoding rate of the image processing network; the second function is used to indicate a deviation of an image quality indicator of the image processing network.
In one possible implementation, the compression module 51 is configured to: encoding the image to be processed to obtain a first image characteristic of the image to be processed; quantizing the first image features to obtain quantized data and quantized distribution information of the image to be processed; and carrying out entropy coding on the quantized data to obtain a compressed code stream of the image to be processed, wherein the first compression result comprises the compressed code stream and the quantized distribution information.
In one possible implementation, the reconstruction module 52 is configured to: entropy decoding is carried out on the first compression result to obtain a second image characteristic; and decoding the second image features to obtain a reconstructed image of the image to be processed.
In one possible implementation, the apparatus further includes a training module: the image processing network training device is used for training the image processing network according to the training set and the verification set to obtain a trained image processing network; wherein, training module is used for: for the t-th training, respectively inputting the first sample images in the training set into B image processing networks for (t-1) th training to be processed, so as to obtain B first processing results of the first sample images, wherein t and B are integers larger than 1; respectively training the B image processing networks trained in the (t-1) th round according to the first sample image, the B first processing results and the B loss functions trained in the t-th round to obtain B image processing networks trained in the t-th round; and under the condition that the B image processing networks trained in the t th round meet training conditions, determining the trained image processing networks according to the B image processing networks trained in the t th round.
In one possible implementation, the training module is further configured to: for the t-th training, respectively inputting the second sample images in the verification set into B image processing networks of the t-th training to be processed, so as to obtain B second processing results of the second sample images; b self-adaptive strategies of (t+1) th training corresponding to the B image processing networks of the t th training are respectively determined according to the second sample image, the B second processing results and the rewarding function; and respectively determining weights of a plurality of sub-loss functions of the loss function according to the B self-adaptive strategies of the (t+1) th training round to obtain B loss functions of the (t+1) th training round.
In one possible implementation manner, the training the B image processing networks of the (t-1) th training according to the first sample image, the B first processing results, and the B loss functions of the t-th training respectively, to obtain the B image processing networks of the t-th training includes: under the condition that the training round number t is a multiple of a preset track N, training the B image processing networks of the (t-1) th round training according to the first sample image, the B first processing results and the B loss functions of the t-th round training to obtain B intermediate state image processing networks of the t-th round training, wherein N is an integer larger than 1; respectively inputting the second sample images in the verification set into the image processing networks of the B intermediate states of the t-th training to be processed, so as to obtain B third processing results of the second sample images; respectively determining the rewarding results of the t-th round training of the image processing network in the B intermediate states of the t-th round training according to the second sample image, the B third processing results and the rewarding function; determining a target image processing network of the t-th training from the B intermediate state image processing networks of the t-th training according to the rewarding result of the t-th training; and determining B image processing networks of the t-th training according to the target image processing network of the t-th training.
In one possible implementation manner, the training condition includes that the number of training rounds reaches a total number of training rounds T, T is an integer and 1<t is less than or equal to T, and when the B image processing networks trained in the T th round meet the training condition, determining the trained image processing network according to the B image processing networks trained in the T th round includes: under the condition of t=t, respectively inputting the second sample image in the verification set into B image processing networks for T-th training to obtain B fourth processing results of the second sample image; respectively determining the rewarding results of the T-th training of the B-th image processing network of the T-th training according to the second sample image, the B fourth processing results and the rewarding function; determining a target image processing network of the T-th training from the B image processing networks of the T-th training according to the rewarding result of the T-th training; and determining the target image processing network trained by the T th round as a trained image processing network.
In one possible implementation manner, the plurality of sub-loss functions include a first sub-loss function, a second sub-loss function and a third sub-loss function, where the first sub-loss function is used to indicate an error of a coding rate, the second sub-loss function is used to indicate an error of a peak signal-to-noise ratio PSNR, and the third sub-loss function is used to indicate an error of a structural similarity SSIM, and the image quality index includes the peak signal-to-noise ratio PSNR and the structural similarity SSIM.
In one possible implementation, the first compression result of the image to be processed is used for transmission and/or storage.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 6 shows a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, second-generation (2G) or third-generation (3G) mobile communication technology, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 7, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system developed by Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, such that the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. An image processing method, the method comprising:
compressing an image to be processed to obtain a first compression result of the image to be processed;
carrying out reconstruction processing on the first compression result to obtain a reconstructed image of the image to be processed;
The image processing method is realized through an image processing network, the image processing network is obtained through a training method for automatically adjusting a loss function through a reward function, the loss function of the image processing network comprises a plurality of sub-loss functions, and the reward function is used for adjusting weights of the plurality of sub-loss functions in the training process;
the reward function comprises a first function and a plurality of second functions, wherein the first function is used to indicate a deviation of the coding rate of the image processing network; each second function is used to indicate a deviation of an image quality indicator of the image processing network.
2. The method according to claim 1, wherein the compressing the image to be processed to obtain a first compression result of the image to be processed includes:
encoding the image to be processed to obtain a first image characteristic of the image to be processed;
quantizing the first image features to obtain quantized data and quantized distribution information of the image to be processed;
and carrying out entropy coding on the quantized data to obtain a compressed code stream of the image to be processed, wherein the first compression result comprises the compressed code stream and the quantized distribution information.
3. The method according to claim 1 or 2, wherein reconstructing the first compression result to obtain a reconstructed image of the image to be processed comprises:
entropy decoding is carried out on the first compression result to obtain a second image characteristic;
and decoding the second image features to obtain a reconstructed image of the image to be processed.
4. A method according to any one of claims 1-3, characterized in that the method further comprises:
training the image processing network according to the training set and the verification set to obtain a trained image processing network;
the training of the image processing network according to the training set and the verification set to obtain a trained image processing network comprises the following steps:
for the t-th training round, inputting the first sample images in the training set respectively into the B image processing networks of the (t-1)-th training round for processing, to obtain B first processing results of the first sample images, wherein t and B are integers greater than 1;
training the B image processing networks of the (t-1)-th training round respectively according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, to obtain B image processing networks of the t-th training round;
and under the condition that the B image processing networks of the t-th training round meet a training condition, determining the trained image processing network according to the B image processing networks of the t-th training round.
5. The method of claim 4, wherein training the image processing network according to the training set and the validation set results in a trained image processing network, further comprising:
for the t-th training round, inputting the second sample images in the verification set respectively into the B image processing networks of the t-th training round for processing, to obtain B second processing results of the second sample images;
determining, according to the second sample image, the B second processing results, and the reward function, B self-adaptive strategies of the (t+1)-th training round corresponding respectively to the B image processing networks of the t-th training round;
and determining weights of the plurality of sub-loss functions of the loss function according to the B self-adaptive strategies of the (t+1)-th training round respectively, to obtain B loss functions of the (t+1)-th training round.
6. The method of claim 4, wherein training the B image processing networks of the (t-1) th training round according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, respectively, to obtain the B image processing networks of the t-th training round comprises:
Under the condition that the number t of training rounds is a multiple of a preset track N, training the B image processing networks of the (t-1)-th training round respectively according to the first sample image, the B first processing results, and the B loss functions of the t-th training round, to obtain B intermediate-state image processing networks of the t-th training round, wherein N is an integer greater than 1;
inputting the second sample images in the verification set respectively into the B intermediate-state image processing networks of the t-th training round for processing, to obtain B third processing results of the second sample images;
determining, according to the second sample image, the B third processing results, and the reward function, the reward results of the t-th training round for the B intermediate-state image processing networks of the t-th training round respectively;
determining a target image processing network of the t-th training round from the B intermediate-state image processing networks of the t-th training round according to the reward results of the t-th training round;
and determining the B image processing networks of the t-th training round according to the target image processing network of the t-th training round.
7. The method according to any one of claims 4 to 6, wherein the training condition includes that the number of training rounds reaches a total number T of training rounds, T being an integer and 1 < t ≤ T,
Under the condition that the B image processing networks of the t-th training round meet the training condition, determining the trained image processing network according to the B image processing networks of the t-th training round includes:
in the case of t = T, inputting the second sample image in the verification set respectively into the B image processing networks of the T-th training round, to obtain B fourth processing results of the second sample image;
determining, according to the second sample image, the B fourth processing results, and the reward function, the reward results of the T-th training round for the B image processing networks of the T-th training round respectively;
determining a target image processing network of the T-th training round from the B image processing networks of the T-th training round according to the reward results of the T-th training round;
and determining the target image processing network of the T-th training round as the trained image processing network.
8. The method of claim 1, wherein the plurality of sub-loss functions includes a first sub-loss function for indicating an error in the coding rate, a second sub-loss function for indicating an error in the peak signal-to-noise ratio PSNR, and a third sub-loss function for indicating an error in the structural similarity SSIM,
The image quality index comprises peak signal-to-noise ratio PSNR and structural similarity SSIM.
9. The method according to any one of claims 1 to 8, characterized in that the first compression result of the image to be processed is used for transmission and/or storage.
10. An image processing apparatus, comprising:
the compression module is used for carrying out compression processing on the image to be processed to obtain a first compression result of the image to be processed;
a reconstruction module, configured to perform reconstruction processing on the first compression result to obtain a reconstructed image of the image to be processed,
the image processing device is realized through an image processing network, the image processing network is obtained through a training method for automatically adjusting a loss function through a reward function, the loss function of the image processing network comprises a plurality of sub-loss functions, and the reward function is used for adjusting weights of the plurality of sub-loss functions in the training process;
the reward function comprises a first function and a plurality of second functions, wherein the first function is used to indicate a deviation of the coding rate of the image processing network; each second function is used to indicate a deviation of an image quality indicator of the image processing network.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any of claims 1 to 9.
12. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 9.
CN202110844476.3A 2021-07-26 2021-07-26 Image processing method and device, electronic equipment and storage medium Active CN113596471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110844476.3A CN113596471B (en) 2021-07-26 2021-07-26 Image processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113596471A (en) 2021-11-02
CN113596471B (en) 2023-09-12

Family

ID=78249975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110844476.3A Active CN113596471B (en) 2021-07-26 2021-07-26 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113596471B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062780A (en) * 2017-12-29 2018-05-22 百度在线网络技术(北京)有限公司 Method for compressing image and device
CN110428378A (en) * 2019-07-26 2019-11-08 北京小米移动软件有限公司 Processing method, device and the storage medium of image
CN111147862A (en) * 2020-01-03 2020-05-12 南京大学 End-to-end image compression method based on target coding
CN111683250A (en) * 2020-05-13 2020-09-18 武汉大学 Generation type remote sensing image compression method based on deep learning
US10965948B1 (en) * 2019-12-13 2021-03-30 Amazon Technologies, Inc. Hierarchical auto-regressive image compression system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhengxue Cheng, et al., "Deep Convolutional AutoEncoder-based Lossy Image Compression," 2018 Picture Coding Symposium (PCS), 2018, full text. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant