CN116347251A - Image processing super-parameter optimization method, system, equipment and storage medium


Info

Publication number: CN116347251A
Application number: CN202310291835.6A
Authority: CN (China)
Legal status: Pending
Prior art keywords: image, image processing, training, super-parameter, processing model
Other languages: Chinese (zh)
Inventors: 韦钊, 胡旭阳, 王诗韵
Assignees: Suzhou Keyuan Software Technology Development Co., Ltd.; Suzhou Keda Technology Co., Ltd.
Application filed by Suzhou Keyuan Software Technology Development Co., Ltd. and Suzhou Keda Technology Co., Ltd.
Priority to CN202310291835.6A
Publication of CN116347251A

Classifications

    • G06N3/0464: Computing arrangements based on biological models; neural networks; convolutional networks [CNN, ConvNet]
    • G06N3/08: Neural networks; learning methods
    • H04N19/30: Coding/decoding of digital video signals using hierarchical techniques, e.g. scalability
    • H04N23/60: Control of cameras or camera modules comprising electronic image sensors
    • H04N23/81: Camera processing pipelines; suppressing or minimising disturbance in the image signal generation
    • H04N23/88: Camera processing pipelines; processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography using two or more images to influence resolution, frame rate or aspect ratio
    • Y02T10/40: Engine management systems

Abstract

The application provides an image processing super-parameter optimization method, system, device and storage medium. The method comprises the following steps: acquiring a plurality of groups of first training image data from a debugging device; training a first image processing model based on the first training image data; acquiring a plurality of groups of second training image data from a comparison device; training a second image processing model based on the second training image data; acquiring a third captured image from the debugging device and obtaining a third label image based on the second image processing model; and taking the third captured image and an initialized super-parameter combination as the input of the first image processing model, taking the third label image as the label data of the first image processing model, and optimizing the initialized super-parameter combination. The method and the device resolve the viewing-angle difference between the debugging device and the comparison device, and thereby improve the effect of image processing super-parameter optimization.

Description

Image processing super-parameter optimization method, system, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a method, a system, a device and a storage medium for optimizing image processing super-parameters.
Background
Commercial-grade imaging systems rely on an image signal processing (Image Signal Processing, ISP) flow, which typically consists of pixel-level image processing modules containing a large number of super-parameters, for reconstructing the RAW image on the sensor into an RGB image. In the surveillance field, the hardware image processing super-parameters (ISP super-parameters) interact with the reconstructed RGB image in complex ways. Conventionally, experienced ISP engineers must spend months tuning these super-parameters of a monitoring device to achieve the best visual image quality; this not only consumes significant time, but also makes it difficult to ensure, across incremental iterations, that the optimized parameters are globally or even locally optimal.
In recent years, with the excellent performance of convolutional neural networks (Convolutional Neural Network, CNN) and generative adversarial networks (Generative Adversarial Network, GAN) in pixel-level image processing, methods that replace the ISP flow with neural networks have emerged in large numbers. The main approach is to learn a differentiable proxy model of the debugging device's hardware ISP through a neural network, and then optimize the super-parameters of the debugging device's ISP using a comparison device image, registered by a traditional image registration algorithm, as the target image (GT), in the hope of reaching the comparison device's image style. In the prior art, a traditional image registration algorithm computes an affine transformation matrix to align the image of the comparison device with the image of the debugging device; however, when the fields of view of the comparison device and the debugging device differ greatly, serious pixel misalignment occurs, and the method cannot resolve pixel errors caused by lens distortion or scene depth differences, so the optimization effect on the image processing super-parameters is poor.
Content of the application
Aiming at the problems in the prior art, the purpose of the application is to provide an image processing super-parameter optimization method, system, device and storage medium that resolve the viewing-angle difference between the debugging device and the comparison device and thereby improve the effect of image processing super-parameter optimization.
The embodiment of the application provides an image processing super-parameter optimization method, which comprises the following steps:
acquiring a plurality of groups of first training image data from a debugging device, wherein each group of first training image data comprises a first captured image, a training super-parameter combination, and a first label image obtained by the debugging device processing the first captured image based on the training super-parameter combination;
training a first image processing model based on the first training image data;
acquiring a plurality of groups of second training image data from a comparison device, wherein each group of second training image data comprises a second captured image and a second label image obtained by the comparison device processing the second captured image;
training a second image processing model based on the second training image data;
acquiring a third captured image from the debugging device, and obtaining a third label image based on the second image processing model;
and taking the third captured image and an initialized super-parameter combination as the input of the first image processing model, taking the third label image as the label data of the first image processing model, and optimizing the initialized super-parameter combination.
By adopting this image processing super-parameter optimization method, a first image processing model simulating the internal image processing algorithm of the debugging device is obtained, a second image processing model simulating the internal image processing algorithm of the comparison device is obtained, a third label image is then obtained from a third captured image taken by the debugging device, and the third label image serves as the target image when the super-parameter combination is optimized.
In some embodiments, training the second image processing model based on the second training image data comprises the following steps:
passing the second captured image through a preset image correction algorithm to obtain a first corrected image;
taking the first corrected image as the input of the second image processing model, taking the second label image as the label data of the second image processing model, and training the second image processing model;
and obtaining the third label image based on the second image processing model comprises the following steps:
passing the third captured image through the preset image correction algorithm to obtain a second corrected image;
and inputting the second corrected image into the second image processing model to obtain the third label image output by the second image processing model.
In some embodiments, the image correction algorithm is configured to convert a first-format image into a second-format image based on sensor parameters;
obtaining the first corrected image from the second captured image through the preset image correction algorithm comprises the following steps: acquiring first sensor parameters of the comparison device, and converting the second captured image in the first format into the first corrected image in the second format by the image correction algorithm based on the first sensor parameters;
and obtaining the second corrected image from the third captured image through the preset image correction algorithm comprises the following steps: acquiring second sensor parameters of the debugging device, and converting the third captured image in the first format into the second corrected image in the second format by the image correction algorithm based on the second sensor parameters.
In some embodiments, acquiring the plurality of groups of second training image data from the comparison device comprises the following steps:
acquiring a plurality of comparison captured images of a static scene at the same moment;
and averaging the plurality of comparison captured images to obtain the second captured image.
In some embodiments, the first image processing model comprises an encoder and a decoder, the encoder comprising a plurality of coding modules connected in series in sequence;
training the first image processing model based on the first training image data comprises the following steps:
inputting the first captured image and the training super-parameter combination in each group of first training image data into the first image processing model in sequence, wherein each coding module of the first image processing model fuses its input image with the input super-parameter combination, extracts a feature map, and takes the feature map as the output image of the coding module, the input image of the first coding module being the first captured image and the input image of every coding module other than the first being the output image of the previous coding module;
and acquiring the output image of the first image processing model, constructing a loss function with the first label image, and reverse-iteratively optimizing the first image processing model.
In some embodiments, the first image processing model is a Uformer model, and each coding module treats its input image, the input super-parameter combination and its output image as the information to be queried (Q), the queried information (K) and the values obtained by querying (V), and fuses them with a self-attention mechanism.
In some embodiments, each coding module comprises a convolution unit, a first dimension flattening unit, a second dimension flattening unit, a first dot multiplication unit, a probability weight calculation unit, and a second dot multiplication unit;
the input super-parameter combination of the coding module is fed through the convolution unit into the first dimension flattening unit, and the input image of the coding module is fed into the second dimension flattening unit; the output data of the first dimension flattening unit and of the second dimension flattening unit are dot-multiplied by the first dot multiplication unit to obtain dot-product data; the dot-product data are input into the probability weight calculation unit; the output data of the probability weight calculation unit and of the second dimension flattening unit are input into the second dot multiplication unit; and the second dot multiplication unit outputs a feature map as the output image of the coding module.
An embodiment of the invention further provides an image processing super-parameter optimization system, applied to the image processing super-parameter optimization method above, the system comprising:
a first model training module, configured to acquire a plurality of groups of first training image data from a debugging device, wherein each group of first training image data comprises a first captured image, a training super-parameter combination, and a first label image obtained by the debugging device processing the first captured image based on the training super-parameter combination, and to train a first image processing model based on the first training image data;
a second model training module, configured to acquire a plurality of groups of second training image data from a comparison device, wherein each group of second training image data comprises a second captured image and a second label image obtained by the comparison device processing the second captured image, and to train a second image processing model based on the second training image data;
a super-parameter optimization module, configured to acquire a third captured image from the debugging device, obtain a third label image based on the second image processing model, take the third captured image and an initialized super-parameter combination as the input of the first image processing model, take the third label image as the label data of the first image processing model, and optimize the initialized super-parameter combination.
With the image processing super-parameter optimization system of the application, the first model training module obtains a first image processing model that simulates the internal image processing algorithm of the debugging device, and the second model training module obtains a second image processing model that simulates the internal image processing algorithm of the comparison device. The super-parameter optimization module then obtains the target third label image from a third captured image taken by the debugging device and uses it as the target image when optimizing the super-parameter combination. Because the third label image is obtained by processing the third captured image through the second image processing model, there is no viewing-angle difference between the third label image and the third captured image, and they are unaffected by problems such as lens distortion and inconsistent image resolution; and because the third label image is produced by the second image processing model that simulates the comparison device, the image processing style of the comparison device is imitated when the super-parameter combination is optimized, improving the optimization effect of the super-parameter combination.
The embodiment of the application also provides an image processing super-parameter optimizing device, which comprises:
a processor;
a memory having stored therein executable instructions of the processor;
Wherein the processor is configured to perform the steps of the image processing super-parameter optimization method via execution of the executable instructions.
With the image processing super-parameter optimization device provided by the application, the processor performs the image processing super-parameter optimization method when executing the executable instructions, and the beneficial effects of the image processing super-parameter optimization method can thus be obtained.
The embodiment of the application also provides a computer readable storage medium for storing a program, which when executed by a processor, implements the steps of the image processing super-parameter optimization method.
With the computer-readable storage medium provided by the application, the program stored therein, when executed, implements the steps of the image processing super-parameter optimization method, and the beneficial effects of the image processing super-parameter optimization method are thus obtained.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings.
FIG. 1 is a flow chart of an image processing super-parameter optimization method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a first image processing model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an encoding module according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an image processing super-parameter optimization system according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image processing super-parameter optimization device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computer storage medium according to an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus a repetitive description thereof will be omitted. Although the terms "first" or "second" etc. may be used herein to describe certain features, these features should be interpreted in a descriptive sense only and not for purposes of limitation as to the number and importance of the particular features.
As shown in fig. 1, in an embodiment, the present application provides an image processing super-parameter optimization method, which includes the following steps:
S100: acquiring a plurality of groups of first training image data from a debugging device, wherein each group of first training image data comprises a first captured image, a training super-parameter combination, and a first label image obtained by the debugging device processing the first captured image based on the training super-parameter combination;
The debugging device is a device whose image processing parameters need to be tuned, for example a monitoring device or an image processing device. The image directly captured by the debugging device is the first captured image, and the first label image is obtained by processing the first captured image based on the training super-parameter combination. The training super-parameter combination can be obtained by stratified sampling over the value range of each super-parameter; the specific implementation is described in detail below.
In this embodiment, the first captured image is a RAW image captured by the debugging device, and the first label image is an RGB image or a YUV image obtained after the hardware ISP processing flow of the debugging device. The first label image is taken as an RGB image for illustration, that is, the hardware ISP processing flow of the debugging device processes the captured RAW image into an RGB image. In an alternative embodiment, the first label image may also be a YUV image, which can be converted into an RGB image by a program as needed.
S200: training a first image processing model based on the first training image data;
Specifically, the first captured image and the training super-parameter combination in each group of first training image data are used as the input of the first image processing model, and the first label image is used as the label data of the first image processing model. A loss function is constructed from the output of the first image processing model and the first label image, and the first image processing model is trained by reverse iteration with gradient descent based on the loss function. During training, the intrinsic model parameters of the first image processing model, other than the image processing super-parameters, are optimized; the resulting first image processing model is a simulation model that simulates the image processing algorithm flow of the debugging device.
S300: acquiring a plurality of groups of second training image data from a comparison device, wherein each group of second training image data comprises a second captured image and a second label image obtained by the comparison device processing the second captured image;
The comparison device is a device whose image processing style the debugging device is intended to imitate. The comparison device may be a device that has been tuned in advance, for example a monitoring device or an image processing device, or a digital camera. The second captured image is the image directly captured by the comparison device, for example a RAW image, and the second label image is the image processed by the internal image processing algorithm of the comparison device, for example an RGB image or a YUV image. The second label image is taken as an RGB image for illustration, that is, the image processing algorithm of the comparison device also processes the captured RAW image into an RGB image. In an alternative embodiment, the second label image may also be a YUV image, which can be converted into an RGB image by a program as needed.
S400: training a second image processing model based on the second training image data;
For example, the second captured image, either directly or after being processed by a preset correction algorithm, is used as the input of the second image processing model, and the second label image is used as the label data of the second image processing model. A loss function is constructed from the output of the second image processing model and the second label image, and the intrinsic model parameters of the second image processing model are reverse-iteratively optimized with gradient descent; the resulting second image processing model is a simulation model that simulates the image processing style of the comparison device.
S500: acquiring a third captured image from the debugging device, and obtaining a third label image based on the second image processing model;
The third label image is a target image that carries the image processing style of the comparison device and is pixel-level aligned with the third captured image, and it is used for the subsequent automatic optimization of the super-parameters. The third label image is obtained based on the second image processing model, by inputting into the second image processing model either the third captured image or the third captured image processed by a preset correction algorithm. The third captured image is, for example, a RAW image, and the third label image is, for example, an RGB image or a YUV image: when the second label image is an RGB image, the third label image is correspondingly an RGB image, and when the second label image is a YUV image, the third label image is correspondingly a YUV image.
S600: taking the third captured image and an initialized super-parameter combination as the input of the first image processing model, taking the third label image as the label data of the first image processing model, and optimizing the initialized super-parameter combination;
In this process, the intrinsic model parameters of the first image processing model other than the super-parameter combination (this part having been obtained through the optimization training in step S200) are fixed; a loss function is constructed from the output of the first image processing model and the third label image, and the initialized super-parameter combination is optimized by reverse iteration.
With the image processing super-parameter optimization method above, a first image processing model simulating the internal image processing algorithm of the debugging device is obtained through steps S100-S200, and a second image processing model simulating the internal image processing algorithm of the comparison device is obtained through steps S300-S400. A target third label image is then obtained in step S500 from the third captured image taken by the debugging device, and serves as the target image when the super-parameter combination is optimized in step S600. Because the third label image is obtained by processing the third captured image through the second image processing model, there is no viewing-angle difference between the third label image and the third captured image, and they are unaffected by problems such as lens distortion and inconsistent image resolution; and because the third label image is produced by the second image processing model that simulates the comparison device, the image processing style of the comparison device is imitated when the super-parameter combination is optimized, improving the optimization effect of the super-parameter combination.
In this embodiment, in step S100, acquiring the plurality of groups of first training image data from the debugging device comprises: using a media control stream capture program in a darkroom to automatically acquire the first captured image $I_{raw}$ in RAW format captured by the debugging device, and randomly setting an image processing super-parameter combination P as the training super-parameter combination, to obtain the first label image $I_{rgb}$ in RGB format processed under the super-parameter combination P by the internal hardware image processing algorithm of the debugging device. Through this acquisition process, a large number of $(I_{raw}, P, I_{rgb})$ triplets are obtained. When the image processing super-parameter combination P is set randomly, each super-parameter $p_i$, $i = 1, 2, 3, \ldots$, is sampled by stratified sampling over its value range, and the samples are then combined randomly, so that all possible values of the super-parameters are sampled uniformly.
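As an illustration of this stratified sampling, the sketch below divides each super-parameter's value range into equal strata, draws one value per stratum, and combines the drawn values randomly; the function name, strata count and combination count are assumptions made for illustration, not values taken from the patent.

```python
import random

def sample_combinations(param_ranges, strata=8, n_combos=1000):
    """param_ranges: dict of name -> (low, high) value range.
    Each super-parameter is sampled once per stratum of its range,
    then per-parameter values are combined randomly into combinations P."""
    per_param = {}
    for name, (lo, hi) in param_ranges.items():
        width = (hi - lo) / strata
        per_param[name] = [lo + (k + random.random()) * width  # one draw per stratum
                           for k in range(strata)]
    return [{name: random.choice(vals) for name, vals in per_param.items()}
            for _ in range(n_combos)]
```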
The principle by which the first image processing model simulates the image processing algorithm of the debugging device is as follows:
the image processing algorithm flow of the debugging device is regarded as a function f for converting an input RAW image into an output RGB image ISP . Since the parameterization can be performed by the superparameter combination P, I rgb =f ISP (I raw The method comprises the steps of carrying out a first treatment on the surface of the P). Thus, the ISP hyper-parameter optimization problem can be defined as:
Figure BDA0004141777570000091
Wherein P is * Representing an optimal ISP superparameter combination optimized by task-specific evaluation criteria, N representing a target image
Figure BDA0004141777570000092
Quantity of->
Figure BDA0004141777570000093
For the ith input RAW image, P is the super-parameter combination corresponding to the input RAW image one by one, and the tone is adjustedThe image processing algorithm flow of the test equipment is based on P pairs +.>
Figure BDA0004141777570000094
Processing to obtain target image->
Figure BDA0004141777570000095
Normally a hardware or software ISP flow $f_{ISP}$ is not differentiable, which makes it impossible to optimize equation (1) with the gradient descent method. To solve this problem, the application proposes a differentiable proxy model $f_{proxy}$ (the first image processing model) and trains $f_{proxy}$ so that $f_{proxy} \approx f_{ISP}$. Like $f_{ISP}$, $f_{proxy}$ takes the super-parameter combination P as an additional input and maps the input image $I_{raw}$ into $\hat{I}_{rgb}$, i.e. $\hat{I}_{rgb} = f_{proxy}(I_{raw}, P; W)$, where W represents the internal intrinsic parameters of the first image processing model. By optimizing equation (2), the parameters $W^*$ of an $f_{proxy}$ approximating $f_{ISP}$ can be obtained, where M represents the number of groups of first training data:

$$W^* = \underset{W}{\arg\min}\; \frac{1}{M} \sum_{i=1}^{M} \left\| f_{proxy}(I_{raw}^{i}, P^{i}; W) - I_{rgb}^{i} \right\| \tag{2}$$
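A minimal PyTorch sketch of optimizing equation (2) follows; the L1 norm, the optimizer, the learning rate and the assumption that the model takes (raw, params) as inputs are illustrative choices, since the text does not fix a training configuration.

```python
import torch

def train_proxy(model, dataloader, epochs=10, lr=1e-4):
    """Fit f_proxy(I_raw, P; W) to the device output I_rgb, per equation (2)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()
    for _ in range(epochs):
        for raw, params, rgb in dataloader:   # (I_raw, P, I_rgb) triplets
            opt.zero_grad()
            pred = model(raw, params)         # hat{I}_rgb = f_proxy(I_raw, P; W)
            loss = loss_fn(pred, rgb)         # || f_proxy(...) - I_rgb ||
            loss.backward()
            opt.step()
    return model
```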
After the debugging device captures a RAW image, the hardware image processing algorithm flow performs a series of processing on the image, including correction processing (such as color correction, automatic white balance, black level and shading correction) and parameter-adjustment processing (such as demosaicing, sharpening, 2D noise reduction and 3D noise reduction); the first image processing model needs to simulate all of these image processing modes of the debugging device. In one embodiment, the first image processing model may employ a fully convolutional neural network based on U-NET, or another type of convolutional neural network model. The embodiment adopting the U-NET model still has certain limitations in model parameters and overall network capacity, with the following main disadvantages: (1) as the number of adjustable super-parameters keeps increasing, for example to 166 super-parameters or more, the model generalizes poorly and its feature extraction capability is limited when it learns how the processed image is generated from the corresponding super-parameters; (2) for tasks that cannot be optimized over a large number of super-parameters at once, the super-parameters must be decomposed into small modules and searched one by one, which increases the complexity of the work and easily ignores the coupling between different image processing modules (such as the coupling between the sharpening and contrast modules and the 2D noise reduction module). As the number of optimized super-parameters grows and the search space formed by super-parameter combinations rises dramatically, the model needs a stronger inductive-bias capability to accommodate the differences in image style caused by changes of the super-parameters, while also accommodating the larger search space.
On this basis, another embodiment further provides the structure and construction of another first image processing model which, compared with a classical convolutional neural network, has stronger expressive power in feature extraction, generalizes better over the different image styles caused by large numbers of different parameters, and fits the style of the target image better, thereby offering a new direction for the development of large models for automatic image super-parameter optimization.
Specifically, the first image processing model comprises an encoder and a decoder; the encoder comprises a plurality of coding modules connected in series in sequence, with a downsampling module arranged between every two adjacent coding modules, so that the input image and the input super-parameter combination are fused repeatedly at multiple downsampled scales through the coding modules and the downsampling modules. Step S200: training the first image processing model based on the first training image data comprises the following steps:
inputting the first captured image $I_{raw}$ and the training super-parameter combination P in each group of first training image data into the first image processing model, wherein each coding module of the first image processing model fuses its input image with the input super-parameter combination, extracts a feature map, and takes the feature map as the output image of the coding module; the input image of the first coding module is the first captured image, and the input image of every coding module other than the first is the output image of the previous coding module. If multiple groups of first training image data are input to the first image processing model at the same time, the correspondence between each output image and each group of first training image data needs to be maintained;
acquiring the output image of the first image processing model, constructing a loss function with the first label image $I_{rgb}$, and reverse-iteratively optimizing the first image processing model. If multiple groups of first training image data are input at the same time, the loss function is constructed, for each group, between the output image obtained from that group's first captured image and the first label image of the same group.
In this embodiment, as shown in fig. 2, the first image processing model is a Uformer model, and the coding modules are LeWin blocks. Uformer is a network with a layered codec structure; LeWin blocks (locally-enhanced window Transformer blocks) were introduced in the Uformer design and compute self-attention within non-overlapping local windows. Five coding modules are taken here as an example, but the application is not limited thereto; in different embodiments a different number of coding modules, for example four or six, may be selected as needed. In fig. 2, Degraded Image is the input image of the first image processing model, and the super-parameter combinations fed through the downward arrows above it into each LeWin block are the other input of the first image processing model. When the first image processing model is trained, the Degraded Image is the first captured image, and the super-parameter combinations input to the LeWin blocks are the training super-parameter combinations. The input image first passes, for example, through a convolution layer composed of a 3x3 convolution kernel and a LeakyReLU activation function to extract features, yielding the Input Projection in fig. 2, which serves as the input image of the first LeWin block; the feature map extracted by each LeWin block is output and serves as the input image of the next LeWin block. In fig. 2, the five LeWin blocks connected by downsampling modules are the coding modules, while the LeWin blocks below, connected by upsampling modules, are the decoding modules; the last decoding module outputs its image to the Output Projection, which yields the output image of the first image processing model, the Restored Image. The Modulator is a learnable modulator; adding a modulator at each stage of the decoder allows flexible adjustment of the feature map, improving the performance in recovering details.
In this embodiment, each coding module uses its input image, the super-parameter combination and its output image as the information Q to be queried, the queried information K and the queried value V, and fuses them with the self-attention mechanism. Specifically, as shown in fig. 3, each coding module comprises a convolution unit (Conv2D(3x3)), a first dimension flattening unit (the left Flatten), a second dimension flattening unit (the right Flatten), a first dot multiplication unit (the upper torch.matmul), a probability weight calculation unit (comprising a softmax unit and a sigmoid unit), and a second dot multiplication unit (the lower torch.matmul). A LayerNorm unit, which performs layer normalization, is further arranged between the second dimension flattening unit and the first dot multiplication unit.
As shown in fig. 3, the input super-parameter combination of the coding module passes through the convolution unit, where a 3x3 convolution aligns the super-parameter combination with the image channels, and is then input into the first dimension flattening unit. The input image of the coding module is input into the second dimension flattening unit, which flattens the N x C x H x W input into an N x C x (H*W) feature map (where N is the number of groups of RAW images and super-parameter combinations P input at one time, C is the total number of channels of the input RAW image and super-parameter combination P after channel concatenation, H is the height of the input image, and W is the width of the input image); the spatial dimensions H x W are thus reduced to the single dimension H*W. The output data of the first and second dimension flattening units are dot-multiplied by the first dot multiplication unit to obtain dot-product data; the dot-product data are input into the probability weight calculation unit; the output data of the probability weight calculation unit and the output data of the second dimension flattening unit are input into the second dot multiplication unit; and the second dot multiplication unit outputs a feature map as the output image of the coding module. Extensive experiments show that, with the feature fusion structure of the coding module shown in fig. 3, the fusion of the image and the super-parameters is better than with positional encoding, channel concat or a modulator; the structure can learn the deep relation within the $(I_{raw}, P, I_{rgb})$ triplet, reduce the amount of computation, realize multi-modal fusion of the image and the super-parameters, greatly reduce the number of channels in the channel dimension, and greatly improve the computation and training speed.
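A possible PyTorch reading of this fusion structure is sketched below; the module name, the broadcast of P into a parameter map, and the use of the softmax branch are assumptions made for illustration rather than details fixed by the text.

```python
import torch
import torch.nn as nn

class ParamImageFusion(nn.Module):
    # One assumed reading of the coding-module fusion of fig. 3 (illustrative).
    def __init__(self, n_params: int, channels: int):
        super().__init__()
        # Convolution unit: align the super-parameter map with the image channels.
        self.conv = nn.Conv2d(n_params, channels, kernel_size=3, padding=1)
        # LayerNorm between the second flattening unit and the first dot product.
        self.norm = nn.LayerNorm(channels)

    def forward(self, img: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        # img: (N, C, H, W) input image or feature map; params: (N, n_params).
        n, c, h, w = img.shape
        p_map = params[:, :, None, None].expand(-1, -1, h, w)   # broadcast P
        k = self.conv(p_map).flatten(2)       # first flattening: (N, C, H*W)
        v = img.flatten(2)                    # second flattening: (N, C, H*W)
        q = self.norm(v.transpose(1, 2)).transpose(1, 2)        # LayerNorm
        attn = torch.matmul(q, k.transpose(1, 2))   # first dot multiplication
        attn = torch.softmax(attn, dim=-1)    # probability weight calculation
        # (the sigmoid unit could be substituted here for softmax)
        out = torch.matmul(attn, v)           # second dot multiplication
        return out.view(n, c, h, w)           # feature map output
```

With the channel count C rather than H*W as the attention dimension, the attention matrix is only C x C, which is consistent with the text's remark that the fusion greatly reduces cost in the channel dimension.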
Because this embodiment constructs the first image processing model with Uformer, which has strong feature extraction capability over images and super-parameters, the super-parameters of several processing modules in the debugging device can be combined and optimized simultaneously. For example, the super-parameters of modules such as sharpening and contrast are fused together with the super-parameters of the 2D noise reduction module; the training super-parameter combination P then contains the values of the super-parameters of the sharpening and contrast modules as well as those of the 2D noise reduction module, so that 166 super-parameters, or even more, can be optimized and trained at the same time. Combining the sharpening and contrast modules with the 2D noise reduction module improves the training efficiency of the model that simulates the overall image processing algorithm flow: only one model needs to be trained to optimize the super-parameters of the sharpening, contrast and 2D noise reduction modules simultaneously, and then one more model to optimize the super-parameters of the 3D noise reduction module. The image processing effect is also somewhat better than training three separate U-NET-based image processing models. The number of modules to be sampled is reduced, while the coupling between different image processing modules (such as the sharpening and contrast modules and the 2D noise reduction module) is better captured.
The Uformer-based first image processing model described herein is only an example. In other alternative embodiments, the first image processing model may also be constructed from an existing convolutional neural network model, comprising convolution layers, pooling layers, downsampling layers, upsampling layers and the like, which likewise achieves the purposes of the application and falls within its scope of protection.
As described above, in order to solve the prior-art problem of the viewing-angle difference between the comparison device and the debugging device, the application proposes training the second image processing model to simulate the processing style of the comparison device. Specifically, in this embodiment the second image processing model may employ an EnhanceNet model, but the application is not limited thereto; in other embodiments the second image processing model may also be another type of machine learning model, such as a convolutional neural network, which likewise falls within the scope of protection of the application.
In this embodiment, when training the second image processing model, in order to solve the problem that the model cannot generalize because RAW images collected in high-gain scenes carry heavy noise, multiple RAW images of a static scene at the same moment are collected by the comparison device and then averaged; this greatly reduces the noise on the RAW image without losing detail information. Specifically, in step S300, acquiring the plurality of groups of second training image data from the comparison device comprises the following steps:
acquiring N' comparison captured images $\{ I_{raw}^{c,i} \}_{i=1}^{N'}$ of a static scene at the same moment, with N' > 1;
and averaging the N' comparison captured images to obtain the second captured image. The averaging is shown in equation (3), and the computed $\bar{I}_{raw}^{c}$ is taken as the second captured image:

$$\bar{I}_{raw}^{c} = \frac{1}{N'} \sum_{i=1}^{N'} I_{raw}^{c,i} \tag{3}$$
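A small sketch of the averaging of equation (3), under the assumption that the N' captures are already aligned because the scene is static; the function name is illustrative.

```python
import numpy as np

def average_raw_frames(frames):
    """Average N' RAW captures of a static scene, per equation (3)."""
    assert len(frames) > 1, "equation (3) assumes N' > 1"
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return stack.mean(axis=0)  # noise standard deviation shrinks ~ 1/sqrt(N')
```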
Further, considering that the image sensor models adopted by the comparison device and the debugging device may differ, the processing parameters used when the RAW image undergoes black level correction, AWB correction and the like also differ; these processing parameters are directly tied to the image sensor model and can be obtained by calibrating each specific image sensor model. To solve the problem of inconsistent image sensor models between the comparison device and the debugging device, this embodiment further corrects the RAW images acquired by the comparison device into RGB space before training the second image processing model, thereby reducing the differences between different image sensors in processing RAW images.
In this embodiment, in step S300, acquiring the plurality of groups of second training image data from the comparison device comprises: using the comparison device to snapshot a RAW image of the darkroom scene and obtaining the RGB image processed by the comparison device, recorded respectively as the second captured image $I_{raw}^{c}$ in RAW format and the second label image $I_{rgb}^{c}$ in RGB format.
Specifically, step S400: training the second image processing model based on the second training image data comprises the following steps:
acquiring the first sensor parameters of the comparison device and, based on the first sensor parameters, converting the second captured image $I_{raw}^{c}$ in RAW format into the first corrected image $I_{corr}^{c}$ in RGB format through the preset image correction algorithm. The image correction algorithm converts a RAW image into an RGB image based on sensor parameters; the image sensor parameters include information such as black level, CCM (color correction matrix) and AWB (automatic white balance), and the image correction algorithm includes black level correction, AWB correction, demosaicing, CCM correction and the like;
and taking the first corrected image as the input of the second image processing model, taking the second label image as the label data of the second image processing model, and training the second image processing model. When the second label image is an RGB image, the second image processing model is a simulation model that performs noise reduction, image enhancement and similar processing on the input corrected RGB image to obtain the output RGB image; that is, the second image processing model fits the mapping from $I_{corr}^{c}$ to $I_{rgb}^{c}$. When the second label image is a YUV image, the second image processing model is a simulation model that applies noise reduction, image enhancement and similar processing together with a format conversion to turn the input corrected RGB image into the output YUV image.
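To make the correction flow concrete, here is a deliberately simplified sketch of such a RAW-to-RGB correction under an assumed RGGB Bayer pattern; the half-resolution demosaic and all parameter names are illustrative stand-ins for the calibrated, sensor-specific algorithm the text describes.

```python
import numpy as np

def demosaic_rggb_half(img):
    # Crude half-resolution demosaic for an RGGB mosaic (illustration only).
    r = img[0::2, 0::2]
    g = (img[0::2, 1::2] + img[1::2, 0::2]) / 2.0
    b = img[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def correct_raw_to_rgb(raw, black_level, white_level, awb_gains, ccm):
    # Black level correction and normalization to [0, 1].
    img = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    img = np.clip(img, 0.0, 1.0)
    rgb = demosaic_rggb_half(img)                      # demosaicing
    rgb = rgb * np.asarray(awb_gains)[None, None, :]   # AWB per-channel gains
    rgb = np.clip(rgb @ np.asarray(ccm).T, 0.0, 1.0)   # CCM correction
    return rgb
```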
In step S400, the second image processing model is trained on the basis of EnhanceNet; the network structure uses UNet, the input is $I_{corr}^{c}$ and the output is $\hat{I}_{rgb}^{c}$. The model training process is shown in equation (4), where W denotes the internal intrinsic parameters of the second image processing model and $f_{enhance}$ the second image processing model:

$$W^* = \underset{W}{\arg\min}\; \sum_{i} \left\| f_{enhance}(I_{corr}^{c,i}; W) - I_{rgb}^{c,i} \right\| \tag{4}$$
The embodiment in which correction processing is performed before the second captured image is input into the second image processing model is only an alternative. In another embodiment, when the comparison device and the debugging device adopt the same image sensor model, or the image sensor parameters differ only very slightly, the correction step can be omitted and the second captured image is input directly into the second image processing model for training; that is, the input of the second image processing model is the RAW image directly captured by the device.
In step S500, acquiring the third captured image from the debugging device comprises using the debugging device to snapshot the third captured image $I_{raw}^{d}$ of the darkroom scene in RAW format; the scene content of the third captured image should be kept as consistent as possible with that of the second captured image, but the fields of view are not strictly required to coincide. In step S500, obtaining the third label image based on the second image processing model comprises the following steps:
acquiring the second sensor parameters of the debugging device and, based on the second sensor parameters, converting the third captured image $I_{raw}^{d}$ in RAW format into the second corrected image $I_{corr}^{d}$ in RGB format (or, in another embodiment, YUV format, depending on the class of second label image employed by the second image processing model during training) through the image correction algorithm;
inputting the second corrected image $I_{corr}^{d}$ into the second image processing model to obtain the third label image $\tilde{I}_{rgb}^{d}$ output by the second image processing model; this third label image $\tilde{I}_{rgb}^{d}$ serves as the target image when the super-parameter combination is optimized.
In step S600, a group of image processing super-parameters is randomly initialized as the combination P. The third captured image $I_{raw}^{d}$ and the initialized image processing super-parameter combination P are fed into the converged first image processing model to obtain $\hat{I}_{rgb}^{d}$; the loss between $\hat{I}_{rgb}^{d}$ and the third label image $\tilde{I}_{rgb}^{d}$ is then computed, and the initialized image processing super-parameter combination P is optimized with the gradient descent method to obtain the optimal image processing super-parameter combination. Specifically, after the converged first image processing model is obtained through step S200, the parameters $W^*$ of $f_{proxy}$ are available; by fixing $W^*$, equation (1) can be rewritten as equation (5):

$$P^* = \underset{P}{\arg\min}\; L_{task}\left( f_{proxy}(I_{raw}^{d}, P; W^*),\; \tilde{I}_{rgb}^{d} \right) \tag{5}$$

Thus, for a given task loss $L_{task}$, each super-parameter $p_i \in P$ of the image super-parameter combination is optimized with the gradient descent method, $p_i \leftarrow p_i - \eta \, \partial L_{task} / \partial p_i$.
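A hedged PyTorch sketch of this step follows: the trained proxy's weights W* are frozen and only P receives gradients. The Adam optimizer, learning rate, step count, [0, 1] parameter normalization and L1 stand-in for the task loss are assumptions made for illustration.

```python
import torch

def optimize_hyperparams(proxy_model, raw_image, target, n_params,
                         steps=500, lr=1e-2):
    """Equation (5): argmin over P of L_task(f_proxy(I_raw, P; W*), target)."""
    proxy_model.eval()
    for w in proxy_model.parameters():   # fix W*; only P is optimized
        w.requires_grad_(False)
    p = torch.rand(1, n_params, requires_grad=True)   # initialized combination P
    opt = torch.optim.Adam([p], lr=lr)
    loss_fn = torch.nn.L1Loss()          # stand-in for the task loss L_task
    for _ in range(steps):
        opt.zero_grad()
        pred = proxy_model(raw_image, p.clamp(0, 1))
        loss = loss_fn(pred, target)
        loss.backward()
        opt.step()
    return p.detach().clamp(0, 1)        # optimized super-parameter combination
```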
As shown in fig. 4, an embodiment of the invention further provides an image processing super-parameter optimization system, applied to the image processing super-parameter optimization method above; the system comprises:
a first model training module M100, configured to acquire a plurality of groups of first training image data from a debugging device, wherein each group of first training image data comprises a first captured image, a training super-parameter combination, and a first label image obtained by the debugging device processing the first captured image based on the training super-parameter combination, and to train a first image processing model based on the first training image data;
a second model training module M200, configured to acquire a plurality of groups of second training image data from a comparison device, wherein each group of second training image data comprises a second captured image and a second label image obtained by the comparison device processing the second captured image, and to train a second image processing model based on the second training image data;
a super-parameter optimization module M300, configured to acquire a third captured image from the debugging device, obtain a third label image based on the second image processing model, take the third captured image and an initialized super-parameter combination as the input of the first image processing model, take the third label image as the label data of the first image processing model, and optimize the initialized super-parameter combination.
With the image processing super-parameter optimization system of the application, the first model training module M100 obtains a first image processing model that simulates the internal image processing algorithm of the debugging device, and the second model training module M200 obtains a second image processing model that simulates the internal image processing algorithm of the comparison device. The super-parameter optimization module M300 then obtains the target third label image from a third captured image taken by the debugging device and uses it as the target image when optimizing the super-parameter combination. Because the third label image is obtained by processing the third captured image through the second image processing model, there is no viewing-angle difference between the third label image and the third captured image, and they are unaffected by problems such as lens distortion and inconsistent image resolution; and because the third label image is produced by the second image processing model that simulates the comparison device, the image processing style of the comparison device is imitated when the super-parameter combination is optimized, improving the optimization effect of the super-parameter combination.
The embodiment of the application also provides an image processing super-parameter optimizing device, which comprises a processor; a memory having stored therein executable instructions of the processor; wherein the processor is configured to perform the steps of the image processing super-parameter optimization method via execution of the executable instructions.
Those skilled in the art will appreciate that the various aspects of the present application may be implemented as a system, method, or program product. Accordingly, aspects of the present application may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module" or "system."
An electronic device 600 according to this embodiment of the present application is described below with reference to fig. 5. The electronic device 600 shown in fig. 5 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 5, the electronic device 600 is embodied in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different system components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610, such that the processing unit 610 performs the steps according to various exemplary embodiments of the present application described in the image processing super-parameter optimization method section of this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory, such as a random access memory (RAM) 6201 and/or a cache memory 6202, and may further include a read-only memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 660. The network adapter 660 may communicate with the other modules of the electronic device 600 over the bus 630. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
With the image processing super-parameter optimizing device provided by the present application, the processor carries out the image processing super-parameter optimization method when executing the executable instructions, so the beneficial effects of that method are likewise obtained.
The embodiment of the application also provides a computer-readable storage medium for storing a program which, when executed by a processor, implements the steps of the image processing super-parameter optimization method. In some possible embodiments, the various aspects of the present application may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the present application described in the image processing super-parameter optimization method section of this specification.
Referring to fig. 6, a program product 800 for implementing the above-described method according to an embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or cluster. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
With the computer-readable storage medium provided by the present application, the stored program, when executed, implements the steps of the image processing super-parameter optimization method, so the beneficial effects of that method are likewise obtained.
The foregoing is a further detailed description of the present application in connection with the specific preferred embodiments, and it is not intended that the practice of the present application be limited to such description. It should be understood that those skilled in the art to which the present application pertains may make several simple deductions or substitutions without departing from the spirit of the present application, and all such deductions or substitutions should be considered to be within the scope of the present application.

Claims (10)

1. An image processing super-parameter optimization method is characterized by comprising the following steps:
acquiring a plurality of groups of first training image data from a debugging device, wherein each group of first training image data comprises a first captured image, a training super-parameter combination, and a first label image obtained by the debugging device by processing the first captured image based on the training super-parameter combination;
training a first image processing model based on the first training image data;
acquiring a plurality of groups of second training image data from a contrast device, wherein each group of second training image data comprises a second captured image and a second label image obtained by the contrast device by processing the second captured image;
training a second image processing model based on the second training image data;
acquiring a third captured image from the debugging device, and obtaining a third label image based on the second image processing model;
and taking the third captured image and an initialized super-parameter combination as the input of the first image processing model, taking the third label image as the label data of the first image processing model, and optimizing the initialized super-parameter combination.
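For illustration only (outside the claims): each group of first training image data in claim 1 is a triple of captured image, training super-parameter combination, and device-produced label image. A minimal PyTorch Dataset sketch follows; the class name and the assumption that samples are stored as tensor files are hypothetical.

```python
import torch
from torch.utils.data import Dataset

class FirstTrainingData(Dataset):
    """One sample = (first captured image, training super-parameter
    combination, first label image), mirroring the triples of claim 1.
    The on-disk layout is an assumption made for this sketch."""

    def __init__(self, records):
        # records: list of (captured_path, theta_vector, label_path)
        self.records = records

    def __len__(self):
        return len(self.records)

    def __getitem__(self, i):
        captured_path, theta, label_path = self.records[i]
        captured = torch.load(captured_path)   # (C, H, W) captured image
        label = torch.load(label_path)         # device output under theta
        return captured, torch.as_tensor(theta, dtype=torch.float32), label
```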
2. The image processing super-parameter optimization method according to claim 1, wherein training a second image processing model based on the second training image data comprises the following steps:
subjecting the second captured image to a preset image correction algorithm to obtain a first corrected image;
taking the first corrected image as the input of the second image processing model, taking the second label image as the label data of the second image processing model, and training the second image processing model;
obtaining a third label image based on the second image processing model, comprising the following steps:
subjecting the third captured image to the preset image correction algorithm to obtain a second corrected image;
and inputting the second corrected image into the second image processing model to obtain a third label image output by the second image processing model.
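For illustration only (outside the claims): a minimal sketch of the two pipelines in claim 2, where correct_fn stands in for the preset image correction algorithm (which the claim leaves unspecified) and all names, plus the L1 loss, are assumptions.

```python
import torch

def train_step_model_b(model_b, opt, second_captured, second_label, correct_fn):
    """Training side of claim 2: correct the second captured image, then fit
    the second model to the second label image."""
    first_corrected = correct_fn(second_captured)
    opt.zero_grad()
    loss = torch.nn.functional.l1_loss(model_b(first_corrected), second_label)
    loss.backward()
    opt.step()

def get_third_label(model_b, third_captured, correct_fn):
    """Inference side of claim 2: correct the third captured image and run it
    through the trained second model to obtain the third label image."""
    with torch.no_grad():
        second_corrected = correct_fn(third_captured)
        return model_b(second_corrected)
```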
3. The image processing super-parameter optimization method according to claim 2, wherein the image correction algorithm is configured to convert an image in a first format into an image in a second format based on sensor parameters;
obtaining the first corrected image from the second captured image through the preset image correction algorithm comprises the following steps: acquiring first sensor parameters of the contrast device, and converting the second captured image in the first format into the first corrected image in the second format by the image correction algorithm based on the first sensor parameters;
and obtaining the second corrected image from the third captured image through the preset image correction algorithm comprises the following steps: acquiring second sensor parameters of the debugging device, and converting the third captured image in the first format into the second corrected image in the second format by the image correction algorithm based on the second sensor parameters.
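For illustration only (outside the claims): claim 3 does not name the two formats; a common reading is a conversion from raw sensor data to linear RGB driven by per-device sensor parameters. The sketch below assumes the input is already demosaiced to shape (3, H, W) and that the sensor parameters expose black level, white level, and white-balance gains — all hypothetical field names.

```python
import torch

def correct_image(raw_like, sensor):
    """Hypothetical sensor-parameter-driven format conversion for claim 3:
    black-level subtraction, range normalization, and per-channel gains.
    `sensor` is a dict with assumed keys, not an API from the application."""
    x = (raw_like - sensor["black_level"]) / (sensor["white_level"] - sensor["black_level"])
    x = x.clamp(0.0, 1.0)
    gains = torch.as_tensor(sensor["wb_gains"], dtype=x.dtype).view(-1, 1, 1)
    return (x * gains).clamp(0.0, 1.0)   # second-format (linear RGB) image
```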
4. The image processing super-parameter optimization method according to claim 1, wherein acquiring the plurality of groups of second training image data from the contrast device comprises the following steps:
acquiring a plurality of contrast captured images of a static scene at the same moment;
and averaging the plurality of contrast captured images to obtain the second captured image.
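For illustration only (outside the claims): averaging repeated captures of a static scene suppresses zero-mean temporal noise — the noise standard deviation of the mean of N frames falls by a factor of √N. A one-line sketch, with the (N, C, H, W) stacking an assumption:

```python
import torch

def second_captured_image(frames: torch.Tensor) -> torch.Tensor:
    """Claim 4: average N contrast captured images of the same static scene;
    frames has shape (N, C, H, W); the mean frame is the second captured image."""
    return frames.float().mean(dim=0)
```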
5. The image processing super-parameter optimization method according to claim 1, wherein the first image processing model comprises an encoder and a decoder, the encoder comprising a plurality of encoding modules connected in series;
training a first image processing model based on the first training image data, comprising the steps of:
inputting the first captured image and the training super-parameter combination of each group of first training image data into the first image processing model in sequence, wherein each encoding module of the first image processing model fuses its input image with the input super-parameter combination, extracts a feature map, and takes the feature map as the output image of the encoding module, the input image of the first encoding module being the first captured image and the input image of each subsequent encoding module being the output image of the preceding encoding module;
and obtaining an output image of the first image processing model, constructing a loss function against the first label image, and iteratively optimizing the first image processing model by back-propagation.
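For illustration only (outside the claims): a minimal training loop for claim 5, reusing the dataset sketch after claim 1. The optimizer, loss, and epoch count are assumptions, and model_a is presumed to accept (image, theta) as in the earlier sketches.

```python
import torch

def train_model_a(model_a, loader, epochs=10, lr=1e-4):
    """Claim 5 training step: feed each (captured image, super-parameter
    combination) pair through the first model and regress onto the first
    label image; the encoding modules fuse theta internally."""
    opt = torch.optim.Adam(model_a.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()
    for _ in range(epochs):
        for captured, theta, label in loader:
            opt.zero_grad()
            out = model_a(captured, theta)
            loss = loss_fn(out, label)
            loss.backward()     # the back-propagation iteration of claim 5
            opt.step()
    return model_a
```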
6. The image processing super-parameter optimization method according to claim 5, wherein the first image processing model is a Uformer model, and each encoding module takes its input image, the super-parameter combination, and its output image as the information to be queried, the querying information, and the values obtained by the query, respectively, and performs the fusion using a self-attention mechanism.
7. The image processing super-parameter optimization method according to claim 6, wherein each encoding module comprises a convolution unit, a first dimension flattening unit, a second dimension flattening unit, a first dot multiplication unit, a probability weight calculation unit, and a second dot multiplication unit;
the super-parameter combination input to the encoding module passes through the convolution unit into the first dimension flattening unit, and the input image of the encoding module is input into the second dimension flattening unit; the output data of the first dimension flattening unit and the output data of the second dimension flattening unit are dot-multiplied by the first dot multiplication unit, and the resulting dot-multiplied data are input into the probability weight calculation unit; the output data of the probability weight calculation unit and the output data of the second dimension flattening unit are input into the second dot multiplication unit; and the second dot multiplication unit outputs a feature map as the output image of the encoding module.
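For illustration only (outside the claims): claims 6-7 describe an attention-style fusion in which the super-parameter combination supplies the query and the image features supply the keys and values. A minimal single-head sketch follows; the 1×1 convolution, tensor shapes, and the omission of scaling and windowing are simplifying assumptions, not the claimed Uformer block.

```python
import torch
import torch.nn as nn

class EncodingModuleSketch(nn.Module):
    """Illustrative encoding module for claim 7: convolution unit, two
    dimension-flattening units, two dot-multiplication units, and a softmax
    as the probability weight calculation unit."""

    def __init__(self, channels: int, theta_dim: int):
        super().__init__()
        # convolution unit: lift theta into the image feature space
        self.conv = nn.Conv2d(theta_dim, channels, kernel_size=1)

    def forward(self, image: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        b, c, h, w = image.shape
        theta_map = theta.view(b, -1, 1, 1).expand(-1, -1, h, w)
        q = self.conv(theta_map).flatten(2)       # first dimension flattening
        k = image.flatten(2)                      # second dimension flattening
        attn = torch.bmm(q.transpose(1, 2), k)    # first dot multiplication
        attn = attn.softmax(dim=-1)               # probability weight unit
        out = torch.bmm(k, attn.transpose(1, 2))  # second dot multiplication
        return out.view(b, c, h, w)               # feature map = output image
```

Note that this sketch materializes a full (h·w)×(h·w) attention matrix; Uformer itself uses windowed attention to keep that cost tractable, which is one practical reason for the model choice named in claim 6.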
8. An image processing super-parameter optimization system, characterized in that it is applied to the image processing super-parameter optimization method according to any one of claims 1 to 7, the system comprising:
the first model training module is used for acquiring a plurality of groups of first training image data from a debugging device, wherein each group of first training image data comprises a first captured image, a training super-parameter combination, and a first label image obtained by the debugging device by processing the first captured image based on the training super-parameter combination, and for training a first image processing model based on the first training image data;
the second model training module is used for acquiring a plurality of groups of second training image data from a contrast device, wherein each group of second training image data comprises a second captured image and a second label image obtained by the contrast device by processing the second captured image, and for training a second image processing model based on the second training image data;
and the super-parameter optimization module is used for acquiring a third captured image from the debugging device, obtaining a third label image based on the second image processing model, taking the third captured image and an initialized super-parameter combination as the input of the first image processing model, taking the third label image as the label data of the first image processing model, and optimizing the initialized super-parameter combination.
9. An image processing super-parameter optimizing apparatus, characterized by comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the image processing super-parameter optimization method according to any one of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium storing a program, characterized in that the program when executed by a processor implements the steps of the image processing super-parameter optimization method according to any one of claims 1 to 7.
CN202310291835.6A 2023-03-23 2023-03-23 Image processing super-parameter optimization method, system, equipment and storage medium Pending CN116347251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310291835.6A CN116347251A (en) 2023-03-23 2023-03-23 Image processing super-parameter optimization method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310291835.6A CN116347251A (en) 2023-03-23 2023-03-23 Image processing super-parameter optimization method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116347251A true CN116347251A (en) 2023-06-27

Family

ID=86887324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310291835.6A Pending CN116347251A (en) 2023-03-23 2023-03-23 Image processing super-parameter optimization method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116347251A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648451A (en) * 2024-01-30 2024-03-05 青岛漫斯特数字科技有限公司 Data management method, system, device and medium for image processing
CN117648451B (en) * 2024-01-30 2024-04-19 青岛漫斯特数字科技有限公司 Data management method, system, device and medium for image processing

Similar Documents

Publication Publication Date Title
Wang et al. Learning depth from monocular videos using direct methods
Huang et al. Bidirectional recurrent convolutional networks for multi-frame super-resolution
US10593021B1 (en) Motion deblurring using neural network architectures
US11282207B2 (en) Image processing method and apparatus, and storage medium
Shen et al. Human-aware motion deblurring
WO2020063475A1 (en) 6d attitude estimation network training method and apparatus based on deep learning iterative matching
WO2021164234A1 (en) Image processing method and image processing device
RU2424561C2 (en) Training convolutional neural network on graphics processing units
US10726555B2 (en) Joint registration and segmentation of images using deep learning
KR20180050832A (en) Method and system for dehazing image using convolutional neural network
JP2021179833A (en) Information processor, method for processing information, and program
JP6901803B2 (en) A learning method and learning device for removing jittering from video generated by a swaying camera using multiple neural networks for fault tolerance and fracture robustness, and a test method and test device using it.
CN116347251A (en) Image processing super-parameter optimization method, system, equipment and storage medium
KR102225753B1 (en) Deep learning-based panorama image quality evaluation method and device
CN113344826A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111507288A (en) Image detection method, image detection device, computer equipment and storage medium
Song et al. Real-scene reflection removal with raw-rgb image pairs
CN110717958A (en) Image reconstruction method, device, equipment and medium
CN107729885B (en) Face enhancement method based on multiple residual error learning
US20230401737A1 (en) Method for training depth estimation model, training apparatus, and electronic device applying the method
CN105956606A (en) Method for re-identifying pedestrians on the basis of asymmetric transformation
CN111105364A (en) Image restoration method based on rank-one decomposition and neural network
CN113065496B (en) Neural network machine translation model training method, machine translation method and device
CN113688945A (en) Image processing hyper-parameter optimization method, system, device and storage medium
CN115170418A (en) Degradation-compliant low-rank high-dimensional image filling model and filling method and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination