CN111373436A - Image processing method, terminal device and storage medium - Google Patents


Info

Publication number
CN111373436A
CN111373436A (application CN201880071017.2A)
Authority
CN
China
Prior art keywords
algorithm
image
image processing
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880071017.2A
Other languages
Chinese (zh)
Inventor
Xue Lijun (薛立君)
Fyodor Kravchenko (费奥多尔·克拉夫琴科)
Zhao Cong (赵丛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Shenzhen Dajiang Innovations Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN111373436A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/90 - Dynamic range modification of images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/19 - Recognition using electronic means
    • G06V 30/192 - Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V 30/194 - References adjustable by an adaptive method, e.g. learning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image processing method, a terminal device and a storage medium. The method comprises: acquiring a target image (S101); acquiring a preset algorithm list that contains a plurality of image processing algorithm indexes (S102); and reading a pre-trained convolutional neural network model, in the order of the image processing algorithm indexes in the preset algorithm list, to process the target image (S103). A better image processing effect is thereby obtained with a smaller amount of computation.

Description

Image processing method, terminal device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, a terminal device, and a storage medium.
Background
At present, in some automatic image post-processing schemes, such as those based on reinforcement learning, when the number of image processing algorithms to be used increases, the search space of the reinforcement learning action set grows, so the amount of computation increases greatly and the image processing speed drops. In addition, some image processing algorithms may be invoked many times while others are never used, and it is difficult to define a good reward, so the image processing effect is greatly reduced. In schemes based on a generative adversarial network (GAN), there is a relatively strict limit on the resolution of the output picture; to guarantee running speed, generally only pictures with relatively small resolution can be generated.
Disclosure of Invention
Based on this, the present application provides an image processing method, a terminal device, and a storage medium that implement image processing based on a convolutional neural network, aiming to output a better image effect with a smaller amount of computation.
In a first aspect, the present application provides an image processing method, including:
acquiring a target image;
acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes;
and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
In a second aspect, the present application further provides a terminal device, wherein the terminal device includes a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a target image;
acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes;
and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
In a third aspect, the present application further provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when executed by a processor, the computer program causes the processor to implement:
acquiring a target image;
acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes;
and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
The embodiments of the application provide an image processing method, a terminal device and a storage medium. A target image and a preset algorithm list are obtained, the preset algorithm list comprising a plurality of image processing algorithm indexes; a pre-trained convolutional neural network model is read in the order of the image processing algorithm indexes in the preset algorithm list to process the target image and obtain an output image. The image processing method can output a better image processing result while using a smaller amount of computation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a flow chart illustrating steps of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow diagram of sub-steps of the image processing method of FIG. 1;
FIG. 3 is a flowchart illustrating steps of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic flow diagram of sub-steps of the image processing method of FIG. 3;
FIG. 5 is a schematic diagram of a scene using an image processing method according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating steps of an image processing method according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating steps of a convolutional neural network training method according to an embodiment of the present application;
FIG. 8 is a flow diagram illustrating sub-steps of the convolutional neural network training method of FIG. 7;
fig. 9 is a schematic block diagram of a terminal device according to an embodiment of the present application;
FIG. 10(a) is a schematic diagram of a convolutional neural network architecture used for reinforcement learning;
FIG. 10(b) is a schematic diagram of a convolutional neural network architecture provided by an embodiment of the present application;
FIG. 11(a) is a diagram of system resource allocation when the convolutional neural network architecture used for reinforcement learning is loaded;
fig. 11(b) is a system resource allocation diagram when the convolutional neural network architecture provided by the embodiment of the present application is loaded.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating steps of an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to terminal equipment and is used for automatic post-processing of images. The terminal equipment comprises a mobile phone, a tablet, a notebook computer, an unmanned aerial vehicle and the like.
The unmanned aerial vehicle may be a rotary-wing unmanned aerial vehicle, for example a quadrotor, hexarotor or octorotor unmanned aerial vehicle, or it may be a fixed-wing unmanned aerial vehicle. The unmanned aerial vehicle carries camera equipment.
Specifically, as shown in fig. 1, the image processing method includes steps S101 to S103.
And S101, acquiring a target image.
The target image is an image to be processed, or an image to be processed selected by a user, for example an original picture taken by the terminal device, or one or more pictures selected by the user in the picture library of the terminal device.
S102, obtaining a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes.
The image processing algorithm index is the algorithm identifier corresponding to an image processing algorithm; for example, the algorithm identifier corresponding to the color temperature adjustment algorithm is Temperature, and the algorithm identifier corresponding to the set color algorithm is Tint. Of course, instead of representing an algorithm by its corresponding English word, other types of identification may be used as the algorithm identifier, such as the Arabic numerals 1, 2, 3, etc., or the letters a, b, c, etc.
In this embodiment, the plurality of image processing algorithm indexes specifically comprise: a color temperature adjustment algorithm index, a set color algorithm index, an exposure adjustment algorithm index, a contrast adjustment algorithm index, a highlight recovery algorithm index, a low light compensation algorithm index, a white balance algorithm index, a sharpness adjustment algorithm index, a defogging algorithm index, a natural saturation adjustment algorithm index, a saturation adjustment algorithm index, and a tone curve algorithm index.
As shown in table 1, the plurality of image processing algorithms correspond to different algorithm indexes, and the plurality of image processing algorithm indexes are arranged in sequence, and the sequence is arranged corresponding to the corresponding arrangement number.
Table 1 is a preset algorithm list

No.  Image processing algorithm
1    adjust color temperature (Temperature)
2    set color (Tint)
3    adjust exposure (Exposure)
4    adjust contrast
5    highlight recovery
6    low light compensation
7    white balance
8    adjust sharpness
9    defog
10   adjust natural saturation
11   adjust saturation
12   tone curve (Tone Curve)
In the present embodiment, 12 image processing algorithms are used, arranged in the order of Table 1. Experiments show that performing the relevant image processing with these 12 algorithms in this order yields a better image processing effect.
It should be noted that, according to actual image processing requirements, image processing algorithms may be removed or added, such as deleting the tone curve or adding a cropping algorithm. The ordering of the image processing algorithms may likewise be adjusted, for example by swapping the order of the white balance and defogging algorithms in Table 1.
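The preset list and its modification can be sketched as plain data; this is an illustrative sketch, and identifiers other than Temperature, Tint, Exposure and ToneCurve are hypothetical placeholders, not taken from Table 1:

```python
# Preset algorithm list: 12 image processing algorithm indexes in processing order.
# Only Temperature, Tint, Exposure and ToneCurve appear in the text; the
# remaining identifier names are hypothetical placeholders.
PRESET_ALGORITHM_LIST = [
    "Temperature",           # 1. adjust color temperature
    "Tint",                  # 2. set color
    "Exposure",              # 3. adjust exposure
    "Contrast",              # 4. adjust contrast
    "HighlightRecovery",     # 5. highlight recovery
    "LowLightCompensation",  # 6. low light compensation
    "WhiteBalance",          # 7. white balance
    "Sharpness",             # 8. adjust sharpness
    "Dehaze",                # 9. defog
    "NaturalSaturation",     # 10. adjust natural saturation
    "Saturation",            # 11. adjust saturation
    "ToneCurve",             # 12. tone curve
]

def remove_algorithm(index_list, name):
    """Shrink the list when an algorithm is not needed, e.g. dropping the tone curve."""
    return [n for n in index_list if n != name]
```

Adding a cropping algorithm or swapping two entries is then a simple list edit, leaving the rest of the pipeline unchanged.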
S103, reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
Specifically, the image processing algorithms are ordered by their indexes in the preset algorithm list. As shown in Table 1, the order of the image processing algorithms, i.e. 1, 2, 3, ..., 11, 12, is: adjust color temperature, set color, adjust exposure, adjust contrast, highlight recovery, low light compensation, white balance, adjust sharpness, defog, adjust natural saturation, adjust saturation, and tone curve.
The pre-trained convolutional neural network model is read sequentially, in the order of the image processing algorithm indexes in the preset algorithm list, to process the target image and obtain an output image. Specifically, each time the convolutional neural network model is started for one image processing algorithm to process the target image and obtain an output image; the output image is then taken as the new target image, and starting the convolutional neural network model for the next image processing algorithm is executed in a loop until the final output image is obtained, thereby completing the processing of the target image.
In an embodiment, as shown in fig. 2, step S103 specifically includes sub-steps S1031 to S1034.
And S1031, sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list.
Specifically, an image processing algorithm index in the preset algorithm list is determined, and the image processing algorithm index corresponds to an image processing algorithm.
Wherein, the determining an image processing algorithm index in the preset algorithm list may be: and sequentially determining one image processing algorithm in the preset algorithm list according to the arrangement sequence of the plurality of image processing algorithms in the preset algorithm list.
For example, in Table 1, the algorithm index Temperature and the corresponding color temperature adjustment algorithm are determined; the algorithm index Tint and the corresponding set color algorithm are determined; and the algorithm index Tone Curve and the corresponding tone curve algorithm are determined.
It should be noted that after the algorithm index Temperature and the corresponding color temperature adjustment algorithm are determined, steps S1032 to S1033 are performed; then the algorithm index Tint and the corresponding set color algorithm are determined, the output image processed by the color temperature adjustment algorithm is taken as the target image, and steps S1032 to S1033 are executed again; this loops until all the algorithms in Table 1 have been executed.
Of course, instead of determining an image processing algorithm according to the arrangement order of the plurality of image processing algorithms in the preset algorithm list, an algorithm index and its corresponding image processing algorithm may be randomly determined from the preset algorithm list each time, after which steps S1032 and S1033 are performed, until all the algorithms in Table 1 have been executed.
S1032, determining algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model.
The target image is input into the convolutional neural network model to obtain the algorithm parameter corresponding to the image processing algorithm.
Specifically, after the image processing algorithm is determined, for example the exposure adjustment algorithm (Exposure), the target image is input into the convolutional neural network model to obtain the algorithm parameter corresponding to the exposure adjustment algorithm (Exposure).
The convolutional neural network model is a pre-trained model whose input is the target image for which an image processing algorithm has been determined; the target image is input into the convolutional neural network model, which outputs the corresponding algorithm parameter, such as the parameter of the exposure adjustment algorithm (Exposure).
In an optional embodiment, inputting the target image into the convolutional neural network model to obtain the algorithm parameter corresponding to the image processing algorithm includes: inputting the target image into the convolutional neural network model to obtain a plurality of candidate algorithm parameters corresponding to the image processing algorithm and a probability value corresponding to each candidate parameter; and determining the algorithm parameter corresponding to the image processing algorithm according to the probability values.
For example, the parameter for color temperature adjustment is a value in a range, for example {-100, 100}; the output of the convolutional neural network model is a probability value for each candidate in that range, for example color temperature 100 with probability 15% and color temperature 80 with probability 84%; the color temperature with the larger probability is then taken as the color temperature of the image. In an alternative example, by adding a comparator to the multiple outputs of the convolutional neural network, the color temperature value with the largest probability can be selected and output by the comparator.
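The comparator-style selection described above can be sketched as follows; this is an illustrative sketch, and the (value, probability) pair format is an assumption about how the model's outputs are packaged:

```python
def select_parameter(candidates):
    """Pick the parameter value with the highest probability, mimicking the
    comparator added to the network outputs. `candidates` is a list of
    (value, probability) pairs, assumed to be produced by the model."""
    value, prob = max(candidates, key=lambda vp: vp[1])
    return value

# Example from the text: color temperature 100 with probability 15%,
# color temperature 80 with probability 84% -> 80 is selected.
```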
And S1033, processing the target image by adopting the image processing algorithm according to the algorithm parameter to obtain an output image.
Specifically, processing the target image according to the image processing algorithm and the corresponding algorithm parameter to obtain an output image includes: calling the function corresponding to the image processing algorithm, with the algorithm parameter corresponding to that algorithm, to process the target image and obtain an output image.
For example, if the algorithm parameter determined for color temperature adjustment is 80, the function corresponding to the color temperature adjustment algorithm, for example a temperature function, is called, and the target image is processed with the temperature function at color temperature 80 to obtain the processed target image, i.e. the output image.
It should be noted that, given the algorithm parameter corresponding to an image processing algorithm, a corresponding image processing tool (e.g., a Camera Raw Tool, a Photoshop Tool, or a Lighting Tool) may also be called, and the image processing tool applies the image processing algorithm to the target image to obtain an output image. That is, the convolutional neural network may, in the form of a plug-in, provide image processing algorithm parameters to an already packaged image processing module, so that the module performs a single processing pass on the input picture according to those parameters.
S1034, taking the output image as the target image, returning to the step of sequentially starting the image processing algorithms corresponding to the image processing algorithm indexes in the preset algorithm list, and continuing to execute until all the image processing algorithm indexes in the preset algorithm list have been started, so as to obtain a final output image.
Steps S1033 and S1034 together amount to: processing the target image with each image processing algorithm, according to its algorithm parameter, to obtain an output image.
For example, the output image produced by the color temperature adjustment algorithm is taken as the target image; steps S1031 to S1034 are executed again to determine the set color algorithm, the output image produced by the set color algorithm is taken as the target image, and so on in a loop; the final output image is obtained once all the image processing algorithm indexes in the preset algorithm list have been started. In each step, each convolutional neural network outputs its parameter only once and, based on that parameter, calls the image processing module to perform the processing. The input of each image processing module is the result output by the previous image processing module. This process can use lightweight convolutional neural networks, which greatly saves computing power and increases the picture processing speed.
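The loop of sub-steps S1031 to S1034 can be sketched as follows; this is an illustrative sketch in which `models` and `apply_algorithm` are hypothetical stand-ins for the per-algorithm convolutional neural networks and the packaged image processing module:

```python
def process_image(target_image, algorithm_list, models, apply_algorithm):
    """Run each algorithm in the preset order. For every index, the matching
    lightweight CNN (models[name]) predicts one parameter from the current
    image; the image processing module applies it; its output becomes the
    input to the next step (S1031-S1034)."""
    image = target_image
    for name in algorithm_list:          # S1031: start algorithms in list order
        model = models[name]             # load only the minimal network needed
        param = model(image)             # S1032: predict the algorithm parameter
        image = apply_algorithm(name, image, param)  # S1033: single processing pass
    return image                         # S1034: final output image
```

Because each iteration loads only the one small network that predicts the current parameter, the peak model size stays small regardless of how many algorithms the list contains.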
The image processing method provided by this embodiment obtains a target image and a preset algorithm list, the preset algorithm list comprising a plurality of image processing algorithm indexes, and reads a pre-trained convolutional neural network model in the order of the image processing algorithm indexes in the preset algorithm list to process the target image and obtain an output image. The method completes image processing by means of the preset algorithm list and the convolutional neural network model; compared with schemes based on reinforcement learning or generative adversarial networks, only a minimal neural network targeting the current parameter needs to be loaded for each pass, so the size of the loaded convolutional neural network can be greatly reduced, ensuring the feasibility of the algorithm on low-computing-power platforms. A better image processing result is output with a smaller amount of computation, i.e. an image with the same resolution as the target image can be obtained.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating the steps of another image processing method according to an embodiment of the present application. The image processing method can be applied to a terminal device for automatic post-processing of a shot original image, outputting a better image processing result with a smaller amount of computation.
Specifically, as shown in fig. 3, the image processing method includes steps S201 to S206.
S201, obtaining an image to be processed and an image format of the image to be processed.
Specifically, the image to be processed may be an image selected by the user in a picture library of the terminal device, or an image just captured by the user through the terminal device. Each image to be processed comprises a corresponding image format, and the image format of the image to be processed can be determined through suffix naming of the image to be processed.
For example, if the suffix of image 1 to be processed is JPG, its image format is JPEG (Joint Photographic Experts Group); if the suffix of image 2 to be processed is RAW, its image format is RAW (raw image format), and an image in RAW format is the raw data obtained when the image sensor converts the captured light signal into a digital signal.
S202, judging whether the image format of the image to be processed is a RAW format.
Specifically, whether the image format of the image to be processed is the RAW format is judged according to the suffix of the image to be processed, and a judgment result is generated; different steps (step S203 or step S204) are then performed according to the judgment result.
The RAW format has many advantages; therefore, in this embodiment, an image in RAW format is selected for image processing so as to obtain a better image processing effect. Its advantages are as follows:
First, in terms of tonal levels: images in RAW format retain more levels than images in JPG format. Operations common in later image processing, such as adding or subtracting exposure, adjusting highlights or shadows, increasing or decreasing contrast, and adjusting levels and curves, destroy the continuity of the tonal levels and produce level jumps; small jumps are hard to perceive with the naked eye, but large ones destroy the texture of the picture. A common example is the uniform gradual transition of a blue sky being broken in post-processing into unsightly banding. The 14-bit color depth of RAW, relative to the JPG format, has the advantage that there are enough levels to keep the level jumps below what the naked eye can distinguish.
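The tonal-level comparison can be made concrete with a quick calculation on the two bit depths mentioned in the text:

```python
# Number of distinguishable levels per channel for each bit depth.
levels_raw = 2 ** 14   # 14-bit RAW sensor data: 16384 levels
levels_jpg = 2 ** 8    # 8-bit JPG file: 256 levels

# RAW keeps 64 times more tonal levels per channel, which is the headroom
# that keeps level jumps below what the naked eye can distinguish.
ratio = levels_raw // levels_jpg
```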
Second, for white balance correction: a RAW image is 14-bit sensor raw data, so accurate color temperature correction, for example from cold to warm over 0 K to 10000 K, can be performed with an image processing tool (Camera Raw tool), even if the white balance was set completely wrongly at shooting time (for example, shooting with daylight white balance under an indoor incandescent lamp). With the JPG format, since a JPG file uses only 8 bits, the color temperature can be adjusted only slightly by a few limited methods (such as Color Balance) in PS (Photoshop tool), and detail is lost when pulling a strongly warm-toned picture back to a normal color temperature.
Third, correction of bright and dark areas. Overexposure and underexposure are the most common problems in photography. If the image is stored in RAW format after shooting, exposure compensation of at least ±2 EV can be corrected in post-processing while complete picture detail is retained; with the JPG format, serious overexposure or underexposure found later cannot be recovered, because even after the exposure is corrected, only detail-less pure white or pure black remains in the picture.
Therefore, in this embodiment, before using the image processing method, the terminal device must be configured for image saving so that it saves the taken picture as a RAW image, or, when saving an image in another format, also saves the corresponding RAW image.
And S203, acquiring the image to be processed as a target image.
Specifically, if the image format of the image to be processed is RAW, the image to be processed is directly acquired as the target image, and step S205 is executed.
And S204, inquiring and acquiring the RAW format image corresponding to the image to be processed as the target image.
Specifically, if the image format of the image to be processed is not the RAW format, the image in the RAW format corresponding to the image to be processed is queried and acquired as the target image, and step S205 is executed.
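Steps S202 to S204 amount to a simple suffix check followed by a lookup; this is a minimal sketch in which `find_raw_counterpart` is a hypothetical stand-in for querying the RAW image corresponding to the image in the device's picture library:

```python
from pathlib import Path

def resolve_target_image(path, find_raw_counterpart):
    """S202-S204: if the image to be processed is already in RAW format,
    use it directly as the target image; otherwise query the RAW image
    saved alongside it via the hypothetical lookup callable."""
    if Path(path).suffix.lower() == ".raw":   # S202: judge format by suffix
        return path                            # S203: use the image directly
    return find_raw_counterpart(path)          # S204: query the RAW counterpart
```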
S205, obtaining a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes.
The preset algorithm list includes a plurality of image processing algorithm indexes arranged in stack form, specifically a first-in first-out (FIFO) form. Since the plurality of image processing algorithm indexes are stored first-in first-out, it is ensured that the pre-trained convolutional neural network model is read in the order of the image processing algorithm indexes in the preset algorithm list.
In an embodiment, as shown in fig. 4, before the step of obtaining the preset algorithm list, the image processing method further includes: step S205a and step S205 b.
S205a, acquiring an algorithm list, wherein the algorithm list comprises a plurality of image processing algorithm indexes arranged in sequence; S205b, saving the plurality of image processing algorithm indexes in the algorithm list in first-in first-out form to generate the preset algorithm list.
Specifically, the algorithm list may be table 1, and includes a plurality of image processing algorithm indexes arranged in sequence, where each image processing algorithm index is the algorithm identifier of an image processing algorithm. The image processing algorithm indexes in the algorithm list are stored in a first-in first-out stack form to generate the preset algorithm list, which further guarantees that the pre-trained convolutional neural network models are read according to the sequence of the image processing algorithm indexes in the preset algorithm list.
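Steps S205a and S205b amount to loading an ordered list of algorithm identifiers into a first-in first-out container. One way to sketch this in Python is with `collections.deque`; the identifiers below follow the naming used in the text (Temperature, Tint, and so on) but the function name is illustrative.

```python
from collections import deque

def build_preset_algorithm_list(algorithm_indexes):
    """S205b: store the ordered algorithm indexes first-in first-out,
    so that models are later read in exactly this order."""
    return deque(algorithm_indexes)

preset = build_preset_algorithm_list(
    ["Temperature", "Tint", "Exposure", "Contrast"])
# Popping from the left yields the indexes in their original order.
first = preset.popleft()  # "Temperature"
```

A `deque` consumed with `popleft` gives exactly the first-in first-out reading order the patent requires for the preset algorithm list.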
S206, reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
The pre-trained convolutional neural network models are read sequentially, in the first-in first-out stack order, to process the target image and obtain an output image. Specifically, each time, one convolutional neural network model is started to process the target image according to one image processing algorithm, obtaining an output image; this output image is then taken as the new target image, and the next convolutional neural network model is started according to the next image processing algorithm. The loop is executed until all image processing algorithms in the preset algorithm list have been executed, at which point the final output image is obtained and the processing of the target image is complete.
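The loop just described — read a model per index, predict a parameter, apply the algorithm, and feed the output back in as the next target image — can be sketched as below. `load_model` and `apply_algorithm` are placeholders for the per-index model loading and image operations; the patent does not define their concrete interfaces.

```python
def process_with_algorithm_list(target_image, preset_list, load_model, apply_algorithm):
    """S206: for each index (FIFO order), read the pre-trained model,
    predict the algorithm parameter, apply the algorithm, and feed the
    output image back in as the next target image."""
    for index in preset_list:
        model = load_model(index)        # read the pre-trained CNN for this index
        param = model(target_image)      # predict the algorithm parameter
        target_image = apply_algorithm(index, param, target_image)
    return target_image                  # final output image
```

The accumulator pattern (output of one stage becomes input of the next) is the essential point; each stage is independent apart from the image handed along.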
As shown in fig. 5, the terminal device is specifically a smartphone; it may certainly also be an unmanned aerial vehicle, but this embodiment is illustrated with a smartphone only. Before the smartphone uses the image processing method to process the image to be processed, two setting operations are required.
Setting operation one: image saving needs to be configured on the terminal device (the smartphone), so that the terminal device saves a shot image as an image in RAW format, or, when an image is saved in another format, still keeps the RAW-format image corresponding to it.
Setting operation two: the pre-trained convolutional neural network model is stored in the terminal device; optionally, a deep learning model compression technique may be used to compress the pre-trained convolutional neural network model before storing it in the terminal device.
After the setting operations are completed, the user can use the smartphone to process the image to be processed with the image processing method. For example, the smartphone performs image processing on a captured image with the image processing algorithms, obtaining a relatively good image processing effect with a relatively small amount of computation. The image processing method is therefore particularly suitable for terminal devices such as smartphones and unmanned aerial vehicles, whose processing capability is weaker than that of a server.
As shown in fig. 5, when a user uses the smartphone 10 to photograph a distant tree 20, an image processing function or an automatic processing function is selected after shooting. Both functions are designed based on the image processing method, and the captured image is processed with the image processing algorithms to obtain a tree picture 30 with a better image effect.
Specifically, the smartphone 10 executes the following steps: acquiring an image to be processed and the image format of the image to be processed; judging whether the image format of the image to be processed is the RAW format; if the image format of the image to be processed is the RAW format, directly acquiring the image to be processed as a target image; or, if the image format of the image to be processed is not the RAW format, querying and acquiring the RAW-format image corresponding to the image to be processed as the target image; and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list to process the target image and obtain an output image 30.
In the image processing method provided by the above embodiment, the image to be processed and its image format are acquired; whether the image format is the RAW format is judged; if so, the image to be processed is directly acquired as the target image; otherwise, the RAW-format image corresponding to the image to be processed is queried and acquired as the target image; and the pre-trained convolutional neural network models are read according to the sequence of the image processing algorithm indexes in the preset algorithm list to process the target image. The method can thus output images with a good effect while the terminal device uses only a small amount of computation, thereby improving the user experience.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating the steps of another image processing method according to an embodiment of the present application. The image processing method can be applied to a terminal device, and is used for performing automatic post-processing on a captured original image and outputting a better image processing result with a smaller amount of computation.
Specifically, as shown in fig. 6, the image processing method includes steps S301 to S305.
S301, obtaining a target image, wherein the target image is in a RAW format.
Specifically, the target image is an image to be processed, for example an original picture taken by the terminal device, or one or more pictures selected by the user in the gallery of the terminal device, where the selected picture is an image in RAW format.
S302, a preset algorithm list is obtained, and the preset algorithm list comprises a plurality of image processing algorithm indexes.
Specifically, the image processing algorithm index is the algorithm identifier corresponding to an image processing algorithm; for example, the algorithm identifier corresponding to the color temperature adjustment algorithm is Temperature, and the algorithm identifier corresponding to the color setting algorithm is Tint. In this embodiment, the plurality of image processing algorithms shown in table 1 may be used, each corresponding to a different algorithm index, and the image processing algorithm indexes are arranged in sequence.
S303, inputting the target image into a convolutional neural network model for training to obtain an algorithm parameter corresponding to one image processing algorithm in the preset algorithm list.
Specifically, after the preset algorithm list is determined, the target image is input into the convolutional neural network model for training to obtain the algorithm parameter corresponding to one image processing algorithm in the preset algorithm list, for example the exposure level of one image processing algorithm (such as the exposure adjustment algorithm) in the preset algorithm list.
In this embodiment, the target image is input into the convolutional neural network model for training in an image-driven manner to obtain the algorithm parameter corresponding to one image processing algorithm in the preset algorithm list: the convolutional neural network model matches the corresponding image and determines the algorithm parameter of the image processing algorithm corresponding to that image. This differs from the algorithm-driven manner provided in the above embodiment, in which the convolutional neural network model determines the algorithm parameters of a given image processing algorithm, and it can therefore improve the image processing effect.
S304, processing the target image according to the algorithm parameters and the corresponding image processing algorithm to obtain an output image.
Specifically, the processing the target image according to the image processing algorithm and the corresponding algorithm parameter to obtain an output image includes: and calling a function corresponding to the image processing algorithm to process the target image according to the algorithm parameter corresponding to the image processing algorithm so as to obtain an output image.
For example, if it is determined that the algorithm parameter corresponding to color temperature adjustment is 80, a function corresponding to the color temperature adjustment algorithm, for example a temperature function, is called. The target image is processed by the temperature function with the color temperature value 80 to obtain the processed target image, i.e., the output image.
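Calling "the function corresponding to the image processing algorithm" is a dispatch by algorithm identifier. The registry and the toy `temperature` function below are illustrative assumptions — the patent names the function but does not define its internals, so the channel-shift body here is only a stand-in.

```python
def temperature(image, value):
    # Toy stand-in for a color-temperature adjustment: shift the red
    # and blue channels in opposite directions by the predicted value.
    r, g, b = image
    return (r + value, g, b - value)

# Registry mapping algorithm identifiers (S302) to their functions.
ALGORITHM_FUNCTIONS = {"Temperature": temperature}

def apply_algorithm(algorithm_id, param, image):
    """S304: call the function registered for this algorithm index,
    passing the algorithm parameter predicted by the model."""
    return ALGORITHM_FUNCTIONS[algorithm_id](image, param)

output = apply_algorithm("Temperature", 80, (100, 100, 100))
# output == (180, 100, 20)
```

Adding further entries (Tint, Exposure, and so on) to the registry would cover the rest of table 1 without changing the dispatch logic.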
S305, taking the output image as the target image, returning to the step of inputting the target image into a convolutional neural network for training and continuing to execute until the image processing algorithms in the preset algorithm list are completely trained to obtain a final output image.
For example, the output image obtained by the color temperature adjustment algorithm is taken as the target image, and steps S303 to S305 are performed to determine another image processing algorithm, such as the color setting algorithm; the output image obtained by the color setting algorithm is in turn taken as the target image, and S303 to S305 are executed in a loop until all image processing algorithms in the preset algorithm list have been trained, yielding the final output image.
In the image processing method provided in the above embodiment, a target image in RAW format and a preset algorithm list are obtained, where the preset algorithm list includes a plurality of image processing algorithm indexes; inputting the target image into a convolutional neural network model for training to obtain an algorithm parameter corresponding to one image processing algorithm in the preset algorithm list; processing the target image according to the algorithm parameters and the corresponding image processing algorithm to obtain an output image; and taking the output image as the target image, returning to the step of inputting the target image into a convolutional neural network for training and continuing to execute until the image processing algorithms in the preset algorithm list are completely trained to obtain a final output image. The image processing method can not only reduce the calculation amount of the terminal equipment, but also obtain a better image processing result.
Referring to fig. 7, fig. 7 is a flowchart illustrating the steps of a convolutional neural network training method according to an embodiment of the present disclosure. The convolutional neural network training method is used to obtain a convolutional neural network model. The embodiments of the image processing method may use the convolutional neural network model obtained by this training method or, of course, a convolutional neural network model obtained by other training methods.
In the present embodiment, GoogLeNet is used to perform model training to obtain the convolutional neural network model, but other networks, such as AlexNet or VGGNet, may also be used. The following description takes GoogLeNet as an example.
Specifically, as shown in fig. 7, the convolutional neural network training method includes step S401 and step S404.
S401, obtaining image sample data, wherein the image sample data comprises a plurality of image data processed by an image processing algorithm and algorithm parameters corresponding to the image data.
Specifically, the image sample data is a data set of images post-processed by professional photographers, together with the processing that was applied. The image sample data includes a plurality of image data processed by image processing algorithms and the algorithm parameters corresponding to the image data.
In an embodiment, to make it easier for the convolutional neural network model to compute usable algorithm parameters, the algorithm parameters in the image sample data need to be quantized before model training to obtain quantized image sample data.
Specifically, the inputs of the image processing algorithms in all image sample data are quantized to numerical variables with fixed ranges. For example, the exposure value (Exposure) is quantized to {-5.00, +5.00}, and the highlight restoration value (HighlightsRecovery) is quantized to {-100, +100}.
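The quantization step maps each labeled parameter into its fixed range. A minimal clamp-to-range sketch follows, with the two ranges taken from the examples in the text; the function and table names are illustrative.

```python
# Fixed ranges per parameter, from the examples above.
PARAM_RANGES = {
    "Exposure": (-5.00, 5.00),          # quantized to {-5.00, +5.00}
    "HighlightsRecovery": (-100, 100),  # quantized to {-100, +100}
}

def quantize_parameter(name, value):
    """Clamp a labeled algorithm parameter into its fixed training range,
    so every sample presents the network with a bounded numerical target."""
    lo, hi = PARAM_RANGES[name]
    return max(lo, min(hi, value))
```

Values already inside the range pass through unchanged; only out-of-range labels are clipped to the boundary.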
S402, carrying out iterative training according to the image sample data based on a convolutional neural network to obtain a convolutional neural network model, wherein output parameters of the convolutional neural network model are algorithm parameters corresponding to an image processing algorithm.
Specifically, model training is performed with GoogLeNet on the image sample data; back-propagation training may be used. Features are extracted from the input image sample data by the convolutional and pooling layers of GoogLeNet, the fully connected layers serve as a classifier, and the outputs of the classifier are the different image processing algorithms and their corresponding algorithm parameters.
Specifically, all filters and parameters/weights are initialized with random values; the neural network takes the training image sample data as input, goes through the forward propagation steps (convolution, ReLU and pooling operations, and forward propagation through the fully connected layers), and obtains the output probability of each class.
In an embodiment, to make it easier for the convolutional neural network model to compute usable algorithm parameters, the algorithm parameters in the image sample data are quantized to obtain quantized image sample data. Correspondingly, performing iterative training according to the image sample data to obtain a convolutional neural network model includes: performing iterative training according to the quantized image sample data based on a convolutional neural network algorithm to obtain the convolutional neural network model.
And S403, verifying the convolutional neural network model by taking the algorithm parameters corresponding to the image data as calibration data.
Specifically, the algorithm parameters corresponding to the image data are used as calibration data (ground truth) to define a loss function (loss) that verifies the accuracy of the trained convolutional neural network model.
As shown in fig. 8, the verifying the convolutional neural network model by using the algorithm parameter corresponding to the image data as calibration data specifically includes:
s4031, setting algorithm parameters corresponding to the image data as calibration data; s4032, calculating a difference value between the calibration data and an algorithm parameter corresponding to the image processing algorithm in the output parameters to serve as a gap loss value; s4033, judging whether the gap loss value meets a preset condition, wherein the preset condition is a condition for measuring the accuracy of the convolutional neural network model; s4034, when the gap loss value meets the preset condition, judging that the convolutional neural network model passes verification.
Specifically, the algorithm parameters corresponding to the image processing algorithms used by the photographer are taken as calibration data (ground truth), and large-scale iterative training is performed with the prepared image sample data. After learning the semantic information of a picture, the convolutional neural network outputs the algorithm parameters corresponding to the image processing algorithm; the difference between the output algorithm parameters and the calibration data (ground truth) is taken as the gap loss value (loss), which is reduced as much as possible during model training to ensure the accuracy of the model.
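The gap loss value described above — the difference between the network's predicted algorithm parameters and the photographer-labeled calibration data — could, for a batch of predictions, be written as a mean absolute difference. The patent does not fix the exact form of the loss, so the L1 form below is an assumption for illustration.

```python
def gap_loss(predicted_params, ground_truth_params):
    """Mean absolute difference between the predicted algorithm parameters
    and the photographer-labeled calibration data (ground truth)."""
    assert len(predicted_params) == len(ground_truth_params)
    total = sum(abs(p - g)
                for p, g in zip(predicted_params, ground_truth_params))
    return total / len(predicted_params)
```

Any other distance (squared error, Huber) would serve the same role of being driven down during the iterative training.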
In the model training, the gap loss value (loss) is reduced as much as possible, and by judging whether the gap loss value meets a preset condition, the accuracy of the convolutional neural network model can be measured against that condition.
In an embodiment, the determining whether the gap loss value satisfies a predetermined condition includes: monitoring the change value of the gap loss value corresponding to each iterative training; and if the variation value of the gap loss value corresponding to each iterative training is within a preset range, judging that the gap loss value meets the preset condition.
For example, if the variation of the gap loss value corresponding to each iterative training is within a preset range, such as (0.00001, 0.00002), the gap loss value is essentially stable and changes little; it is therefore determined that the gap loss value satisfies the preset condition, and the convolutional neural network model passes verification.
In an embodiment, the determining whether the gap loss value satisfies a preset condition further includes: when the gap loss value corresponding to each iterative training is reduced, judging whether the gap loss value is smaller than a preset value or not; and if the gap loss value is smaller than the preset value, judging that the gap loss value meets the preset condition.
For example, the preset value is 0.001, when the gap loss value corresponding to each iterative training is reduced and is smaller than 0.001, it is determined that the gap loss value satisfies the preset condition, and it is determined that the convolutional neural network model passes verification.
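The two preset conditions just described — the per-iteration change of the loss falling within a preset range, or a decreasing loss dropping below a preset value — can be sketched as two small predicates. The default bounds mirror the examples in the text; the function names are illustrative.

```python
def loss_stable(prev_loss, curr_loss, low=0.00001, high=0.00002):
    """Condition 1: the change of the gap loss value between iterations
    lies in the preset range, i.e. the loss has essentially stopped moving."""
    change = abs(prev_loss - curr_loss)
    return low < change < high

def loss_below_threshold(prev_loss, curr_loss, preset=0.001):
    """Condition 2: the gap loss value is still decreasing and has
    dropped below the preset value."""
    return curr_loss < prev_loss and curr_loss < preset
```

A training loop would evaluate one (or both) of these after each iteration and stop, declaring the model verified, once the chosen condition holds.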
S404, saving the convolution neural network model passing the verification as a pre-trained convolution neural network model.
Specifically, after model training and verification, all weights and parameters of the convolutional neural network are optimized, and images in the image sample data, or other images, can be correctly classified and identified to obtain the corresponding algorithm parameters. Therefore, the verified convolutional neural network model is saved to the terminal device as the pre-trained convolutional neural network model.
Of course, before the verified convolutional neural network model is used as the pre-trained convolutional neural network model, it may be compressed, and the compressed model is then stored in the terminal device. The compression specifically includes pruning, quantization, Huffman coding, and the like, applied to the convolutional neural network model to reduce its size and thereby make it easier to store in a terminal device with limited capacity.
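Of the compression steps named (pruning, quantization, Huffman coding), magnitude pruning is the simplest to illustrate: weights whose absolute value falls below a threshold are zeroed, and the resulting sparse model stores far fewer non-zero values. The flat weight list and threshold below are illustrative only; real frameworks prune per layer with their own tooling.

```python
def prune_weights(weights, threshold=0.01):
    """Magnitude pruning: set weights with small absolute value to zero;
    the sparse result compresses well when stored."""
    return [0.0 if abs(w) < threshold else w for w in weights]

pruned = prune_weights([0.005, -0.2, 0.001, 0.5])
# pruned == [0.0, -0.2, 0.0, 0.5]
```

Quantization and Huffman coding would then be applied to the surviving weights to shrink the stored model further.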
Referring to fig. 9, fig. 9 is a schematic block diagram of a terminal device according to an embodiment of the present application. The terminal device 500 comprises a processor 501 and a memory 502, the processor 501 and the memory 502 being connected by a bus 503, such as an I2C (Inter-integrated Circuit) bus 503.
Specifically, the Processor 501 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 502 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB disk, or a removable hard disk.
The processor 501 is configured to run a computer program stored in the memory 502, and when executing the computer program, implement the following steps:
acquiring a target image; acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes; and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
Optionally, the processor, when implementing the acquiring the target image, is configured to implement:
acquiring an image to be processed and the image format of the image to be processed; judging whether the image format of the image to be processed is the RAW format; and if the image format of the image to be processed is the RAW format, acquiring the image to be processed as a target image.
Optionally, after the determining whether the image format of the image to be processed is the RAW format, the processor is further configured to:
and if the image format of the image to be processed is not the RAW format, inquiring and acquiring the RAW format image corresponding to the image to be processed as the target image.
Optionally, the preset algorithm list includes a plurality of image processing algorithm indexes arranged in a stack.
Optionally, before implementing the obtaining of the preset algorithm list, the processor is further configured to implement:
acquiring an algorithm list, wherein the algorithm list comprises a plurality of image processing algorithm indexes arranged in sequence; and storing the image processing algorithm indexes in the algorithm list in a first-in first-out stack form to generate the preset algorithm list.
Optionally, the plurality of image processing algorithm indexes include a color temperature adjustment algorithm index, a color setting algorithm index, an exposure adjustment algorithm index, a contrast adjustment algorithm index, a highlight restoration algorithm index, a low light compensation algorithm index, a white balance algorithm index, a sharpness adjustment algorithm index, a defogging algorithm index, a natural saturation adjustment algorithm index, a saturation adjustment algorithm index, and a tone curve algorithm index.
Optionally, when implementing the reading of the pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image, the processor is configured to implement:
sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list; determining algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model; and processing the target image by adopting the image processing algorithm according to the algorithm parameters to obtain an output image.
Optionally, when the processor sequentially starts the image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list, the processor is configured to:
and determining an image processing algorithm index in the preset algorithm list and an image processing algorithm corresponding to the image processing algorithm index.
Optionally, when the determining of one image processing algorithm index in the preset algorithm list is implemented, the processor is configured to implement:
and sequentially determining one image processing algorithm in the preset algorithm list according to the arrangement sequence of the plurality of image processing algorithms in the preset algorithm list.
Optionally, when the determining of the algorithm parameter corresponding to the image processing algorithm according to the convolutional neural network model is implemented, the processor is configured to implement:
and inputting the target image into the convolutional neural network model for training to obtain algorithm parameters corresponding to the image processing algorithm.
Optionally, when the target image is input into the convolutional neural network model for training to obtain the algorithm parameter corresponding to the image processing algorithm, the processor is configured to implement:
inputting the target image into a convolutional neural network model for training to obtain a plurality of algorithm parameters corresponding to the image processing algorithm and a probability value corresponding to each algorithm parameter; and determining an algorithm parameter corresponding to the image processing algorithm according to the probability value.
Optionally, when the processor implements the processing of the target image by using the image processing algorithm according to the algorithm parameter to obtain an output image, the processor is configured to implement:
processing the target image by adopting the image processing algorithm according to the algorithm parameter to obtain an output image; and taking the output image as the target image, returning to the image processing algorithms corresponding to the image processing algorithm indexes which are sequentially started according to the preset algorithm list, and continuously executing until the image processing algorithm indexes in the preset algorithm list are all started to obtain a final output image.
Optionally, the processor is configured to perform the processing on the target image according to the image processing algorithm and the corresponding algorithm parameter to obtain an output image, and is configured to perform:
and calling a function corresponding to the image processing algorithm to process the target image according to the algorithm parameter corresponding to the image processing algorithm so as to obtain an output image.
Optionally, when implementing the reading of the pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image, the processor is configured to implement:
inputting the target image into a convolutional neural network model for training to obtain an algorithm parameter corresponding to one image processing algorithm in the preset algorithm list; processing the target image according to the algorithm parameters and the corresponding image processing algorithm to obtain an output image; and taking the output image as the target image, returning to the step of inputting the target image into a convolutional neural network for training and continuing to execute until the image processing algorithms in the preset algorithm list are completely trained to obtain a final output image.
Optionally, the processor is further configured to implement:
acquiring image sample data, wherein the image sample data comprises a plurality of image data processed by an image processing algorithm and algorithm parameters corresponding to the image data; performing iterative training according to the image sample data based on a convolutional neural network to obtain a convolutional neural network model, wherein the output parameters of the convolutional neural network model are algorithm parameters corresponding to an image processing algorithm; using the algorithm parameters corresponding to the image data as calibration data to verify the convolutional neural network model; and saving the convolution neural network model passing the verification as a pre-trained convolution neural network model.
Optionally, when the processor verifies the convolutional neural network model by using the algorithm parameter corresponding to the image data as calibration data, the processor is configured to implement:
setting algorithm parameters corresponding to the image data as calibration data; calculating the difference value of the calibration data and the algorithm parameter corresponding to the image processing algorithm in the output parameter as a gap loss value; judging whether the gap loss value meets a preset condition, wherein the preset condition is a condition for measuring the accuracy of the convolutional neural network model; and when the gap loss value meets the preset condition, judging that the convolutional neural network model passes the verification.
Optionally, when the processor determines whether the gap loss value meets a preset condition, the processor is configured to:
monitoring the change value of the gap loss value corresponding to each iterative training; and if the variation value of the gap loss value corresponding to each iterative training is within a preset range, judging that the gap loss value meets the preset condition.
Optionally, when the processor determines whether the gap loss value meets a preset condition, the processor is configured to:
when the gap loss value corresponding to each iterative training is reduced, judging whether the gap loss value is smaller than a preset value or not; and if the gap loss value is smaller than the preset value, judging that the gap loss value meets the preset condition.
Optionally, before implementing the convolutional neural network based on the iterative training according to the image sample data to obtain a convolutional neural network model, the processor is further configured to implement:
quantizing the algorithm parameters in the image sample data to obtain quantized image sample data; correspondingly, the iteratively training based on the convolutional neural network according to the image sample data to obtain a convolutional neural network model includes: and performing iterative training according to the quantized image sample data based on a convolutional neural network algorithm to obtain a convolutional neural network model.
In an alternative embodiment, the convolutional neural network architecture of an embodiment of the present invention is described in comparison with a reinforcement learning neural network architecture, as shown in figs. 10(a) and 10(b). Fig. 10(a) shows a reinforcement learning neural network architecture: it has more convolutional and pooling layers, and its output contains a plurality of training parameters (not shown) corresponding to different training results for the output pictures. As a variant, a reinforcement learning neural network may also output one parameter at a time, but the network itself remains large (the number M of convolutional and pooling layers is large), and on a platform with a low hardware configuration there may be problems such as long loading times and slow response. Fig. 10(b) shows a neural network architecture of an embodiment of the present invention, composed of K independent convolutional sub-networks. Since each sub-network targets only one parameter, its volume is small (each sub-network has few convolutional and pooling layers); according to the index, only one sub-network needs to be loaded in memory at a time, with the other networks in an unloaded state. Of course, if hardware resources allow, the next sub-network to be computed can be kept in a preloaded state.
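The one-sub-network-in-memory policy of fig. 10(b) can be sketched as a loader that drops the current sub-network before loading the next one requested by index. The class and method names are illustrative; a real implementation would load serialized model files from storage.

```python
class SubNetworkRunner:
    """Keep at most one of the K parameter-specific sub-networks loaded,
    as in fig. 10(b); the rest stay unloaded until their index comes up."""

    def __init__(self, load_fn):
        self.load_fn = load_fn       # loads a sub-network by its algorithm index
        self.current = None
        self.current_index = None

    def predict(self, index, image):
        if index != self.current_index:
            self.current = None                  # drop the old sub-network first
            self.current = self.load_fn(index)   # load only the one needed now
            self.current_index = index
        return self.current(image)
```

Consecutive calls with the same index reuse the loaded sub-network; with sufficient hardware resources, a variant could instead preload the next sub-network in the background, as the text notes.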
As shown in fig. 11(a), the processor performs the neural network operation and the image processing in two parallel threads, both of which consume considerable resources. If a general neural network, such as a reinforcement learning neural network, is used, it occupies a large volume after being loaded into memory, leaving relatively few system resources for the image processing; on a platform with tight computational resources this often causes long response times, and may even lead to a crash or an unresponsive program.
As shown in fig. 11(b), the processor likewise performs the neural network operation and the image processing in two parallel threads. With the smaller neural network of the present invention, the system only needs to load the currently processed sub-network and, at most, simultaneously preload the next sub-network to be processed, so more computing power is left for the image processing and the response speed of the operation is increased.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the image processing method provided in the foregoing embodiments.
The computer-readable storage medium may be an internal storage unit of the terminal device according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal device. The computer-readable storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the terminal device.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
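The processing loop recited in the claims below, where for each algorithm index a model predicts a parameter, the corresponding algorithm transforms the image, and the output becomes the next target image, can be sketched as follows. All function names here are illustrative assumptions, not from the patent.

```python
# Illustrative sketch of the claimed pipeline: iterate over the preset
# algorithm list, predict one parameter per algorithm, apply the algorithm,
# and feed the output image back in as the next target image.

def predict_param(model, image):
    return 1.0  # placeholder for the convolutional neural network inference

def apply_algorithm(name, image, param):
    # Placeholder transform: record which step ran with which parameter.
    return image + [(name, param)]

def run_pipeline(target_image, preset_list, models):
    for name in preset_list:  # first-in first-out order of the algorithm indexes
        param = predict_param(models.get(name), target_image)
        target_image = apply_algorithm(name, target_image, param)  # output -> next input
    return target_image

final = run_pipeline([], ["white_balance", "contrast"], {})
```

The chained output-as-input structure is why the list order matters: each algorithm sees the image as already adjusted by every algorithm indexed before it.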

Claims (57)

1. An image processing method, comprising:
acquiring a target image;
acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes;
and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
2. The image processing method according to claim 1, wherein the acquiring the target image includes:
acquiring an image to be processed and an image format of the image to be processed;
judging whether the image format of the image to be processed is a RAW format;
and if the image format of the image to be processed is the RAW format, acquiring the image to be processed as a target image.
3. The image processing method according to claim 2, wherein after determining whether the image format of the image to be processed is a RAW format, the method further comprises:
and if the image format of the image to be processed is not the RAW format, inquiring and acquiring the RAW format image corresponding to the image to be processed as the target image.
4. The image processing method according to any one of claims 1 to 3, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes arranged in a stack.
5. The image processing method according to claim 4, further comprising, before said obtaining the preset algorithm list:
acquiring a preset algorithm list, wherein the algorithm list comprises a plurality of image processing algorithm indexes which are arranged in sequence;
and storing the image processing algorithm indexes in the algorithm list in a first-in first-out stack form to generate a preset algorithm list.
6. The image processing method according to any one of claims 1 to 5, wherein the plurality of image processing algorithm indexes include a color temperature adjustment algorithm index, a color setting algorithm index, an exposure adjustment algorithm index, a contrast adjustment algorithm index, a highlight restoration algorithm index, a low-light compensation algorithm index, a white balance algorithm index, a sharpness adjustment algorithm index, a defogging algorithm index, a natural saturation adjustment algorithm index, a saturation adjustment algorithm index, and a tone curve algorithm index.
7. The image processing method according to claim 1, wherein the reading a pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image comprises:
sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list;
determining algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model;
and processing the target image by adopting the image processing algorithm according to the algorithm parameters to obtain an output image.
8. The image processing method according to claim 7, wherein the sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list comprises:
and determining an image processing algorithm index in the preset algorithm list and an image processing algorithm corresponding to the image processing algorithm index.
9. The image processing method according to claim 8, wherein said determining an image processing algorithm index in said preset algorithm list comprises:
and sequentially determining one image processing algorithm index in the preset algorithm list according to the arrangement sequence of the plurality of image processing algorithm indexes in the preset algorithm list.
10. The image processing method according to claim 7, wherein the determining algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model comprises:
and inputting the target image into the convolutional neural network model for training to obtain algorithm parameters corresponding to the image processing algorithm.
11. The image processing method according to claim 10, wherein the inputting the target image into a convolutional neural network model for training to obtain algorithm parameters corresponding to the image processing algorithm comprises:
inputting the target image into a convolutional neural network model for training to obtain a plurality of algorithm parameters corresponding to the image processing algorithm and a probability value corresponding to each algorithm parameter;
and determining an algorithm parameter corresponding to the image processing algorithm according to the probability value.
12. The image processing method according to any one of claims 7 to 11, wherein the processing the target image with the image processing algorithm according to the algorithm parameter to obtain an output image comprises:
processing the target image by adopting the image processing algorithm according to the algorithm parameter to obtain an output image;
and taking the output image as the target image, returning to the step of sequentially starting the image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list, and continuing execution until all the image processing algorithm indexes in the preset algorithm list have been started, so as to obtain a final output image.
13. The image processing method of claim 12, wherein the processing the target image according to the image processing algorithm and corresponding algorithm parameters to obtain an output image comprises:
and calling a function corresponding to the image processing algorithm to process the target image according to the algorithm parameter corresponding to the image processing algorithm so as to obtain an output image.
14. The image processing method according to claim 1, wherein the reading a pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image comprises:
inputting the target image into a convolutional neural network model for training to obtain an algorithm parameter corresponding to one image processing algorithm in the preset algorithm list;
processing the target image according to the algorithm parameters and the corresponding image processing algorithm to obtain an output image;
and taking the output image as the target image, returning to the step of inputting the target image into a convolutional neural network for training and continuing to execute until the image processing algorithms in the preset algorithm list are completely trained to obtain a final output image.
15. The image processing method according to claim 1, further comprising:
acquiring image sample data, wherein the image sample data comprises a plurality of image data processed by an image processing algorithm and algorithm parameters corresponding to the image data;
performing iterative training according to the image sample data based on a convolutional neural network to obtain a convolutional neural network model, wherein the output parameters of the convolutional neural network model are algorithm parameters corresponding to an image processing algorithm;
using the algorithm parameters corresponding to the image data as calibration data to verify the convolutional neural network model;
and saving the convolutional neural network model that passes the verification as a pre-trained convolutional neural network model.
16. The image processing method according to claim 15, wherein the verifying the convolutional neural network model by using the algorithm parameter corresponding to the image data as calibration data comprises:
setting algorithm parameters corresponding to the image data as calibration data;
calculating the difference value of the calibration data and the algorithm parameter corresponding to the image processing algorithm in the output parameter as a gap loss value;
judging whether the gap loss value meets a preset condition, wherein the preset condition is a condition for measuring the accuracy of the convolutional neural network model;
and when the gap loss value meets the preset condition, judging that the convolutional neural network model passes the verification.
17. The image processing method according to claim 16, wherein the determining whether the gap loss value satisfies a predetermined condition includes:
monitoring the change value of the gap loss value corresponding to each iterative training;
and if the variation value of the gap loss value corresponding to each iterative training is within a preset range, judging that the gap loss value meets the preset condition.
18. The image processing method according to claim 16, wherein the determining whether the gap loss value satisfies a predetermined condition includes:
when the gap loss value corresponding to each iterative training decreases, judging whether the gap loss value is smaller than a preset value;
and if the gap loss value is smaller than the preset value, judging that the gap loss value meets the preset condition.
19. The image processing method of claim 15, wherein before performing iterative training based on the convolutional neural network according to the image sample data to obtain a convolutional neural network model, the method further comprises:
quantizing the algorithm parameters in the image sample data to obtain quantized image sample data;
wherein the performing iterative training based on a convolutional neural network according to the image sample data to obtain a convolutional neural network model comprises: performing iterative training based on a convolutional neural network algorithm according to the quantized image sample data to obtain a convolutional neural network model.
20. A terminal device, characterized in that the terminal device comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a target image;
acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes;
and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
21. The terminal device of claim 20, wherein the processor, in performing the acquiring the target image, is configured to perform:
acquiring an image to be processed and an image format of the image to be processed;
judging whether the image format of the image to be processed is a RAW format;
and if the image format of the image to be processed is the RAW format, acquiring the image to be processed as a target image.
22. The terminal device of claim 21, wherein the processor, after implementing the determining whether the image format of the image to be processed is a RAW format, is further configured to implement:
and if the image format of the image to be processed is not the RAW format, inquiring and acquiring the RAW format image corresponding to the image to be processed as the target image.
23. The terminal device according to any of claims 20 to 22, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes arranged in a stack.
24. The terminal device of claim 23, wherein the processor, prior to implementing the obtaining the preset algorithm list, is further configured to implement:
acquiring a preset algorithm list, wherein the algorithm list comprises a plurality of image processing algorithm indexes which are arranged in sequence;
and storing the image processing algorithm indexes in the algorithm list in a first-in first-out stack form to generate a preset algorithm list.
25. The terminal device of any of claims 20 to 24, wherein the plurality of image processing algorithm indexes include a color temperature adjustment algorithm index, a color setting algorithm index, an exposure adjustment algorithm index, a contrast adjustment algorithm index, a highlight restoration algorithm index, a low-light compensation algorithm index, a white balance algorithm index, a sharpness adjustment algorithm index, a defogging algorithm index, a natural saturation adjustment algorithm index, a saturation adjustment algorithm index, and a tone curve algorithm index.
26. The terminal device of claim 20, wherein the processor, when implementing the reading of a pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image, is configured to implement:
sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list;
determining algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model;
and processing the target image by adopting the image processing algorithm according to the algorithm parameters to obtain an output image.
27. The terminal device according to claim 26, wherein the processor, when implementing the sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list, is configured to implement:
and determining an image processing algorithm index in the preset algorithm list and an image processing algorithm corresponding to the image processing algorithm index.
28. The terminal device of claim 27, wherein the processor, when implementing the determining of one image processing algorithm index in the preset algorithm list, is configured to implement:
and sequentially determining one image processing algorithm index in the preset algorithm list according to the arrangement sequence of the plurality of image processing algorithm indexes in the preset algorithm list.
29. The terminal device of claim 26, wherein the processor, when implementing the determining of the algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model, is configured to implement:
and inputting the target image into the convolutional neural network model for training to obtain algorithm parameters corresponding to the image processing algorithm.
30. The terminal device of claim 29, wherein the processor, when implementing the inputting of the target image into a convolutional neural network model for training to obtain algorithm parameters corresponding to the image processing algorithm, is configured to implement:
inputting the target image into a convolutional neural network model for training to obtain a plurality of algorithm parameters corresponding to the image processing algorithm and a probability value corresponding to each algorithm parameter;
and determining an algorithm parameter corresponding to the image processing algorithm according to the probability value.
31. The terminal device according to any one of claims 26 to 30, wherein the processor, when implementing the processing of the target image with the image processing algorithm according to the algorithm parameters to obtain an output image, is configured to implement:
processing the target image by adopting the image processing algorithm according to the algorithm parameter to obtain an output image;
and taking the output image as the target image, returning to the step of sequentially starting the image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list, and continuing execution until all the image processing algorithm indexes in the preset algorithm list have been started, so as to obtain a final output image.
32. The terminal device of claim 31, wherein the processor, in performing the processing of the target image according to the image processing algorithm and corresponding algorithm parameters to obtain an output image, is configured to perform:
and calling a function corresponding to the image processing algorithm to process the target image according to the algorithm parameter corresponding to the image processing algorithm so as to obtain an output image.
33. The terminal device of claim 20, wherein the processor, when implementing the reading of a pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image, is configured to implement:
inputting the target image into a convolutional neural network model for training to obtain an algorithm parameter corresponding to one image processing algorithm in the preset algorithm list;
processing the target image according to the algorithm parameters and the corresponding image processing algorithm to obtain an output image;
and taking the output image as the target image, returning to the step of inputting the target image into a convolutional neural network for training and continuing to execute until the image processing algorithms in the preset algorithm list are completely trained to obtain a final output image.
34. The terminal device of claim 20, wherein the processor is further configured to implement:
acquiring image sample data, wherein the image sample data comprises a plurality of image data processed by an image processing algorithm and algorithm parameters corresponding to the image data;
performing iterative training according to the image sample data based on a convolutional neural network to obtain a convolutional neural network model, wherein the output parameters of the convolutional neural network model are algorithm parameters corresponding to an image processing algorithm;
using the algorithm parameters corresponding to the image data as calibration data to verify the convolutional neural network model;
and saving the convolutional neural network model that passes the verification as a pre-trained convolutional neural network model.
35. The terminal device according to claim 34, wherein the processor, when implementing the verification of the convolutional neural network model using the algorithm parameter corresponding to the image data as calibration data, is configured to implement:
setting algorithm parameters corresponding to the image data as calibration data;
calculating the difference value of the calibration data and the algorithm parameter corresponding to the image processing algorithm in the output parameter as a gap loss value;
judging whether the gap loss value meets a preset condition, wherein the preset condition is a condition for measuring the accuracy of the convolutional neural network model;
and when the gap loss value meets the preset condition, judging that the convolutional neural network model passes the verification.
36. The terminal device of claim 35, wherein the processor, in performing the determining whether the gap loss value satisfies a predetermined condition, is configured to perform:
monitoring the change value of the gap loss value corresponding to each iterative training;
and if the variation value of the gap loss value corresponding to each iterative training is within a preset range, judging that the gap loss value meets the preset condition.
37. The terminal device of claim 35, wherein the processor, in performing the determining whether the gap loss value satisfies a predetermined condition, is configured to perform:
when the gap loss value corresponding to each iterative training decreases, judging whether the gap loss value is smaller than a preset value;
and if the gap loss value is smaller than the preset value, judging that the gap loss value meets the preset condition.
38. The terminal device of claim 34, wherein the processor, prior to implementing the convolutional neural network-based iterative training from the image sample data to obtain a convolutional neural network model, is further configured to implement:
quantizing the algorithm parameters in the image sample data to obtain quantized image sample data;
wherein the performing iterative training based on a convolutional neural network according to the image sample data to obtain a convolutional neural network model comprises: performing iterative training based on a convolutional neural network algorithm according to the quantized image sample data to obtain a convolutional neural network model.
39. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement:
acquiring a target image;
acquiring a preset algorithm list, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes;
and reading a pre-trained convolutional neural network model according to the sequence of the image processing algorithm indexes in the preset algorithm list so as to process the target image.
40. The computer-readable storage medium of claim 39, wherein the processor, in implementing the acquiring the target image, is configured to implement:
acquiring an image to be processed and an image format of the image to be processed;
judging whether the image format of the image to be processed is a RAW format;
and if the image format of the image to be processed is the RAW format, acquiring the image to be processed as a target image.
41. The computer-readable storage medium of claim 40, wherein the processor, after implementing the determining whether the image format of the image to be processed is a RAW format, is further configured to implement:
and if the image format of the image to be processed is not the RAW format, inquiring and acquiring the RAW format image corresponding to the image to be processed as the target image.
42. The computer-readable storage medium according to any one of claims 39 to 41, wherein the preset algorithm list comprises a plurality of image processing algorithm indexes arranged in a stack.
43. The computer-readable storage medium of claim 42, wherein the processor, prior to implementing the obtaining the preset algorithm list, is further configured to implement:
acquiring a preset algorithm list, wherein the algorithm list comprises a plurality of image processing algorithm indexes which are arranged in sequence;
and storing the image processing algorithm indexes in the algorithm list in a first-in first-out stack form to generate a preset algorithm list.
44. The computer-readable storage medium according to any one of claims 39 to 43, wherein the plurality of image processing algorithm indexes include a color temperature adjustment algorithm index, a color setting algorithm index, an exposure adjustment algorithm index, a contrast adjustment algorithm index, a highlight restoration algorithm index, a low-light compensation algorithm index, a white balance algorithm index, a sharpness adjustment algorithm index, a defogging algorithm index, a natural saturation adjustment algorithm index, a saturation adjustment algorithm index, and a tone curve algorithm index.
45. The computer-readable storage medium of claim 39, wherein the processor, when implementing the reading of a pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image, is configured to implement:
sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list;
determining algorithm parameters corresponding to the image processing algorithm according to the convolutional neural network model;
and processing the target image by adopting the image processing algorithm according to the algorithm parameters to obtain an output image.
46. The computer-readable storage medium of claim 45, wherein the processor, when implementing the sequentially starting image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list, is configured to implement:
and determining an image processing algorithm index in the preset algorithm list and an image processing algorithm corresponding to the image processing algorithm index.
47. The computer-readable storage medium of claim 46, wherein the processor, when implementing the determining of one image processing algorithm index in the preset algorithm list, is configured to implement:
and sequentially determining one image processing algorithm index in the preset algorithm list according to the arrangement sequence of the plurality of image processing algorithm indexes in the preset algorithm list.
48. The computer-readable storage medium of claim 45, wherein the processor, in performing the determining the algorithm parameters corresponding to the image processing algorithm from the convolutional neural network model, is configured to perform:
and inputting the target image into the convolutional neural network model for training to obtain algorithm parameters corresponding to the image processing algorithm.
49. The computer-readable storage medium of claim 48, wherein the processor, in implementing the input of the target image into the convolutional neural network model for training to obtain the algorithm parameters corresponding to the image processing algorithm, is configured to implement:
inputting the target image into a convolutional neural network model for training to obtain a plurality of algorithm parameters corresponding to the image processing algorithm and a probability value corresponding to each algorithm parameter;
and determining an algorithm parameter corresponding to the image processing algorithm according to the probability value.
50. The computer-readable storage medium according to any one of claims 45 to 49, wherein the processor, when implementing the processing of the target image with the image processing algorithm according to the algorithm parameters to obtain an output image, is configured to implement:
processing the target image by adopting the image processing algorithm according to the algorithm parameter to obtain an output image;
and taking the output image as the target image, returning to the step of sequentially starting the image processing algorithms corresponding to the image processing algorithm indexes according to the preset algorithm list, and continuing execution until all the image processing algorithm indexes in the preset algorithm list have been started, so as to obtain a final output image.
51. The computer-readable storage medium of claim 50, wherein the processor, in performing the processing of the target image according to the image processing algorithm and corresponding algorithm parameters to obtain an output image, is configured to perform:
and calling a function corresponding to the image processing algorithm to process the target image according to the algorithm parameter corresponding to the image processing algorithm so as to obtain an output image.
52. The computer-readable storage medium of claim 39, wherein the processor, when implementing the reading of a pre-trained convolutional neural network model according to the sequence of the plurality of image processing algorithm indexes in the preset algorithm list to process the target image, is configured to implement:
inputting the target image into a convolutional neural network model for training to obtain an algorithm parameter corresponding to one image processing algorithm in the preset algorithm list;
processing the target image according to the algorithm parameters and the corresponding image processing algorithm to obtain an output image;
and taking the output image as the target image, returning to the step of inputting the target image into a convolutional neural network for training and continuing to execute until the image processing algorithms in the preset algorithm list are completely trained to obtain a final output image.
53. The computer-readable storage medium of claim 39, wherein the processor is further configured to:
acquiring image sample data, wherein the image sample data comprises a plurality of image data processed by an image processing algorithm and algorithm parameters corresponding to the image data;
performing iterative training according to the image sample data based on a convolutional neural network to obtain a convolutional neural network model, wherein the output parameters of the convolutional neural network model are algorithm parameters corresponding to an image processing algorithm;
using the algorithm parameters corresponding to the image data as calibration data to verify the convolutional neural network model;
and saving the convolutional neural network model that passes the verification as a pre-trained convolutional neural network model.
54. The computer-readable storage medium of claim 53, wherein the processor, in performing the verification of the convolutional neural network model using the algorithm parameters corresponding to the image data as calibration data, is configured to perform:
setting algorithm parameters corresponding to the image data as calibration data;
calculating the difference value of the calibration data and the algorithm parameter corresponding to the image processing algorithm in the output parameter as a gap loss value;
judging whether the gap loss value meets a preset condition, wherein the preset condition is a condition for measuring the accuracy of the convolutional neural network model;
and when the gap loss value meets the preset condition, judging that the convolutional neural network model passes the verification.
55. The computer-readable storage medium of claim 54, wherein the processor, in performing the determining whether the gap loss value satisfies a predetermined condition, is configured to perform:
monitoring the change value of the gap loss value corresponding to each iterative training;
and if the variation value of the gap loss value corresponding to each iterative training is within a preset range, judging that the gap loss value meets the preset condition.
56. The computer-readable storage medium of claim 54, wherein the processor, in performing the determining whether the gap loss value satisfies a predetermined condition, is configured to perform:
when the gap loss value corresponding to each iterative training decreases, judging whether the gap loss value is smaller than a preset value;
and if the gap loss value is smaller than the preset value, judging that the gap loss value meets the preset condition.
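The alternative criterion of claim 56, a decreasing gap loss falling below a preset value, could be sketched like this; the preset value is an illustrative assumption.

```python
def below_preset(loss_history, preset_value=0.01):
    # Claim 56: when the gap loss decreases over successive iterations,
    # check whether it has fallen below the preset value.
    if len(loss_history) < 2 or loss_history[-1] >= loss_history[-2]:
        return False                      # not decreasing yet
    return loss_history[-1] < preset_value
```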
57. The computer-readable storage medium of claim 53, wherein the processor, prior to performing the iterative training based on the convolutional neural network according to the image sample data to obtain a convolutional neural network model, is further configured to implement:
quantizing the algorithm parameters in the image sample data to obtain quantized image sample data;
wherein the performing iterative training based on the convolutional neural network according to the image sample data to obtain a convolutional neural network model comprises: performing iterative training based on the convolutional neural network according to the quantized image sample data to obtain the convolutional neural network model.
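Claim 57 does not specify a quantization scheme; uniform min-max quantization of the algorithm parameters is one common choice and could look like the following sketch (the scheme and the 8-bit width are assumptions).

```python
import numpy as np

def quantize_params(params, n_bits=8):
    # Uniformly quantize algorithm parameters to 2**n_bits levels and
    # map them back to the original range (scheme and bit width are
    # assumptions; the claim only requires quantization).
    p = np.asarray(params, dtype=np.float64)
    lo, hi = p.min(), p.max()
    if hi == lo:
        return p.copy()                   # constant parameters: nothing to do
    levels = 2 ** n_bits - 1
    q = np.round((p - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo
```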
CN201880071017.2A 2018-12-18 2018-12-18 Image processing method, terminal device and storage medium Pending CN111373436A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/121796 WO2020124374A1 (en) 2018-12-18 2018-12-18 Image processing method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN111373436A true CN111373436A (en) 2020-07-03

Family

ID=71100102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880071017.2A Pending CN111373436A (en) 2018-12-18 2018-12-18 Image processing method, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN111373436A (en)
WO (1) WO2020124374A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153343A * 2020-09-25 2020-12-29 Beijing Baidu Netcom Science and Technology Co., Ltd. Elevator safety monitoring method and device, monitoring camera and storage medium
CN112163468A * 2020-09-11 2021-01-01 Zhejiang Dahua Technology Co., Ltd. Image processing method and device based on multiple threads
CN112507833A * 2020-11-30 2021-03-16 Beijing Baidu Netcom Science and Technology Co., Ltd. Face recognition and model training method, device, equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111726592B * 2020-06-30 2022-06-21 Beijing SenseTime Technology Development Co., Ltd. Method and apparatus for obtaining architecture of image signal processor

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101523888A * 2006-10-13 2009-09-02 Apple Inc. System and method for processing images using predetermined tone reproduction curves
CN106446782A * 2016-08-29 2017-02-22 Beijing Xiaomi Mobile Software Co., Ltd. Image identification method and device
CN106934426A * 2015-12-29 2017-07-07 Samsung Electronics Co., Ltd. Method and apparatus for neural network based image signal processing
CN107391244A * 2017-07-11 2017-11-24 Chongqing University of Posts and Telecommunications Internet of Things operating system scheduling method based on a hybrid scheduling model
CN107820069A * 2017-11-16 2018-03-20 Anhui Yilian Intelligent Co., Ltd. ISP adjustment method for video monitoring equipment
CN107943571A * 2017-11-14 2018-04-20 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Background application management and control method, device, storage medium and electronic equipment
CN108876745A * 2018-06-27 2018-11-23 Xiamen Meituzhijia Technology Co., Ltd. Image processing method and device
CN108985147A * 2018-05-31 2018-12-11 Chengdu Tongjia Youbo Technology Co., Ltd. Object detection method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101321270B * 2008-07-16 2011-06-29 National University of Defense Technology Monitoring system and method for real-time optimized image
US20140307930A1 * 2013-04-15 2014-10-16 Drvision Technologies Llc Teachable object contour mapping for biology image region partition
CN108364267B * 2018-02-13 2019-07-05 Beijing Megvii Technology Co., Ltd. Image processing method, device and equipment
CN108965723A * 2018-09-30 2018-12-07 Yicheng Gaoke (Dalian) Technology Co., Ltd. Raw image processing method, image processor and image sensor

Also Published As

Publication number Publication date
WO2020124374A1 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
CN111373436A (en) Image processing method, terminal device and storage medium
JP6267224B2 (en) Method and system for detecting and selecting the best pictures
US11151700B2 (en) Image processing method, terminal, and non-transitory computer-readable storage medium
US11107198B2 (en) Method and apparatus for incorporating noise pattern into image on which bokeh processing has been performed
US10580122B2 (en) Method and system for image enhancement
CN114385280B (en) Parameter determination method and electronic equipment
CN107343188A (en) image processing method, device and terminal
CN110852385A (en) Image processing method, device, equipment and storage medium
CN104535178A (en) Light strength value detecting method and terminal
CN110689496B (en) Method and device for determining noise reduction model, electronic equipment and computer storage medium
CN116957948A (en) Image processing method, electronic product and storage medium
CN105354228A (en) Similar image searching method and apparatus
CN113011328B (en) Image processing method, device, electronic equipment and storage medium
CN110727810A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111726592B (en) Method and apparatus for obtaining architecture of image signal processor
US10970587B2 (en) Electronic device for notifying of update of image signal processing and method for operating the same
US9311342B1 (en) Tree based image storage system
CN111476731A (en) Image correction method, image correction device, storage medium and electronic equipment
CN111179158A (en) Image processing method, image processing apparatus, electronic device, and medium
CN116645282A (en) Data processing method and system based on big data
CN115797267A (en) Image quality evaluation method, system, electronic device, and storage medium
CN116385369A (en) Depth image quality evaluation method and device, electronic equipment and storage medium
CN115660991A (en) Model training method, image exposure correction method, device, equipment and medium
JP2021047653A (en) Learning model generation device, image correction device, learning model generation program, and image correction program
US20200311305A1 (en) Electronic device and method for securing personal information included in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200703